I am trying to build a CNN to distinguish cats from dogs, using data I got from Kaggle. I am hitting a dimension error in the CNN after the flattening layer.
The model parameters are as follows:
IMG_SIZE = 55
filter_size = 5
no_of_filters1 = 16
no_of_filters2 = 32
no_of_filters3 = 64
classes = 2

x = tf.placeholder(tf.float32, [None, IMG_SIZE, IMG_SIZE, 1])
y = tf.placeholder(tf.float32, [None, classes])

w1 = weights([filter_size, filter_size, 1, no_of_filters1])
w2 = weights([filter_size, filter_size, no_of_filters1, no_of_filters2])
w3 = weights([filter_size, filter_size, no_of_filters2, no_of_filters3])
wfc = weights([no_of_filters3, 625])  # <-- ERROR
w_o = weights([625, classes])
My CNN model:
def model(x, w1, w2, w3, w4, w_o):
    # Layer 1
    layer1 = tf.nn.conv2d(x, w1, strides=[1, 1, 1, 1], padding='SAME')
    layer1 = tf.nn.relu(layer1)
    layer1 = tf.nn.max_pool(layer1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # Layer 2
    layer2 = tf.nn.conv2d(layer1, w2, strides=[1, 1, 1, 1], padding='SAME')
    layer2 = tf.nn.relu(layer2)
    layer2 = tf.nn.max_pool(layer2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # Layer 3
    layer3 = tf.nn.conv2d(layer2, w3, strides=[1, 1, 1, 1], padding='SAME')
    layer3 = tf.nn.relu(layer3)
    layer3 = tf.nn.max_pool(layer3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # Flatten
    layer_shape = layer3.get_shape()
    num_features = layer_shape[1:4].num_elements()
    fc_layer = tf.reshape(layer3, [-1, num_features])
    fc_layer = tf.nn.relu(fc_layer)
    # Fully connected layers
    output_layer = tf.nn.relu(tf.matmul(fc_layer, w4))
    logits = tf.matmul(output_layer, w_o)
    return logits
The error raised is:
ValueError: Dimensions must be equal, but are 1024 and 64 for 'MatMul' (op: 'MatMul') with input shapes: [?,1024], [64,625].
Please kindly guide me.
Why did you set wfc to [64, 625]? The spatial size of the image changes as it passes through the filter and pooling layers, and that change depends on the filter specifications. You need to do the corresponding calculation to get the right weight shape. – Beta
@Beta Can you explain that calculation for my case? – vidit02100
Check this: [https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks-Part-2/]. It gives the formula for computing the dimensions based on the filter specifications. – Beta
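Following Beta's pointer, here is a minimal sketch (plain Python, not an authoritative fix) of that calculation for the model above. It assumes the convolutions use `padding='SAME'` with stride 1 (so they do not change the spatial size) and that only the three 2×2, stride-2, SAME-padded max-pool layers shrink it, each to `ceil(in / 2)`:

```python
import math

IMG_SIZE = 55        # input height/width from the question
no_of_filters3 = 64  # channels coming out of the last conv layer

# Each 2x2 max pool with stride 2 and SAME padding halves the
# spatial size, rounding up: out = ceil(in / 2).
size = IMG_SIZE
for _ in range(3):   # three pooling layers in the model
    size = math.ceil(size / 2)   # 55 -> 28 -> 14 -> 7

# Flattened feature count after layer3, i.e. what tf.reshape produces.
num_features = size * size * no_of_filters3
print(size, num_features)  # 7 3136
```

So for `IMG_SIZE = 55` the flattened tensor has 7 × 7 × 64 = 3136 features, and `wfc` would need shape `[3136, 625]` rather than `[64, 625]`: the `tf.reshape` in the model already flattens to `num_features`, so the first fully-connected weight matrix just has to match that number. (The `[?, 1024]` in the posted traceback corresponds to a 4 × 4 × 64 feature map, which suggests the run that produced it used a different input size; the formula above applies either way.)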