conv3 => reshape => FC1 (720 -> 4096)
[64, 3, 3, 80] => [64, 720] => [64, 4096]
The following code performs the Conv-to-FC conversion shown above:
import numpy as np
import tensorflow as tf

# Flatten conv_3 from [64, 3, 3, 80] to [64, 720] (3 * 3 * 80 = 720)
shape = int(np.prod(conv_3.get_shape()[1:]))
conv_3_flat = tf.reshape(conv_3, [-1, shape])

# Fully connected layer: 720 -> 4096
fc1w = tf.Variable(tf.truncated_normal([shape, 4096], dtype=tf.float32, stddev=1e-1),
                   name='weights')
fc1b = tf.Variable(tf.constant(1.0, shape=[4096], dtype=tf.float32),
                   trainable=True, name='biases')
fc1 = tf.nn.bias_add(tf.matmul(conv_3_flat, fc1w), fc1b)
fc1 = tf.nn.relu(fc1)
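As a quick sanity check, here is a minimal TF 1.x sketch; conv_3 below is just a stand-in placeholder with the [64, 3, 3, 80] shape from the example above, not a real conv layer:

import numpy as np
import tensorflow as tf

# Stand-in for the real conv layer output: [batch, height, width, channels]
conv_3 = tf.placeholder(tf.float32, [64, 3, 3, 80])
shape = int(np.prod(conv_3.get_shape()[1:]))  # 3 * 3 * 80 = 720
conv_3_flat = tf.reshape(conv_3, [-1, shape])
print(conv_3_flat.get_shape())  # (64, 720)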
Hope this helps.
Also, here is a simple MNIST model (taken from: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/convolutional_network.py):
def conv_net(x, weights, biases, dropout):
    # Reshape input picture
    x = tf.reshape(x, shape=[-1, 28, 28, 1])

    # Convolution Layer
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    # Max Pooling (down-sampling)
    conv1 = maxpool2d(conv1, k=2)

    # Convolution Layer
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    # Max Pooling (down-sampling)
    conv2 = maxpool2d(conv2, k=2)

    # Fully connected layer
    # Reshape conv2 output to fit fully connected layer input
    fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    # Apply Dropout
    fc1 = tf.nn.dropout(fc1, dropout)

    # Output, class prediction
    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
    return out
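For completeness, the conv2d / maxpool2d helpers and the weights / biases dictionaries that conv_net expects come from the same linked file; the sketch below reproduces them essentially as in that example. Note that weights['wd1'] has 7*7*64 rows: two k=2 poolings shrink the 28x28 input to 7x7 with 64 channels, and that product is exactly the flattened Conv-to-FC size, mirroring the 720 -> 4096 example above.

import tensorflow as tf

def conv2d(x, W, b, strides=1):
    # Conv2D wrapper with bias and ReLU activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME')

n_classes = 10  # MNIST digit classes
weights = {
    'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
    'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
    'wd1': tf.Variable(tf.random_normal([7 * 7 * 64, 1024])),  # Conv-to-FC
    'out': tf.Variable(tf.random_normal([1024, n_classes]))
}
biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}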