TensorFlow tensor not reshaping correctly
I have created a script modelled on the TensorFlow Deep MNIST for Experts tutorial described here. However, my script fails early, when it tries to reshape the x tensor of shape [None, 784] to shape [-1, 28, 28, 1]. I am confused, because the tutorial does the same thing successfully, yet for me it throws the following error:
ValueError: Cannot feed value of shape (100, 784) for Tensor 'Reshape:0', which has shape '(?, 28, 28, 1)'
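For reference, here is a minimal sketch (not part of my original script, and only my guess at what is going on) that seems to reproduce the same error when the Python name x ends up pointing at the reshape output instead of the original placeholder:
import numpy as np
import tensorflow as tf  # TensorFlow 1.x

x = tf.placeholder(tf.float32, [None, 784])   # input placeholder, shape (?, 784)
x = tf.reshape(x, [-1, 28, 28, 1])            # the name x now refers to 'Reshape:0'

with tf.Session() as sess:
    # Feeding through the rebound name targets the reshaped tensor, so a
    # (100, 784) batch no longer matches its (?, 28, 28, 1) shape.
    sess.run(x, feed_dict={x: np.zeros((100, 784), dtype=np.float32)})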
My complete Python script is as follows:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf
x = tf.placeholder(dtype = tf.float32, shape = [None,784])
y_ = tf.placeholder(dtype = tf.float32, shape = [None, 10])
W1 = tf.Variable(tf.random_normal([5,5,1,32]))
b1 = tf.Variable(tf.random_normal([32]))
This is where I suspect the error occurs:
x = tf.reshape(x,[-1,28,28,1])
output1 = tf.add(tf.nn.conv2d(x,W1, strides =[1,1,1,1], padding = "SAME"), b1)
output1 = tf.nn.relu(output1)
output1 = tf.nn.max_pool(output1, ksize = [1,2,2,1], strides = [1,2,2,1], padding = "SAME")
W2 = tf.Variable(tf.random_normal([5,5,32,64]))
b2 = tf.Variable(tf.random_normal([64]))
output2 = tf.add(tf.nn.conv2d(output1,W2, strides = [1,1,1,1], padding = "SAME"), b2)
output2 = tf.nn.relu(output2)
output2 = tf.nn.max_pool(output2, ksize = [1,2,2,1], strides = [1,2,2,1], padding = "SAME")
output2 = tf.reshape(output2, [-1, 7*7*64])
W_fc = tf.Variable(tf.random_normal([7*7*64,1024]))
b_fc = tf.Variable(tf.random_normal([1024]))
output3 = tf.add(tf.matmul(output2,W_fc), b_fc)
output3 = tf.nn.relu(output3)
output3 = tf.nn.dropout(output3, keep_prob = 0.85)
W_final = tf.Variable(tf.random_normal([1024,10]))
b_final = tf.Variable(tf.random_normal([10]))
predictions = tf.add(tf.matmul(output3,W_final), b_final)
cost = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(labels = y_ ,logits = predictions))
optimiser = tf.train.AdamOptimizer(1e-4).minimize(cost)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for i in range(7000):
    batchx_s, batchy_s = mnist.train.next_batch(100)
    sess.run(optimiser, feed_dict = {x:batchx_s, y_:batchy_s})
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(20000):
        batch = mnist.train.next_batch(50)
        optimiser.run(feed_dict={x: batch[0], y_: batch[1]})
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images,y_: mnist.test.labels}))
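For completeness, here is a small check I could add (a sketch only, not part of the script above; the default tensor names "Placeholder:0" and "Reshape:0" are my assumption about what TF 1.x assigns) to see which graph tensor the name x points to after the reshape:
print(x)  # expected to show Tensor("Reshape:0", shape=(?, 28, 28, 1), dtype=float32)
print(tf.get_default_graph().get_tensor_by_name("Placeholder:0"))  # the original 784-wide input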
Can you show the code where you call the session's 'run' method, including the 'feed_dict' argument? – jdehesa