
I am running a convolutional neural network on the MNIST dataset with TensorFlow, but I am getting the following "placeholder missing" error:

tensorflow.python.framework.errors.InvalidArgumentError: You must feed a value for placeholder tensor 'x' with dtype float [[Node: x = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

x = tf.placeholder(tf.float32, [None, 784], name='x') # mnist data image of shape 28*28=784

I think I am feeding the value of x correctly through feed_dict, but the error says the value of placeholder x has not been fed.

Also, are there any other logical flaws in my code?

Any help would be greatly appreciated. Thanks.

import tensorflow as tf 
import numpy 
from tensorflow.examples.tutorials.mnist import input_data 

def conv2d(x, W): 
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') 

def max_pool_2x2(x): 
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], 
         strides=[1, 2, 2, 1], padding='SAME') 

def weight_variable(shape): 
    initial = tf.truncated_normal(shape, stddev=0.1) 
    return tf.Variable(initial) 

def bias_variable(shape): 
    initial = tf.constant(0.1, shape=shape) 
    return tf.Variable(initial) 


mnist = input_data.read_data_sets("/tmp/data/", one_hot=True) 

# Parameters 
learning_rate = 0.01 
training_epochs = 10 
batch_size = 100 
display_step = 1 

# tf Graph Input 
#x = tf.placeholder(tf.float32, [50, 784], name='x') # mnist data image of shape 28*28=784 
#y = tf.placeholder(tf.float32, [50, 10], name='y') # 0-9 digits recognition => 10 classes 

# Set model weights 
W = tf.Variable(tf.zeros([784, 10]), name="weights") 
b = tf.Variable(tf.zeros([10]), name="bias") 

W_conv1 = weight_variable([5, 5, 1, 32]) 
b_conv1 = bias_variable([32]) 


W_conv2 = weight_variable([5, 5, 32, 64]) 
b_conv2 = bias_variable([64]) 


W_fc1 = weight_variable([7 * 7 * 64, 1024]) 
b_fc1 = bias_variable([1024]) 

W_fc2 = weight_variable([1024, 10]) 
b_fc2 = bias_variable([10]) 

# Initializing the variables 
init = tf.initialize_all_variables() 

with tf.Session() as sess: 
    sess.run(init) 


    # Training cycle
    for i in range(1000):
        print i
        batch_xs, batch_ys = mnist.train.next_batch(50)

        x_image = tf.reshape(x, [-1, 28, 28, 1])

        h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
        h_pool1 = max_pool_2x2(h_conv1)

        h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
        h_pool2 = max_pool_2x2(h_conv2)

        h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
        h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

        y_conv = tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2)

        cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(y_conv), reduction_indices=[1]))
        sess.run(
            [cross_entropy, y_conv],
            feed_dict={x: batch_xs, y: batch_ys})

        correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y, 1))
        print correct_prediction.eval()
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

Why are the placeholders commented out? #x = tf.placeholder(tf.float32, [50, 784], name='x') #y = tf.placeholder(tf.float32, [50, 10], name='y') On which line do you get the error?

Answers

1

Why do you want to create placeholder variables? You should be able to use the output generated by mnist.train.next_batch(50) directly, provided that you move the computation of correct_prediction and accuracy into the model itself:

batch_xs, batch_ys = mnist.train.next_batch(50)
x_image = tf.reshape(batch_xs, [-1, 28, 28, 1])
...
cross_entropy = tf.reduce_mean(-tf.reduce_sum(batch_ys * tf.log(y_conv), reduction_indices=[1]))
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(batch_ys, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
loss, preds, predictions_correct, acc = sess.run([cross_entropy, y_conv, correct_prediction, accuracy])
print predictions_correct, acc
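Note that with this restructuring the reshape, loss, and accuracy ops are rebuilt from each new numpy batch inside the training loop, so the graph grows on every iteration; the alternative shown in the other answers keeps the x and y placeholders, builds the graph once, and feeds each batch through feed_dict.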

2

You are getting the error because you are trying to run eval() on correct_prediction. That tensor requires the batch inputs (x and y) in order to be evaluated. You can change it to:

print correct_prediction.eval(feed_dict={x: batch_xs, y: batch_ys}) 

However, as Benoit Steiner mentioned, you can easily pull it into your model.

On a more general note, you are not doing any optimization here, but maybe you just have not gotten around to that yet. As it stands, it will just print out bad predictions for a while. :)
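For reference, a minimal sketch of what adding an optimization step could look like, assuming the x and y placeholders are uncommented and the network (y_conv, cross_entropy) is built once before the session is created; the optimizer choice and logging interval here are only illustrative:

# Graph construction (run once, before the session).
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(50)
        # A single call updates the weights and reports the batch accuracy.
        _, acc = sess.run([train_step, accuracy],
                          feed_dict={x: batch_xs, y: batch_ys})
        if i % 100 == 0:
            print i, acc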

0

First of all, your x and y are commented out; if that is how they appear in your actual code, it is most likely the problem.

correct_prediction.eval() is equivalent to tf.Session.run(correct_prediction) (or in your case sess.run()), and therefore requires the same syntax. So it needs to be correct_prediction.eval(feed_dict={x: batch_xs, y: batch_ys}) in order to run. Be aware, though, that this is generally RAM-intensive and may cause your system to hang. Pulling the accuracy function into the model may be a good idea because of the RAM usage.

I do not see an optimization function making use of your cross entropy, but I have never tried not using one, so if it works, don't fix it. But if it ends up throwing an error, you might want to try:

optimizer = tf.train.AdamOptimizer().minimize(cross_entropy)

and replace 'cross_entropy' in

sess.run([cross_entropy, y_conv], feed_dict={x: batch_xs, y: batch_ys})

with 'optimizer', as in the sketch below.
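A minimal sketch of the substituted call, assuming the optimizer node above has been added to the graph before the session starts:

# Runs one Adam update step on the current batch.
sess.run([optimizer, y_conv], feed_dict={x: batch_xs, y: batch_ys})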

https://pythonprogramming.net/tensorflow-neural-network-session-machine-learning-tutorial/

Check the accuracy-evaluation part of that script.
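For illustration, a common way to do that final check (the pattern used by the standard TensorFlow MNIST examples) is roughly the following; it assumes the x and y placeholders and the accuracy op defined above are in the graph:

# Evaluate the trained model once on the held-out test set.
print "test accuracy:", accuracy.eval(feed_dict={x: mnist.test.images, y: mnist.test.labels})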