Tensorflow feed_dict not learning

2016-10-20 121 views

2

Copying and pasting the code from TensorFlow's MNIST tutorial works fine, yielding the expected ~92% accuracy.

This process breaks down when I read the MNIST data from a CSV and convert it to an np array with pd.DataFrame.values: I get ~10% accuracy (no better than random).

Here is the code (the tutorial code works fine; my CSV reader does not learn):

Working MNIST tutorial:

import tensorflow as tf 
from tensorflow.examples.tutorials.mnist import input_data 
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) 


x = tf.placeholder(tf.float32, [None, 784]) 
W = tf.Variable(tf.zeros([784, 10])) 
b = tf.Variable(tf.zeros([10])) 
y = tf.nn.softmax(tf.matmul(x, W) + b) 
y_ = tf.placeholder(tf.float32, [None, 10]) 
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])) 
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) 
init = tf.initialize_all_variables() 
sess = tf.Session() 
sess.run(init) 

for i in range(1000): 
    batch_xs, batch_ys = mnist.train.next_batch(100) 
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) 


correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1)) 
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})) 

Not working (reading the CSV and feeding np arrays):

import pandas as pd 
from sklearn.cross_validation import train_test_split 
import numpy as np 
import tensorflow as tf 

# read csv file 
MNIST = pd.read_csv("/data.csv") 

# pop label column and create training label array 
train_label = MNIST.pop("label") 

# converts from dataframe to np array 
MNIST=MNIST.values 

# convert train labels to one hots 
train_labels = pd.get_dummies(train_label) 
# make np array 
train_labels = train_labels.values 

x_train,x_test,y_train,y_test = train_test_split(MNIST,train_labels,test_size=0.2) 
# we now have features (x_train) and y values, separated into test and train 

# convert to dtype float 32 
x_train,x_test,y_train,y_test = np.array(x_train,dtype='float32'), np.array(x_test,dtype='float32'),np.array(y_train,dtype='float32'),np.array(y_test,dtype='float32') 
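# Note (an assumption, not stated in the thread): the tutorial loader 
# input_data.read_data_sets scales pixel values to [0, 1], while a 
# Kaggle-style MNIST CSV stores raw 0-255 values; if that holds here, 
# normalizing the features may help: 
# x_train, x_test = x_train / 255.0, x_test / 255.0 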



x = tf.placeholder(tf.float32, [None, 784]) 
W = tf.Variable(tf.zeros([784, 10])) 
b = tf.Variable(tf.zeros([10])) 
y = tf.nn.softmax(tf.matmul(x, W) + b) 
y_ = tf.placeholder(tf.float32, [None, 10]) 
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])) 
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) 
init = tf.initialize_all_variables() 
sess = tf.Session() 
sess.run(init) 

def get_mini_batch(x,y): 
    # choose 100 random row values 
    rows=np.random.choice(x.shape[0], 100) 
    # return arrays of 100 random rows (for features and labels) 
    return x[rows], y[rows] 

# train 
for i in range(100): 
    # get mini batch 
    batch_xs, batch_ys = get_mini_batch(x_train, y_train)   # avoid shadowing the bias variable b 
    # run train step, feeding arrays of 100 rows each time 
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) 

correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1)) 
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 
print(sess.run(accuracy, feed_dict={x: x_test, y_: y_test})) 

Help would be appreciated. (CSV file here.)

Answers

0

Have you tried training for more iterations? I see that the original code trains for 1000 iterations:

for i in range(1000): 

whereas the CSV code trains for only 100 iterations:

for i in range(100): 

If that is not the reason, it would be helpful if you could also share your CSV file, so that we can test your code easily.

EDIT:

I have tested your code, and it seems the problem is caused by numerical instability in the naive cross_entropy computation (see this SO question). You can fix it by replacing your cross_entropy definition with the following line:

cross_entropy = tf.reduce_mean(tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(
    y, y_, name='xentropy'))) 
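(Worth noting, though the thread does not raise it: softmax_cross_entropy_with_logits expects unscaled logits, and y above has already been passed through tf.nn.softmax. A hedged sketch of the usual pattern with the era's positional (logits, labels) signature, feeding the raw logits instead:)

logits = tf.matmul(x, W) + b   # unscaled scores 
y = tf.nn.softmax(logits)      # softmax kept only for predictions/accuracy 
cross_entropy = tf.reduce_mean( 
    tf.nn.softmax_cross_entropy_with_logits(logits, y_, name='xentropy')) 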

Also, if you visualize the returned cross_entropy, you will see that your code returns NaN, whereas with this code you get real numbers...
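(A minimal numpy sketch of that instability, my own illustration rather than the answerer's: once the softmax saturates, the naive formula multiplies 0 by log(0) and the loss becomes NaN even for a correct prediction:)

import numpy as np 

y_true = np.array([1.0, 0.0])   # one-hot label 
y_pred = np.array([1.0, 0.0])   # fully saturated softmax output 
# the 0 * log(0) term is 0 * -inf, which is nan, so the whole sum is nan 
print(-np.sum(y_true * np.log(y_pred)))   # nan 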

The complete working code, which also prints the cross_entropy at every iteration:

import pandas as pd 
from sklearn.cross_validation import train_test_split 
import numpy as np 
import tensorflow as tf 

# read csv file 
MNIST = pd.read_csv("data.csv") 

# pop label column and create training label array 
train_label = MNIST.pop("label") 

# converts from dataframe to np array 
MNIST=MNIST.values 

# convert train labels to one hots 
train_labels = pd.get_dummies(train_label) 
# make np array 
train_labels = train_labels.values 

x_train,x_test,y_train,y_test = train_test_split(MNIST,train_labels,test_size=0.2) 
# we now have features (x_train) and y values, separated into test and train 

# convert to dtype float 32 
x_train,x_test,y_train,y_test = np.array(x_train,dtype='float32'), np.array(x_test,dtype='float32'),np.array(y_train,dtype='float32'),np.array(y_test,dtype='float32') 

x = tf.placeholder(tf.float32, [None, 784]) 
W = tf.Variable(tf.zeros([784, 10])) 
b = tf.Variable(tf.zeros([10])) 
y = tf.nn.softmax(tf.matmul(x, W) + b) 
y_ = tf.placeholder(tf.float32, [None, 10]) 
print(y.get_shape()) 
print(y_.get_shape()) 
cross_entropy = tf.reduce_mean(tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(y, y_, name='xentropy'))) 
train_step = tf.train.GradientDescentOptimizer(0.0001).minimize(cross_entropy) 
init = tf.initialize_all_variables() 
sess = tf.Session() 
sess.run(init) 

def get_mini_batch(x,y): 
    # choose 100 random row values 
    rows=np.random.choice(x.shape[0], 100) 
    # return arrays of 100 random rows (for features and labels) 
    return x[rows], y[rows] 

# train 
for i in range(1000): 
    # get mini batch 
    batch_xs, batch_ys = get_mini_batch(x_train, y_train)   # avoid shadowing the bias variable b 
    # run train step, feeding arrays of 100 rows each time 
    _, cost = sess.run([train_step, cross_entropy], feed_dict={x: batch_xs, y_: batch_ys}) 
    print(cost) 

correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1)) 
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 
print(sess.run(accuracy, feed_dict={x: x_test, y_: y_test})) 

You still need to tune the learning rate and the number of iterations further, but with this setup you should already get ~70% accuracy.
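(One further hedged observation of my own, not the answerer's: reduce_sum above collapses the batch to a single scalar, so the outer reduce_mean is a no-op and the loss grows with the batch size, which is presumably why the learning rate had to drop to 0.0001. Averaging the per-example losses instead keeps the gradient scale independent of the batch size:)

# assumes the logits variable from the earlier sketch 
cross_entropy = tf.reduce_mean( 
    tf.nn.softmax_cross_entropy_with_logits(logits, y_, name='xentropy')) 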

+1

CSV link uploaded. And, alas, no... 1000 training iterations still gets my code ~10% accuracy. –

+0

When I run this, the accuracy drops even further. What accuracy do you get with this new cross-entropy? –

+0

Sorry, I had an incorrect minus sign in the code; I now get 73% accuracy. I will put my full code in the answer text! Note that you can still improve the accuracy by tuning the learning rate and the number of iterations. – Fematich

0

I am pretty sure the batches should not be 100 random rows, but rather the next 100 rows in sequence; for example, rows 0:99 and 100:199 would be your first two batches. Try this code for the batching, and check out this kernel for training MNIST from csv in TF:

epochs_completed = 0 
index_in_epoch = 0 
num_examples = train_images.shape[0] 

# serve data by batches 
def next_batch(batch_size): 

    global train_images 
    global train_labels 
    global index_in_epoch 
    global epochs_completed 

    start = index_in_epoch 
    index_in_epoch += batch_size 

    # when all training data has been used, reshuffle it randomly 
    if index_in_epoch > num_examples: 
        # finished epoch 
        epochs_completed += 1 
        # shuffle the data 
        perm = np.arange(num_examples) 
        np.random.shuffle(perm) 
        train_images = train_images[perm] 
        train_labels = train_labels[perm] 
        # start next epoch 
        start = 0 
        index_in_epoch = batch_size 
        assert batch_size <= num_examples 
    end = index_in_epoch 
    return train_images[start:end], train_labels[start:end]
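
(A hedged usage sketch: it assumes train_images and train_labels were set to the x_train and y_train arrays from earlier in the thread before the helper above was defined, since num_examples is read at definition time:)

for i in range(1000): 
    batch_xs, batch_ys = next_batch(100)   # sequential batches, reshuffled each epoch 
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) 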