2016-06-27
Shifting images to the right in TensorFlow

I trained a neural network on MNIST in TensorFlow and saved the weights in a .ckpt file. Now I want to test the network with those weights on the same images translated a few pixels to the right and down. Loading the weights works fine, but when I print the evaluation, TensorFlow always shows the same result (0.9630 on the test set) whether the translation is 1 px or 14 px. Here is my evaluation function that prints the result:

def eval_translation(sess, eval_correct, images_pl, labels_pl, dataset):
    print('Test Data Eval:')
    for i in range(28):
        true_count = 0  # Counts the number of correct predictions.
        steps_per_epoch = dataset.num_examples // FLAGS.batch_size
        nb_exemples = steps_per_epoch * FLAGS.batch_size
        for step in xrange(steps_per_epoch):
            images_feed, labels_feed = dataset.next_batch(FLAGS.batch_size)
            feed_dict = {images_pl: translate_right(images_feed, i), labels_pl: labels_feed}
            true_count += sess.run(eval_correct, feed_dict=feed_dict)
        precision = true_count / nb_exemples
        print('Translation: %d Num examples: %d Num correct: %d Precision @ 1: %0.04f' % (i, nb_exemples, true_count, precision))

This is where I load the data and print the test results. Here is my translation function:

def translate_right(images, dev):
    for i in range(len(images)):
        for j in range(len(images[i])):
            images[i][j] = np.roll(images[i][j], dev)
    return images

I call it in place of the training function, just after initializing all the variables:

with tf.Graph().as_default():
    # Generate placeholders for the images and labels.
    images_placeholder, labels_placeholder = placeholder_inputs(FLAGS.batch_size)

    # Build a Graph that computes predictions from the inference model.
    weights, logits = mnist.inference(images_placeholder, neurons)

    # Add to the Graph the Ops for loss calculation.
    loss = mnist.loss(logits, labels_placeholder)

    # Add to the Graph the Ops that calculate and apply gradients.
    train_op = mnist.training(loss, learning_rate)

    # Add the Op to compare the logits to the labels during evaluation.
    eval_correct = mnist.evaluation(logits, labels_placeholder)

    # Build the summary operation based on the TF collection of Summaries.
    summary_op = tf.merge_all_summaries()

    # Create a saver for writing training checkpoints.
    save = {}
    for i in range(len(weights)):
        save['weights' + str(i)] = weights[i]
    saver = tf.train.Saver(save)

    # Create a session for running Ops on the Graph.
    sess = tf.Session()
    init = tf.initialize_all_variables()
    sess.run(init)

    # Load the saved weights.
    saver.restore(sess, restore_path)

    # Instantiate a SummaryWriter to output summaries and the Graph.
    summary_writer = tf.train.SummaryWriter(FLAGS.train_dir, sess.graph)

    temps_total = time.time()

    eval_translation(sess, eval_correct, images_placeholder, labels_placeholder, dataset.test)

I don't know what's wrong with my code, or why TensorFlow seems to ignore my translated images. Can someone help me? Thanks!

Answer

Your function translate_right has no effect, because images[i][j] is just a single pixel (one value, if you have grayscale images), and rolling a single value changes nothing.

You should instead use the axis argument of np.roll:

def translate_right(images, dev):
    return np.roll(images, dev, axis=1)
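A minimal standalone sketch of the difference (assuming the batches are flattened float arrays of shape (batch_size, num_pixels), as in the TensorFlow MNIST tutorial — here shrunk to a toy 2×4 batch for illustration):

```python
import numpy as np

# Toy "batch": 2 flattened 4-pixel images (stand-ins for (batch, 784) MNIST data).
images = np.array([[1.0, 2.0, 3.0, 4.0],
                   [5.0, 6.0, 7.0, 8.0]])

# Per-pixel roll, as in the original translate_right: each images[i][j]
# is a single scalar, so np.roll returns it unchanged and the batch is
# left exactly as it was.
looped = images.copy()
for i in range(len(looped)):
    for j in range(len(looped[i])):
        looped[i][j] = np.roll(looped[i][j], 1)
print(np.array_equal(looped, images))  # True: nothing moved

# Rolling along axis=1 shifts every image's pixel row by one position.
shifted = np.roll(images, 1, axis=1)
print(shifted)  # [[4. 1. 2. 3.]
                #  [8. 5. 6. 7.]]
```

Note that on flattened 28×28 images, np.roll along axis=1 also wraps pixels from the end of one image row around to the start of the next; reshaping to (batch, 28, 28) and rolling along the column axis would give a cleaner horizontal shift.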

Thanks! I thought my translation was working (I had displayed the images and seen the shift), but with the axis argument it works well and TensorFlow prints correct values! – Liam