2016-09-04 40 views

Unable to read data on TensorFlow

Before this, I converted my input images into a TFRecords file. Now I have the following methods, mostly collected from tutorials and slightly modified:

import os
import tensorflow as tf

def read_and_decode(filename_queue): 
    reader = tf.TFRecordReader() 
    _, serialized_example = reader.read(filename_queue) 
    features = tf.parse_single_example(
        serialized_example, 
        # Defaults are not specified since both keys are required. 
        features={ 
            'image/encoded': tf.FixedLenFeature([], tf.string), 
            'image/class/label': tf.FixedLenFeature([], tf.int64), 
        }) 
    # decode_raw assumes the images were serialized as raw uint8 bytes;
    # if they were stored JPEG/PNG-encoded, tf.image.decode_jpeg is needed instead.
    image = tf.decode_raw(features['image/encoded'], tf.uint8) 
    label = tf.cast(features['image/class/label'], tf.int32) 

    # `size` is assumed to be a (height, width) pair defined elsewhere.
    reshaped_image = tf.reshape(image, [size[0], size[1], 3]) 
    reshaped_image = tf.image.resize_images(reshaped_image, size[0], size[1], method=0) 
    reshaped_image = tf.image.per_image_whitening(reshaped_image) 
    return reshaped_image, label 

def inputs(train, batch_size, num_epochs): 
    filename = os.path.join(FLAGS.train_dir, 
          TRAIN_FILE if train else VALIDATION_FILE) 

    filename_queue = tf.train.string_input_producer(
     [filename], num_epochs=num_epochs) 

    # Even when reading in multiple threads, share the filename 
    # queue. 
    image, label = read_and_decode(filename_queue) 

    # Shuffle the examples and collect them into batch_size batches. 
    # (Internally uses a RandomShuffleQueue.) 
    # We run this in two threads to avoid being a bottleneck. 
    images, sparse_labels = tf.train.shuffle_batch(
     [image, label], batch_size=batch_size, num_threads=2, 
     capacity=1000 + 3 * batch_size, 
     # Ensures a minimum amount of shuffling of examples. 
     min_after_dequeue=1000) 
    return images, sparse_labels 
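As an aside, the queue sizing above follows the rule of thumb from the TensorFlow reading-data guide: `capacity = min_after_dequeue + (num_threads + a small safety margin) * batch_size`. A quick plain-Python check using the values from the snippet (the `safety_margin` name is just for illustration):

```python
batch_size = 100          # value passed to inputs() below
num_threads = 2
min_after_dequeue = 1000  # lower bound kept in the queue for good shuffling
safety_margin = 1         # hypothetical margin; num_threads + margin = 3 here

capacity = min_after_dequeue + (num_threads + safety_margin) * batch_size
print(capacity)  # 1300, matching capacity=1000 + 3 * batch_size above
```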

However, when I try to evaluate a batch in IPython/Jupyter, the process never finishes (it appears to loop forever). I call it this way:

batch_x, batch_y = inputs(True, 100,1) 
print batch_x.eval() 

Answer


It looks like you are missing a call to tf.train.start_queue_runners(), which starts the background threads that drive the input pipeline (for example, some threads are implied by num_threads=2 in the call to tf.train.shuffle_batch(), and tf.train.string_input_producer() also needs background threads). The following small change should unblock things:

batch_x, batch_y = inputs(True, 100, 1) 
tf.initialize_all_variables().run()   # Initializes variables. 
tf.initialize_local_variables().run() # Needed after TF version 0.10. 
tf.train.start_queue_runners()        # Starts the necessary background threads. 
print batch_x.eval() 
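For completeness, the same evaluation can be wrapped in the usual session/coordinator boilerplate of that era, so the queue-runner threads are also shut down cleanly at the end. This is a sketch against the now-deprecated TF 0.x API (it will not run on TensorFlow 2.x), and `inputs` is the function from the question:

```python
import tensorflow as tf

with tf.Session() as sess:
    batch_x, batch_y = inputs(True, 100, 1)
    sess.run(tf.initialize_all_variables())
    sess.run(tf.initialize_local_variables())  # for the num_epochs counter

    # A Coordinator lets us stop the background threads cleanly.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        print(sess.run(batch_x))
    finally:
        coord.request_stop()
        coord.join(threads)
```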

Thank you, now I get the following error: ERROR:tensorflow:Exception in QueueRunner: Attempting to use uninitialized value input_producer/limit_epochs/epochs [[Node: input_producer/limit_epochs/CountUpTo = CountUpTo[T=DT_INT64, _class=["loc:@input_producer/limit_epochs/epochs"], limit=1000, _device="/job:localhost/replica:0/task:0/cpu:0"](input_producer/limit_epochs/epochs)]]. Do you know what might be causing it? – Kevin


I've updated the answer with another piece of missing boilerplate. You need to call 'tf.initialize_all_variables().run()' (or 'sess.run(tf.initialize_all_variables())'). If that doesn't work (depending on the version), you may also need to add 'tf.initialize_local_variables().run()'. – mrry