
tf.train.shuffle_batch not working for

I am trying to handle my input data the clean TensorFlow way (tf.train.shuffle_batch). Most of this code was collected from the tutorials, with slight modifications such as the decode_jpeg function.

size = (32, 32) 
classes = 43 
train_size = 12760 
batch_size = 100 
max_steps = 10000 

def read_and_decode(filename_queue): 
    reader = tf.TFRecordReader() 
    _, serialized_example = reader.read(filename_queue) 
    features = tf.parse_single_example(
     serialized_example, 
     # Defaults are not specified since both keys are required. 
     features={ 
      'image/encoded': tf.FixedLenFeature([], tf.string), 
      'image/class/label': tf.FixedLenFeature([], tf.int64), 
      'image/height': tf.FixedLenFeature([], tf.int64), 
      'image/width': tf.FixedLenFeature([], tf.int64), 
     }) 
    label = tf.cast(features['image/class/label'], tf.int32) 
    reshaped_image = tf.image.decode_jpeg(features['image/encoded']) 
    reshaped_image = tf.image.resize_images(reshaped_image, size[0], size[1], method = 0) 
    reshaped_image = tf.image.per_image_whitening(reshaped_image) 
    return reshaped_image, label 

def inputs(train, batch_size, num_epochs): 
    subset = "train" 
    tf_record_pattern = os.path.join(FLAGS.train_dir + '/GTSRB', '%s-*' % subset) 
    data_files = tf.gfile.Glob(tf_record_pattern) 
    filename_queue = tf.train.string_input_producer(
     data_files, num_epochs=num_epochs) 

    # Even when reading in multiple threads, share the filename 
    # queue. 
    image, label = read_and_decode(filename_queue) 

    # Shuffle the examples and collect them into batch_size batches. 
    # (Internally uses a RandomShuffleQueue.) 
    # We run this in two threads to avoid being a bottleneck. 
    images, sparse_labels = tf.train.shuffle_batch(
     [image, label], batch_size=batch_size, num_threads=2, 
     capacity=1000 + 3 * batch_size, 
     # Ensures a minimum amount of shuffling of examples. 
     min_after_dequeue=1000) 
    return images, sparse_labels 

When I try to run

batch_x, batch_y = inputs(True, 100,100) 

I get the following error:

--------------------------------------------------------------------------- 
ValueError        Traceback (most recent call last) 
<ipython-input-6-543290a0c903> in <module>() 
----> 1 batch_x, batch_y = inputs(True, 100,100) 

<ipython-input-5-a8c07c7fc263> in inputs(train, batch_size, num_epochs) 
    73   capacity=1000 + 3 * batch_size, 
    74   # Ensures a minimum amount of shuffling of examples. 
---> 75   min_after_dequeue=1000) 
    76  #return image, label 
    77  return images, sparse_labels 

/Users/Kevin/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/input.pyc in shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, num_threads, seed, enqueue_many, shapes, allow_smaller_final_batch, shared_name, name) 
    800  queue = data_flow_ops.RandomShuffleQueue(
    801   capacity=capacity, min_after_dequeue=min_after_dequeue, seed=seed, 
--> 802   dtypes=types, shapes=shapes, shared_name=shared_name) 
    803  _enqueue(queue, tensor_list, num_threads, enqueue_many) 
    804  full = (math_ops.cast(math_ops.maximum(0, queue.size() - min_after_dequeue), 

/Users/Kevin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/data_flow_ops.pyc in __init__(self, capacity, min_after_dequeue, dtypes, shapes, names, seed, shared_name, name) 
    580  """ 
    581  dtypes = _as_type_list(dtypes) 
--> 582  shapes = _as_shape_list(shapes, dtypes) 
    583  names = _as_name_list(names, dtypes) 
    584  # If shared_name is provided and an op seed was not provided, we must ensure 

/Users/Kevin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/data_flow_ops.pyc in _as_shape_list(shapes, dtypes, unknown_dim_allowed, unknown_rank_allowed) 
    70 if not unknown_dim_allowed: 
    71  if any([not shape.is_fully_defined() for shape in shapes]): 
---> 72  raise ValueError("All shapes must be fully defined: %s" % shapes) 
    73 if not unknown_rank_allowed: 
    74  if any([shape.dims is None for shape in shapes]): 

ValueError: All shapes must be fully defined: [TensorShape([Dimension(32), Dimension(32), Dimension(None)]), TensorShape([])] 

I don't know what is causing this error. I suspect it has something to do with the way I process my images, because the error shows an undefined dimension where the images should have 3 channels (RGB).
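
One quick way to see this (a hypothetical check, not part of the original code) is to print the static shape of the tensor returned by read_and_decode before passing it to tf.train.shuffle_batch:

image, label = read_and_decode(filename_queue) 
# Prints (32, 32, ?): height and width are fixed by resize_images, 
# but decode_jpeg leaves the channel dimension undefined. 
print(image.get_shape()) 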

Answer


The batching methods in TensorFlow (tf.train.batch(), tf.train.batch_join(), tf.train.shuffle_batch(), and tf.train.shuffle_batch_join()) require every element of a batch to have exactly the same shape*, so that the elements can be packed into dense tensors. In your code, the third dimension of the image tensor that you pass to tf.train.shuffle_batch() appears to have an unknown size. This dimension is the number of channels in each image: 1 for monochrome images, 3 for color images, or 4 for color images with an alpha channel. If you pass an explicit channels=N argument to tf.image.decode_jpeg() (where N is 1, 3, or 4), TensorFlow will have enough information about the shape of the image tensor to proceed.


  * There is one exception: if you pass dynamic_pad=True to tf.train.batch() or tf.train.batch_join(), the elements may have different shapes, but they must have the same rank. This is typically only useful for sequential data, not image data (it would pad at the edges of the images, which gives undesirable behavior).
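
A minimal sketch of the suggested fix inside read_and_decode, assuming the images are RGB (channels=3 is an assumption; use 1 or 4 if your images differ):

    # Telling decode_jpeg how many channels to expect makes the static shape 
    # fully defined, so tf.train.shuffle_batch() can build its RandomShuffleQueue. 
    reshaped_image = tf.image.decode_jpeg(features['image/encoded'], channels=3) 
    reshaped_image = tf.image.resize_images(reshaped_image, size[0], size[1], method=0) 
    reshaped_image = tf.image.per_image_whitening(reshaped_image) 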


I added the size to my code; it is a static value 32,32 – Kevin


Ah, it looks like it is the number of channels that is unknown. See the updated answer. – mrry


Thanks, passing channels to decode_jpeg solved my problem! – Kevin