2017-06-12

I have been working on this all day long and can't get tf.train.shuffle_batch() to work properly!

I have a .png file from which I made more than 400 copies [I eventually need to use images with different shapes, but for now I just want to get this working].

Here is the code I use to turn the images into tensors with labels:

import tensorflow as tf 
import os 
import numpy 
batch_Size =20 
num_epochs = 100 
files = os.listdir("Test_PNG") 
files = ["Test_PNG/" + s for s in files] 
files = [os.path.abspath(s) for s in files ] 


def read_my_png_files(filename_queue): 
    reader = tf.WholeFileReader() 
    imgName,imgTensor = reader.read(filename_queue) 
    img = tf.image.decode_png(imgTensor,channels=0) 
    # Processing should be added here 
    return img,imgName 

def inputPipeline(filenames, batch_Size, num_epochs= None): 
    filename_queue = tf.train.string_input_producer(filenames, num_epochs=num_epochs,shuffle =True) 
    img_file, label = read_my_png_files(filename_queue) 
    min_after_dequeue = 100 
    capacity = min_after_dequeue+3*batch_Size 
    img_batch,label_batch = tf.train.shuffle_batch([img_file,label],batch_size=batch_Size,enqueue_many=True, 
                allow_smaller_final_batch=True, capacity=capacity, 
                min_after_dequeue =min_after_dequeue, shapes=[w,h,d]) 
    return img_batch,label_batch 

images, Labels = inputPipeline(files,batch_Size,num_epochs) 

I know I should get 20 image tensors and their labels at a time. When I run the code above, here is what I get:

--------------------------------------------------------------------------- 
ValueError        Traceback (most recent call last) 
<ipython-input-3-08857195e465> in <module>() 
    34  return img_batch,label_batch 
    35 
---> 36 images, Labels = inputPipeline(files,batch_Size,num_epochs) 

<ipython-input-3-08857195e465> in inputPipeline(filenames, batch_Size, num_epochs) 
    31  img_batch,label_batch = tf.train.shuffle_batch([img_file,label],batch_size=batch_Size,enqueue_many=True, 
    32              allow_smaller_final_batch=True, capacity=capacity, 
---> 33              min_after_dequeue =min_after_dequeue, shapes=[w,h,d]) 
    34  return img_batch,label_batch 
    35 

c:\users\engine\appdata\local\programs\python\python35\lib\site-packages\tensorflow\python\training\input.py in shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, num_threads, seed, enqueue_many, shapes, allow_smaller_final_batch, shared_name, name) 
    1212  allow_smaller_final_batch=allow_smaller_final_batch, 
    1213  shared_name=shared_name, 
-> 1214  name=name) 
    1215 
    1216 

c:\users\engine\appdata\local\programs\python\python35\lib\site-packages\tensorflow\python\training\input.py in _shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, keep_input, num_threads, seed, enqueue_many, shapes, allow_smaller_final_batch, shared_name, name) 
    767  queue = data_flow_ops.RandomShuffleQueue(
    768   capacity=capacity, min_after_dequeue=min_after_dequeue, seed=seed, 
--> 769   dtypes=types, shapes=shapes, shared_name=shared_name) 
    770  _enqueue(queue, tensor_list, num_threads, enqueue_many, keep_input) 
    771  full = (math_ops.cast(math_ops.maximum(0, queue.size() - min_after_dequeue), 

c:\users\engine\appdata\local\programs\python\python35\lib\site-packages\tensorflow\python\ops\data_flow_ops.py in __init__(self, capacity, min_after_dequeue, dtypes, shapes, names, seed, shared_name, name) 
    626   shared_name=shared_name, name=name) 
    627 
--> 628  super(RandomShuffleQueue, self).__init__(dtypes, shapes, names, queue_ref) 
    629 
    630 

c:\users\engine\appdata\local\programs\python\python35\lib\site-packages\tensorflow\python\ops\data_flow_ops.py in __init__(self, dtypes, shapes, names, queue_ref) 
    151  if shapes is not None: 
    152  if len(shapes) != len(dtypes): 
--> 153   raise ValueError("Queue shapes must have the same length as dtypes") 
    154  self._shapes = [tensor_shape.TensorShape(s) for s in shapes] 
    155  else: 

ValueError: Queue shapes must have the same length as dtypes 

I declared the shapes used in the tf.train.shuffle_batch call as shown above, but I still get a shape error!

Any idea how to solve this?

Answer

Your problem comes from either

  • the enqueue_many=True argument, or
  • the shapes argument, in which the label dimension is missing.

So I would try with enqueue_many=False and shapes=[[h, w, c], []].
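To see why the original call raises "Queue shapes must have the same length as dtypes", here is a minimal plain-Python sketch of the length check inside RandomShuffleQueue (the function name and the 100x100x3 image size are illustrative, not TensorFlow's actual code):

```python
def check_queue_args(dtypes, shapes):
    # Mirrors the check in data_flow_ops.py that raised the error above
    if len(shapes) != len(dtypes):
        raise ValueError("Queue shapes must have the same length as dtypes")

# [img_file, label] enqueues TWO tensors, so there are two dtypes
dtypes = ["uint8", "string"]

# shapes=[w, h, d] is parsed as THREE scalar shapes, one per list entry
try:
    check_queue_args(dtypes, shapes=[100, 100, 3])
except ValueError as e:
    print(e)  # Queue shapes must have the same length as dtypes

# One shape per enqueued tensor fixes the mismatch:
# [h, w, c] for the image and [] for the scalar filename label
check_queue_args(dtypes, shapes=[[100, 100, 3], []])  # passes silently
```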

Indeed, if you look at the shuffle_batch documentation:

If enqueue_many is False, tensors is assumed to represent a single example. An input tensor with shape [x, y, z] will be output as a tensor with shape [batch_size, x, y, z].

If enqueue_many is True, tensors is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of tensors should have the same size in the first dimension. If an input tensor has shape [*, x, y, z], the output will have shape [batch_size, x, y, z].

But in your code, it seems you dequeue only a single file: img_file, label = read_my_png_files(filename_queue), and you pass it directly to the shuffle_batch function: img_batch, label_batch = tf.train.shuffle_batch([img_file, label], ...). So the * dimension is missing, while with enqueue_many=True TensorFlow expects the first dimension of [img_file, label] to be a number of examples.
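The difference between the two expected shapes can be sketched with plain NumPy (the 100x100x3 image size is an assumption for illustration):

```python
import numpy as np

batch_size = 20
h, w, c = 100, 100, 3  # assumed PNG dimensions

# enqueue_many=False: each element fed to shuffle_batch is ONE example
# of shape [x, y, z]; the output batch then has shape [batch_size, x, y, z]
one_example = np.zeros((h, w, c))
batch = np.stack([one_example] * batch_size)
print(batch.shape)  # (20, 100, 100, 3)

# enqueue_many=True: the fed tensor must ALREADY carry a leading example
# dimension [*, x, y, z]; a single decoded PNG has no such dimension
many_examples = np.zeros((7, h, w, c))  # * = 7 here
print(many_examples.shape[1:])  # (100, 100, 3), i.e. one example per row
```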

Also note that enqueue_many and dequeue_many are independent, i.e.

  • *, the number of examples you enqueue into the queue at a time, is independent of
  • batch_size, the size of the batches pulled out of the queue.
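This independence can be sketched with a plain Python queue (the chunk sizes below are arbitrary):

```python
from collections import deque

queue = deque()

# Enqueue in chunks of 5 elements each (this plays the role of the "*" dimension)
for chunk_start in range(0, 20, 5):
    queue.extend(range(chunk_start, chunk_start + 5))

# Dequeue in batches of 3 (this is batch_size) -- unrelated to the chunk size
batch = [queue.popleft() for _ in range(3)]
print(batch)        # [0, 1, 2]
print(len(queue))   # 17
```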
Thanks for your reply. The default value of enqueue_many is False; I had set it to True because I thought the batch would have shape batch_Size times the PNG shape? Anyway, it didn't work either way! – Engine

Have you tried it? You can use any PNG file you have! – Engine

Thank you very much for your help, it is working now, but I still have a question to understand the mechanism behind it. tf.train.shuffle_batch([img_file, label], ...) tells the batching function which queue it should use to get the files and their labels, and the batch_size argument tells the function how many elements it should dequeue, right? – Engine
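Roughly, yes. A toy model of that mechanism in plain Python (an illustrative simplification, not TensorFlow's implementation; it ignores min_after_dequeue, capacity, and threading):

```python
import random

def toy_shuffle_batch(examples, batch_size, seed=0):
    # The RandomShuffleQueue holds the (image, label) pairs in random order;
    # each run of the batch op dequeues batch_size of them.
    queue = list(examples)
    random.Random(seed).shuffle(queue)
    while queue:
        batch, queue = queue[:batch_size], queue[batch_size:]
        # As with allow_smaller_final_batch=True, the last batch may be short
        yield batch

pairs = [("img_%d.png" % i, i) for i in range(10)]
batches = list(toy_shuffle_batch(pairs, batch_size=4))
print([len(b) for b in batches])  # [4, 4, 2]
```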