I have the image names and labels as lists, and I want to get batches of 64 images/labels. I can fetch the images correctly, but the labels come out with shape (64, 8126): each column repeats the same element 64 times, and each row is the full set of 8126 original label values, unshuffled. How should I handle the labels in the input pipeline?
I understand the problem: tf.train.shuffle_batch treats the 8126-element label vector as belonging to every image. But how do I pass just one label element per image?
def _get_images(shuffle=True):
    """Gets the images and labels as a batch."""
    # get the image and label lists
    _img_names, _img_class = _get_list()  # lists of image names and labels
    filename_queue = tf.train.string_input_producer(_img_names)
    # reader
    image_reader = tf.WholeFileReader()
    _, image_file = image_reader.read(filename_queue)
    # decode jpeg
    image_original = tf.image.decode_jpeg(image_file)
    label_original = tf.convert_to_tensor(_img_class, dtype=tf.int32)
    # print label_original
    # image preprocessing
    image = tf.image.resize_images(image_original, [224, 224])
    float_image = tf.cast(image, dtype=tf.float32)
    float_image = tf.image.per_image_standardization(float_image)
    # set the shape
    float_image.set_shape((224, 224, 3))
    # label_original.set_shape([8126])  # <<<<<=========== causes (64, 8126) dimension label without shuffle
    # parameters for shuffling
    batch_size = 64
    num_preprocess_threads = 16
    num_examples_per_epoch = 8000
    min_fraction_of_examples_in_queue = 0.4
    min_queue_examples = int(num_examples_per_epoch *
                             min_fraction_of_examples_in_queue)
    if shuffle:
        images_batch, label_batch = tf.train.shuffle_batch(
            [float_image, label_original],
            batch_size=batch_size,
            num_threads=num_preprocess_threads,
            capacity=min_queue_examples + 3 * batch_size,
            min_after_dequeue=min_queue_examples)
    else:
        images_batch, label_batch = tf.train.batch(
            [float_image, label_original],
            batch_size=batch_size,
            num_threads=num_preprocess_threads,
            capacity=min_queue_examples + 3 * batch_size)
    return images_batch, label_batch
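The (64, 8126) labels appear because `label_original` holds the entire label list, so the batching op pairs every image with the full vector instead of with its own entry. In TensorFlow's queue pipeline this is typically solved with `tf.train.slice_input_producer`, which slices the filename and label tensors along the same first dimension so each dequeued image arrives with its own scalar label. Below is a framework-free sketch of that lockstep-slicing idea; `batch_pairs` and the toy data are hypothetical illustrations, not part of the original code:

```python
import numpy as np

def batch_pairs(img_names, img_labels, batch_size=4, shuffle=True, seed=0):
    """Batch image names together with their own labels by indexing
    both arrays with the same shuffled indices (the same pairing that
    tf.train.slice_input_producer performs in the queue pipeline)."""
    names = np.asarray(img_names)
    labels = np.asarray(img_labels)
    idx = np.arange(len(names))
    if shuffle:
        np.random.default_rng(seed).shuffle(idx)
    for start in range(0, len(idx) - batch_size + 1, batch_size):
        sel = idx[start:start + batch_size]
        # one scalar label per name: labels has shape (batch_size,)
        yield names[sel], labels[sel]

# hypothetical toy data standing in for the 8126-entry lists
names = ['img%d.jpg' % i for i in range(10)]
labels = list(range(10))
name_batch, label_batch = next(batch_pairs(names, labels))
print(label_batch.shape)  # (4,), not (4, 10)
```

Because both arrays are indexed by the same `sel`, shuffling can never separate an image from its label, which is exactly the invariant the queue-based pipeline needs.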
Great! Thank you.. thanks a lot! –