I am using TensorFlow and keras.preprocessing.image.ImageDataGenerator() to generate synthetic images, in order to balance the sample size of every class before training. I get the following error: TensorFlow raises ValueError("GraphDef cannot be larger than 2GB") when using tf.image and Keras for data augmentation.
Traceback (most recent call last):
File "data_augmentation.py", line 100, in <module>
run(fish_class_aug_fold[i])
File "data_augmentation.py", line 93, in run
data_augmentation(img_handle, fish_class, aug_fold)
File "data_augmentation.py", line 52, in data_augmentation
img = session.run(img)
File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 778, in run
run_metadata_ptr)
File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 982, in _run
feed_dict_string, options, run_metadata)
File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1032, in _do_run
target_list, options, run_metadata)
File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1039, in _do_call
return fn(*args)
File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1017, in _run_fn
self._extend_graph()
File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1061, in _extend_graph
add_shapes=self._add_shapes)
File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2154, in _as_graph_def
raise ValueError("GraphDef cannot be larger than 2GB.")
ValueError: GraphDef cannot be larger than 2GB.
Here is my code:
import cv2
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator

def data_augmentation(img_handle, fish_class, nb_fold):
    """
    Generate synthetic pictures for each class.
    Parameters:
        img_handle: path of one input image
        fish_class: name of the class this image belongs to
        nb_fold: number of augmented images to generate per input image,
                 so that every class ends up with the same number of images.
    """
    img = cv2.imread(img_handle)
    # randomly adjust the hue of the img
    img = tf.image.random_hue(img, max_delta=0.3)
    # randomly adjust the contrast
    img = tf.image.random_contrast(img, lower=0.3, upper=1.0)
    # randomly adjust the brightness
    img = tf.image.random_brightness(img, max_delta=0.2)
    # randomly adjust the saturation
    img = tf.image.random_saturation(img, lower=0.0, upper=2.0)
    with tf.Session() as session:
        # this output is an np.ndarray
        img = session.run(img)
    datagen = ImageDataGenerator(
        rotation_range=45,
        width_shift_range=0.2,
        height_shift_range=0.2,
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True,
        fill_mode='nearest')
    x = img.reshape((1,) + img.shape)  # a NumPy array with shape (1, height, width, 3)
    i = 0
    for batch in datagen.flow(x, batch_size=1, save_to_dir=data_dir + fish_class,
                              save_prefix=fish_class, save_format='jpg'):
        i += 1
        if i > nb_fold - 1:
            break
My idea is to randomly alter the input images with the tf.image functions and to use the tf.image output as the input to keras.preprocessing.image.ImageDataGenerator(), so that synthetic images are generated before training. I suspect the problem comes from session.run(img), but I don't understand why it happens or how to fix it.
Any ideas?
Thanks a lot!
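For context on why this error can appear even though each image is small: the function above is most likely called once per image, and every call adds fresh tf.image ops to TensorFlow's default graph, which is never cleared, so after enough images the serialized GraphDef exceeds the 2 GB protobuf limit. The following toy model (plain Python, not the TensorFlow API) contrasts rebuilding the ops per image with building them once:

```python
import random

class ToyGraph:
    """Toy stand-in for a TensorFlow graph: each op appended makes it
    bigger, and nothing ever removes ops (like tf's default graph)."""
    def __init__(self):
        self.ops = []

    def add_jitter_ops(self):
        # stands in for tf.image.random_brightness / random_contrast / ...
        self.ops.append(lambda x: x + random.uniform(-0.2, 0.2))
        self.ops.append(lambda x: x * random.uniform(0.3, 1.0))

    def run(self, x):
        for op in self.ops:
            x = op(x)
        return x

# Pattern 1: build ops inside the per-image loop (what the code above does).
g1 = ToyGraph()
for _ in range(100):          # 100 "images"
    g1.add_jitter_ops()       # graph keeps growing on every image...
    g1.run(0.5)

# Pattern 2: build ops once, then only *run* them per image.
g2 = ToyGraph()
g2.add_jitter_ops()           # built once, outside the loop
for _ in range(100):
    g2.run(0.5)

print(len(g1.ops), len(g2.ops))   # 200 vs 2
```

In TensorFlow terms, pattern 2 corresponds to building the tf.image pipeline (e.g. on a placeholder) once outside the per-image loop and reusing one session for every image.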
What are the dimensions of the images? Maybe they are too large. –
@RobertValencia They are 2D images, most of them 1280 * 720. Is that too large for tf.image? – Jundong
The error suggests it is not related to the code shown, but to the GraphDef being loaded. How do you launch the program? https://www.tensorflow.org/extend/tool_developers/#graphdef –
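One way to sidestep the GraphDef limit entirely (a hypothetical alternative, not from the question) is to do the random colour jitter in NumPy before handing the array to ImageDataGenerator, so no TensorFlow graph is built at all. A sketch covering the brightness and contrast steps, with ranges copied from the tf.image calls above (hue/saturation would additionally need an HSV conversion, e.g. via cv2, and is left out here):

```python
import numpy as np

def random_jitter(img, rng=None):
    """Random brightness/contrast jitter on a uint8 HxWx3 image,
    done purely in NumPy so no TensorFlow graph nodes are created."""
    rng = rng if rng is not None else np.random.default_rng()
    x = img.astype(np.float32) / 255.0
    # brightness: add a delta in [-0.2, 0.2] (cf. tf.image.random_brightness)
    x = x + rng.uniform(-0.2, 0.2)
    # contrast: scale around the per-channel mean by a factor in [0.3, 1.0]
    # (cf. tf.image.random_contrast)
    mean = x.mean(axis=(0, 1), keepdims=True)
    x = (x - mean) * rng.uniform(0.3, 1.0) + mean
    return (np.clip(x, 0.0, 1.0) * 255.0).astype(np.uint8)

# usage sketch: img = random_jitter(cv2.imread(img_handle)),
# then reshape and feed into datagen.flow(...) as in the question.
```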