2017-04-27 26 views
1

I am using TensorFlow and keras.preprocessing.image.ImageDataGenerator() to generate synthetic data, in order to balance the sample sizes of all classes before training. I got the following error: Tensorflow raises ValueError("GraphDef cannot be larger than 2GB") when using tf.image and keras for data augmentation.

Traceback (most recent call last): 
    File "data_augmentation.py", line 100, in <module> 
    run(fish_class_aug_fold[i]) 
    File "data_augmentation.py", line 93, in run 
    data_augmentation(img_handle, fish_class, aug_fold) 
    File "data_augmentation.py", line 52, in data_augmentation 
    img = session.run(img) 
    File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 778, in run 
    run_metadata_ptr) 
    File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 982, in _run 
feed_dict_string, options, run_metadata) 
    File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1032, in _do_run 
    target_list, options, run_metadata) 
    File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1039, in _do_call 
    return fn(*args) 
    File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1017, in _run_fn 
self._extend_graph() 
    File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1061, in _extend_graph 
add_shapes=self._add_shapes) 
    File "/Users/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2154, in _as_graph_def 
    raise ValueError("GraphDef cannot be larger than 2GB.") 
ValueError: GraphDef cannot be larger than 2GB. 

Here is my code:

import cv2 
import tensorflow as tf 
from keras.preprocessing.image import ImageDataGenerator 

def data_augmentation(img_handle, fish_class, nb_fold): 
    """ 
    Generate synthetic images for one class. 
    Parameters: 
    img_handle: path to one input image 
    fish_class: name of the class this image belongs to 
    nb_fold: number of augmentation rounds to run for this class, so that 
        every class ends up with the same number of images. 
    """ 
    img = cv2.imread(img_handle) 

    # randomly adjust the hue of the image 
    img = tf.image.random_hue(img, max_delta=0.3) 

    # randomly adjust the contrast 
    img = tf.image.random_contrast(img, lower=0.3, upper=1.0) 

    # randomly adjust the brightness 
    img = tf.image.random_brightness(img, max_delta=0.2) 

    # randomly adjust the saturation 
    img = tf.image.random_saturation(img, lower=0.0, upper=2.0) 

    with tf.Session() as session: 
        # this output is an np.ndarray 
        img = session.run(img) 

    datagen = ImageDataGenerator( 
        rotation_range=45, 
        width_shift_range=0.2, 
        height_shift_range=0.2, 
        rescale=1./255, 
        shear_range=0.2, 
        zoom_range=0.2, 
        horizontal_flip=True, 
        fill_mode='nearest') 

    # a NumPy array with shape (1, height, width, 3) -- cv2 is channels-last 
    x = img.reshape((1,) + img.shape) 

    i = 0 
    # note: `class` is a reserved word in Python, so use the fish_class parameter 
    for batch in datagen.flow(x, batch_size=1, save_to_dir=data_dir + fish_class, 
                              save_prefix=fish_class, save_format='jpg'): 
        i += 1 
        if i > nb_fold - 1: 
            break 

My idea is to use the tf.image functions to randomly alter the input images, and then use the tf.image output as the input to keras.preprocessing.image.ImageDataGenerator() to generate synthetic images before training. I think the problem comes from session.run(img). I don't understand why it happens, or how to fix it.
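For what it is worth, one common way to avoid this error is to build the tf.image ops once, with a placeholder for the input, and reuse them for every image, rather than rebuilding the ops inside data_augmentation on each call (each rebuild embeds the input array as a constant and grows the default graph). The sketch below is only an illustration, not the asker's code: the names `image_in` and `augment` are hypothetical, and it is written against the TF 1.x graph API via `tf.compat.v1` so it also runs under TF 2.

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Build the augmentation ops exactly once. `image_in` is a placeholder,
# so no image data is ever embedded in the graph as a constant.
image_in = tf.placeholder(tf.uint8, shape=(None, None, 3))
augmented = tf.image.random_hue(image_in, max_delta=0.3)
augmented = tf.image.random_contrast(augmented, lower=0.3, upper=1.0)
augmented = tf.image.random_brightness(augmented, max_delta=0.2)
augmented = tf.image.random_saturation(augmented, lower=0.0, upper=2.0)

session = tf.Session()

def augment(img):
    # Reuses the same graph on every call, so the graph never grows.
    return session.run(augmented, feed_dict={image_in: img})

img = np.random.randint(0, 256, size=(720, 1280, 3), dtype=np.uint8)
out = augment(img)
print(out.shape)  # same spatial shape as the input
```

Each call to `augment` then only feeds data through the existing graph instead of adding new nodes to it.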

Any ideas?

Thanks a lot!

+0

What are the dimensions of the images? It could be that they are too large. –

+0

@RobertValencia 2D images, most of them 1280 * 720. Is that too large for tf.image? – Jundong

+0

The error suggests it is not related to the code shown, but to the GraphDef being loaded. How are you loading it into the program? https://www.tensorflow.org/extend/tool_developers/#graphdef –

Answers

0

1280 x 720 is probably too large. I ran into the same problem before, when doing image/video recognition with similar dimensions. Try scaling your images down by a factor of 4, then try again:

columns = 1280/4 
rows = 720/4 

img = cv2.imread(img_handle) 
img = cv2.resize(img, (columns, rows)) 
# add the rest of your code here 
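Image size matters here because each call to the question's function passes a raw NumPy array into the tf.image ops, which embeds that image as a constant in the default graph, so the graph grows by roughly the image size on every call. A back-of-the-envelope estimate (pure arithmetic, assuming uint8 RGB images):

```python
# One 1280x720 RGB image stored as uint8: 3 bytes per pixel.
bytes_per_image = 1280 * 720 * 3   # 2,764,800 bytes, about 2.6 MB
graph_limit = 2 * 1024 ** 3        # the 2 GB GraphDef limit

# Number of such images that can be embedded before hitting the limit.
images_to_limit = graph_limit // bytes_per_image
print(images_to_limit)  # 776
```

So after processing on the order of a few hundred full-size images this way, the graph reaches the 2 GB limit; shrinking the images by 4 in each dimension stretches that by a factor of 16.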

Also, try using a different default graph for each session, to prevent the graph from growing beyond the 2 GB limit:

with tf.Graph().as_default(), tf.Session() as session: 
    # note: the tf.image ops must be (re)built inside this fresh graph 
    img = session.run(img) 

Finally, you may also be interested in using TensorBoard for visualizing the graph: https://www.tensorflow.org/get_started/graph_viz

+0

I certainly need to resize the images to a smaller size before training. But why does the image size become a problem for this session.run() call? – Jundong

+0

I will try resizing the images while generating the new ones. Thanks! – Jundong

+0

When does the error occur? –
