
When I try to run Google's Inception model in a loop over a list of images, the problem below appears after roughly 100 images: it seems to run out of memory. I am running on a CPU. Has anyone else run into this tensorflow.python.framework.errors.ResourceExhaustedError with Google Inception?

Traceback (most recent call last):
  File "clean_dataset.py", line 33, in <module>
    description, score = inception.run_inference_on_image(f.read())
  File "/Volumes/EXPANSION/research/dcgan-transfer/data/classify_image.py", line 178, in run_inference_on_image
    node_lookup = NodeLookup()
  File "/Volumes/EXPANSION/research/dcgan-transfer/data/classify_image.py", line 83, in __init__
    self.node_lookup = self.load(label_lookup_path, uid_lookup_path)
  File "/Volumes/EXPANSION/research/dcgan-transfer/data/classify_image.py", line 112, in load
    proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 110, in readlines
    self._prereadline_check()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 72, in _prereadline_check
    compat.as_bytes(self.__name), 1024 * 512, status)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/errors.py", line 463, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.ResourceExhaustedError: /tmp/imagenet/imagenet_2012_challenge_label_map_proto.pbtxt


real 6m32.403s 
user 7m8.210s 
sys  1m36.114s 

https://github.com/tensorflow/models/tree/master/inception

Answer


The problem is that you can't simply import the original classify_image.py (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py) into your own code, especially when you put it inside a huge loop to classify thousands of images in 'batch mode'.

Look at the original code here:

with tf.Session() as sess:
    # Some useful tensors:
    # 'softmax:0': A tensor containing the normalized prediction across
    #   1000 labels.
    # 'pool_3:0': A tensor containing the next-to-last layer containing 2048
    #   float description of the image.
    # 'DecodeJpeg/contents:0': A tensor containing a string providing JPEG
    #   encoding of the image.
    # Runs the softmax tensor by feeding the image_data as input to the graph.
    softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})
    predictions = np.squeeze(predictions)

    # Creates node ID --> English string lookup.
    node_lookup = NodeLookup()

    top_k = predictions.argsort()[-FLAGS.num_top_predictions:][::-1]
    for node_id in top_k:
        human_string = node_lookup.id_to_string(node_id)
        score = predictions[node_id]
        print('%s (score = %.5f)' % (human_string, score))

As you can see above, every single classification task creates a new instance of the NodeLookup class, which loads its data from these files:

  • label_lookup = "imagenet_2012_challenge_label_map_proto.pbtxt"
  • uid_lookup_path = "imagenet_synset_to_human_label_map.txt"

This instance is really large, so inside your loop your code creates hundreds of instances of the class, and that is what leads to 'tensorflow.python.framework.errors.ResourceExhaustedError'.
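For context, the failing loop looks roughly like this, a minimal sketch reconstructed from the question's traceback (the 'images/*.jpg' folder is an assumption, and the (description, score) return value implies the asker's run_inference_on_image is lightly modified, since the stock function only prints its results):

# Sketch of the failing pattern from clean_dataset.py (paths hypothetical).
import glob
import classify_image as inception

for path in glob.glob('images/*.jpg'):
    with open(path, 'rb') as f:
        # Every call builds a fresh Session and a fresh NodeLookup, re-reading
        # the large label-map files from /tmp/imagenet each time, so memory
        # climbs until ResourceExhaustedError is raised.
        description, score = inception.run_inference_on_image(f.read())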

My suggestion is to write a new script that adapts those classes and functions from classify_image.py, and to avoid instantiating NodeLookup on every iteration: instantiate it once and reuse it inside the loop. Something like this:

with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
    print 'Making classifications:'

    # Creates node ID --> English string lookup (once, before the loop).
    node_lookup = NodeLookup(label_lookup_path=self.Model_Save_Path + self.label_lookup,
                             uid_lookup_path=self.Model_Save_Path + self.uid_lookup_path)

    current_counter = 1
    for (tensor_image, image) in self.tensor_files:
        print 'On ' + str(current_counter)
        current_counter += 1

        predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': tensor_image})
        predictions = np.squeeze(predictions)

        top_k = predictions.argsort()[-int(self.filter_level):][::-1]

        for node_id in top_k:
            human_string = node_lookup.id_to_string(node_id)
            score = predictions[node_id]
            print('%s (score = %.5f)' % (human_string, score))
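One more note beyond the answer's code: the same hoisting applies to the graph itself, so create_graph() and the tf.Session should also run exactly once, before the loop. Below is a minimal self-contained sketch of a batch driver built on the stock classify_image.py (the 'images/*.jpg' folder is an assumption, and depending on your copy of classify_image.py you may need to pass the label-map paths to NodeLookup explicitly, as in the answer's code above):

import glob
import numpy as np
import tensorflow as tf
import classify_image

classify_image.maybe_download_and_extract()  # download the model files once
classify_image.create_graph()                # build the Inception graph once

with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
    node_lookup = classify_image.NodeLookup()  # load the label maps once

    for path in glob.glob('images/*.jpg'):  # assumed image folder
        image_data = tf.gfile.FastGFile(path, 'rb').read()
        predictions = np.squeeze(
            sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data}))
        top_id = predictions.argmax()
        print('%s: %s (score = %.5f)'
              % (path, node_lookup.id_to_string(top_id), predictions[top_id]))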