
I am using Keras for a CNN, but the problem is that there seems to be a memory leak. The error I get when running it is a MemoryError:

    [email protected]:~/12EC35005/MTP_Workspace/MTP$ python cnn_implement.py
    Using Theano backend.
    [INFO] compiling model...
    Traceback (most recent call last):
      File "cnn_implement.py", line 23, in <module>
        model = CNNModel.build(width=150, height=150, depth=3)
      File "/home/ms/anushreej/12EC35005/MTP_Workspace/MTP/cnn/networks/model_define.py", line 27, in build
        model.add(Dense(depth*height*width))
      File "/home/ms/anushreej/anaconda3/lib/python3.5/site-packages/keras/models.py", line 146, in add
        output_tensor = layer(self.outputs[0])
      File "/home/ms/anushreej/anaconda3/lib/python3.5/site-packages/keras/engine/topology.py", line 458, in __call__
        self.build(input_shapes[0])
      File "/home/ms/anushreej/anaconda3/lib/python3.5/site-packages/keras/layers/core.py", line 604, in build
        name='{}_W'.format(self.name))
      File "/home/ms/anushreej/anaconda3/lib/python3.5/site-packages/keras/initializations.py", line 61, in glorot_uniform
        return uniform(shape, s, name=name)
      File "/home/ms/anushreej/anaconda3/lib/python3.5/site-packages/keras/initializations.py", line 32, in uniform
        return K.variable(np.random.uniform(low=-scale, high=scale, size=shape),
      File "mtrand.pyx", line 1255, in mtrand.RandomState.uniform (numpy/random/mtrand/mtrand.c:13575)
      File "mtrand.pyx", line 220, in mtrand.cont2_array_sc (numpy/random/mtrand/mtrand.c:2902)
    MemoryError

Now I cannot understand why this is happening. My training images are very small, only 150 × 150 × 3.

The code:

    # import the necessary packages
    from keras.models import Sequential
    from keras.layers.convolutional import Convolution2D
    from keras.layers.core import Activation
    from keras.layers.core import Flatten
    from keras.layers.core import Dense

    class CNNModel:
        @staticmethod
        def build(width, height, depth):
            # initialize the model
            model = Sequential()

            # first set of CONV => RELU
            model.add(Convolution2D(50, 5, 5, border_mode="same", batch_input_shape=(None, depth, height, width)))
            model.add(Activation("relu"))

            # second set of CONV => RELU
            # model.add(Convolution2D(50, 5, 5, border_mode="same"))
            # model.add(Activation("relu"))

            # third set of CONV => RELU
            # model.add(Convolution2D(50, 5, 5, border_mode="same"))
            # model.add(Activation("relu"))

            model.add(Flatten())

            model.add(Dense(depth*height*width))

            # if weightsPath is not None:
            #     model.load_weights(weightsPath)

            return model

How do you know there is a memory leak, and not some other problem? –

Answer


I faced the same problem, and I think the issue is simply that the number of data points just before the Flatten layer is more than your system can handle (I tried the same model on a different machine with more RAM and it worked, while with less RAM it gave this error). Just add more convolution and pooling layers to reduce the size before adding the Flatten layer.
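To put some numbers on it (a back-of-the-envelope sketch of the question's model, not part of my original runs, assuming Keras 1.x with Theano dimension ordering as the traceback shows): the Flatten output feeds Dense(depth*height*width), and the weight matrix of that single layer is far bigger than typical RAM, which is exactly the array np.random.uniform fails to allocate in the traceback.

    # Rough size of the Dense weight matrix in the question's model.
    # Assumes Theano dimension ordering, so the conv output shape is (50, 150, 150).
    conv_filters = 50
    width, height, depth = 150, 150, 3

    flatten_size = conv_filters * height * width  # 50 * 150 * 150 = 1,125,000
    dense_units = depth * height * width          # Dense(depth*height*width) = 67,500
    n_weights = flatten_size * dense_units        # ~7.6e10 parameters

    # ~283 GiB at 4 bytes per weight, allocated in one go by the initializer
    print(n_weights * 4 / 1024**3, "GiB")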

This model gave me an error:

    model = Sequential()
    model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(1, 96, 96), activation='relu'))
    model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(Flatten())
    model.add(Dense(1000, activation='relu'))
    model.add(Dense(97, activation='softmax'))

This one did not give an error:

    model = Sequential()
    model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(1, 96, 96), activation='relu'))
    model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(Convolution2D(64, 3, 3, border_mode='same', activation='relu'))
    model.add(Convolution2D(128, 3, 3, border_mode='same', activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(Flatten())
    model.add(Dense(1000, activation='relu'))
    model.add(Dense(97, activation='softmax'))
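The extra convolution + pooling block halves the spatial size once more before Flatten, so the Dense(1000) weight matrix that follows is about half as large (a rough sketch of the arithmetic, assuming 'same' padding and 4-byte weights):

    # Flatten sizes for the two models above (input 1 x 96 x 96, 'same' padding).
    first_flatten = 64 * 48 * 48     # one 2x2 pooling step  -> 147,456 values
    second_flatten = 128 * 24 * 24   # two 2x2 pooling steps ->  73,728 values

    # Weight matrix of the following Dense(1000), at 4 bytes per weight.
    print(first_flatten * 1000 * 4 / 1024**2, "MiB")    # ~562 MiB
    print(second_flatten * 1000 * 4 / 1024**2, "MiB")   # ~281 MiB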

Hope it helps.
