2017-06-29

GridSearch in Keras + TensorFlow causes resource exhaustion

I know this error comes up often, and I understand what can cause it. For example, running the model below on 163 images of 150×150 gives me the error (although it is not clear to me why: even though I set batch_size, Keras still seems to try to allocate all of the images on the GPU at once):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential()
model.add(Conv2D(64, kernel_size=(6, 6), activation='relu', input_shape=input_shape, padding='same', name='b1_conv'))
model.add(MaxPooling2D(pool_size=(2, 2), name='b1_pool'))
model.add(Conv2D(128, kernel_size=(6, 6), activation='relu', padding='same', name='b2_conv'))
model.add(MaxPooling2D(pool_size=(2, 2), name='b2_pool'))
model.add(Conv2D(256, kernel_size=(6, 6), activation='relu', padding='same', name='b3_conv'))
model.add(MaxPooling2D(pool_size=(2, 2), name='b3_pool'))
model.add(Flatten())
model.add(Dense(500, activation='relu', name='fc1'))
model.add(Dropout(0.5))
model.add(Dense(500, activation='relu', name='fc2'))
model.add(Dropout(0.5))
model.add(Dense(n_targets, activation='softmax', name='prediction'))
model.compile(optimizer=optim, loss='categorical_crossentropy', metrics=['accuracy'])

Given that, I shrank the images to 30×30 (which caused an accuracy drop, as expected). However, running a grid search on this model still exhausts the resources.

from keras import optimizers
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

model = KerasClassifier(build_fn=create_model, verbose=0)

# grid over initial weights, batch size and optimizer
sgd = optimizers.SGD(lr=0.0005)
rms = optimizers.RMSprop(lr=0.0005)
adag = optimizers.Adagrad(lr=0.0005)
adad = optimizers.Adadelta(lr=0.0005)
adam = optimizers.Adam(lr=0.0005)
adamm = optimizers.Adamax(lr=0.0005)
nadam = optimizers.Nadam(lr=0.0005)

# note: not named `optimizers`, which would shadow the keras module above
optim = [sgd, rms, adag, adad, adam, adamm, nadam]
init = ['glorot_uniform', 'normal', 'uniform', 'he_normal']
batches = [32, 64, 128]
param_grid = dict(optim=optim, batch_size=batches, init=init)
grid = GridSearchCV(estimator=model, param_grid=param_grid)
grid_result = grid.fit(X_train, y_train)

# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
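For scale, it helps to count how many fits the grid above actually triggers: GridSearchCV trains one model per parameter combination per cross-validation fold (3 folds was the scikit-learn default at the time, an assumption here), and each fit builds a new TensorFlow graph. A quick back-of-the-envelope check, with plain names standing in for the optimizer objects since only the counts matter:

```python
from itertools import product

# Stand-ins for the seven optimizer instances in the snippet above.
optim = ["sgd", "rms", "adag", "adad", "adam", "adamm", "nadam"]
init = ["glorot_uniform", "normal", "uniform", "he_normal"]
batches = [32, 64, 128]

combos = list(product(optim, init, batches))
n_folds = 3                    # assumed GridSearchCV default cv
fits = len(combos) * n_folds

print(len(combos), fits)       # 84 combinations -> 252 separate fits
```

If nothing frees the graph between fits, 252 models' worth of GPU allocations accumulate, which alone can explain the exhaustion regardless of image size.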

I wonder whether it is possible to "clean" things up before each combination used by the grid search (not sure I am making myself clear — this is all new to me).

EDIT

Using fit_generator also gives me the same error:

import numpy as np

def generator(features, labels, batch_size):
    # create empty arrays to hold one batch of features and labels
    batch_features = np.zeros((batch_size, size, size, 1))
    batch_labels = np.zeros((batch_size, n_targets))
    while True:
        for i in range(batch_size):
            # choose a random index into the features
            index = np.random.randint(len(features))
            batch_features[i] = features[index]
            batch_labels[i] = labels[index]
        yield batch_features, batch_labels

sgd = optimizers.SGD(lr=0.0005)
rms = optimizers.RMSprop(lr=0.0005)
adag = optimizers.Adagrad(lr=0.0005)
adad = optimizers.Adadelta(lr=0.0005)
adam = optimizers.Adam(lr=0.0005)
adamm = optimizers.Adamax(lr=0.0005)
nadam = optimizers.Nadam(lr=0.0005)

optim_list = [rms, adag, adad, adam, adamm, nadam]
init_list = ['normal', 'uniform', 'he_normal']

combinations = [(o, i) for o in optim_list for i in init_list]
for optim, init in combinations:
    model = create_model(init=init, optim=optim)
    model.fit_generator(generator(X_train, y_train, batch_size=32),
                        steps_per_epoch=X_train.shape[0] // 32,
                        epochs=100, verbose=0, validation_data=(X_test, y_test))
    scores = model.model.evaluate(X_test, y_test, verbose=0)
    print("%s: %.2f%% Model %s %s" % (model.model.metrics_names[1], scores[1] * 100, optim, init))

Answers

2

You should work with generators + yield; they discard the data they have already used from memory. Have a look at my answer to a similar question.

+0

How would I use that within GridSearch? – pceccon

+0

GridSearch does not seem to take generators, but you can emulate a grid search with a for loop and save the models. Cross-validation even shuffles your dataset after each loop. A quick example would be something like this: https://gist.github.com/lfcj/c02980dbf8c390cd470e840b460a418f –

+0

This solution still gives me a ResourceExhaustedError after running some of the searches. – pceccon

1

If you are using Keras with the TensorFlow backend (imported as K), clear the TensorFlow session after each training/evaluation run:

K.clear_session() 

+0

Already doing that, and the error changed to `ValueError: Tensor("Variable_12:0", shape=(64,), dtype=float32_ref) must be from the same graph as Tensor("rho_4/read:0", shape=(), dtype=float32).` – pceccon

+0

Did I miss the right way to use it? I put K.clear_session() after the last line of the loop. – pceccon

+0

@pceccon I think the reason is that you are referring to the same optimizer objects throughout the grid search. Keep in mind that an optimizer is just part of the symbolic graph, so you have to recreate the optimizers the same way you recreate the model. –
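In other words, the seven optimizer instances above are created once, outside the loop, so after K.clear_session() each new model points at optimizer tensors from a dead graph. A hedged sketch of the fix, with a dummy class standing in for keras.optimizers so the pattern itself is runnable: store constructors (factories) instead of instances, and call them inside the loop so every model gets a brand-new optimizer.

```python
class DummyOptimizer:
    """Stand-in for keras.optimizers.Adam, SGD, etc. — only the pattern matters."""
    def __init__(self, name, lr):
        self.name, self.lr = name, lr

# Factories delay construction, so each loop iteration builds its own instance
# (in real code: lambda: optimizers.Adam(lr=0.0005), and so on).
optim_factories = [
    lambda: DummyOptimizer("adam", 0.0005),
    lambda: DummyOptimizer("sgd", 0.0005),
]
inits = ["normal", "he_normal"]

created = []
for make_optim in optim_factories:
    for init in inits:
        optim = make_optim()      # fresh optimizer, bound to the current graph
        created.append(optim)
        # model = create_model(init=init, optim=optim)   # as in the question
        # ... fit / evaluate / K.clear_session() ...

# every iteration got a distinct optimizer object, not a shared one:
assert len({id(o) for o in created}) == len(created)
```

With instances created inside the loop, the "must be from the same graph" ValueError goes away, because optimizer variables and model variables now always live in the same (current) graph.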