
I have a pretrained file named (bestmodel.hdf5) that was created with the Keras library (Python) and Theano. The model was trained with the following code. How can I use the pretrained weights (hdf5) file to predict the class of new EEG data?

# imports assumed by this snippet (Keras with the legacy Keras 1 keyword
# arguments, Theano backend); tr_X, tr_y, va_X, va_y are the training and
# validation EEG arrays, loaded elsewhere
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, GaussianNoise, BatchNormalization
from keras.regularizers import l1_l2
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras.optimizers import Adam

# set parameters
batch_size = 1280 
nb_epoch = 3000 #6000 
l1_decay=0.00 
l2_decay=0 # .5 
# 0.01 0.06 
sigma=0.005 
in_drop_rate = .2 
drop_rate = .5 

print (tr_X.shape[1]) 
# set network layout 
model = Sequential() 
model.add(Dense(2184, input_shape=(tr_X.shape[1],) 
       , init='he_normal', W_regularizer=l1_l2(l1=l1_decay, l2=l2_decay))) 
model.add(GaussianNoise(sigma)) 
model.add(Activation('relu')) 
model.add(BatchNormalization()) 
model.add(Dropout(in_drop_rate)) 


model.add(Dense(1310, init='he_normal', W_regularizer=l1_l2(l1=l1_decay, l2=l2_decay))) 
model.add(GaussianNoise(sigma)) 
model.add(Activation('relu')) 
model.add(BatchNormalization()) 
model.add(Dropout(drop_rate)) 

model.add(Dense(786, init='he_normal', W_regularizer=l1_l2(l1=l1_decay, l2=l2_decay))) 
model.add(GaussianNoise(sigma)) 
model.add(Activation('relu')) 
model.add(BatchNormalization()) 
model.add(Dropout(drop_rate)) 

model.add(Dense(472, init='he_normal', W_regularizer=l1_l2(l1=l1_decay, l2=l2_decay))) 
model.add(GaussianNoise(sigma)) 
model.add(Activation('relu')) 
model.add(BatchNormalization()) 
model.add(Dropout(drop_rate)) 


model.add(Dense(4, W_regularizer=l1_l2(l1=l1_decay, l2=l2_decay))) 
model.add(Activation('softmax')) 

# Callbacks 
model_checkpoint = ModelCheckpoint('best_model.hdf5', monitor='val_loss', save_best_only=True) 
early = EarlyStopping(monitor='val_loss', patience=600, verbose=0) 

# fit and evaluate the model 
model.compile(loss='categorical_crossentropy', 
       optimizer=Adam(lr=0.001))#SGD(lr=0.0019, momentum=0.9, decay=0.0, nesterov=True)) 
history = model.fit(tr_X, tr_y, batch_size=batch_size, 
        nb_epoch=nb_epoch, verbose=0, callbacks=[early, model_checkpoint], 
        validation_data=(va_X, va_y)) 
model.load_weights('best_model.hdf5') 
tr_pr = model.predict(tr_X, batch_size=batch_size, verbose=0) 
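
Side note: since ModelCheckpoint above is used with its default save_weights_only=False, best_model.hdf5 holds the full model, not just the weights. A minimal sketch of reloading it for prediction (new_X is a hypothetical array of new samples with 2184 features):

from keras.models import load_model

model = load_model('best_model.hdf5')  # restores the architecture and the saved weights
te_pr = model.predict(new_X, batch_size=1280, verbose=0)  # new_X: shape (n_samples, 2184)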

However, when testing on real data (from an experiment), I have a different input dimension (for example: instead of 2184 features, I have 552).

So, to read the HDF5 weights file and use it to predict the class of the data, I wrote:

# imports assumed by this snippet; X holds the new 552-feature EEG samples,
# loaded elsewhere
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, GaussianNoise, BatchNormalization
from keras import regularizers

# set parameters
batch_size = 4 
l1_decay=0.00 
l2_decay=0 # .5 
# 0.01 0.06 
sigma=0.005 
in_drop_rate = .2 
drop_rate = .5 

# set network layout 
model = Sequential() 
model.add(Dense(552, input_shape=(552,) 
       , init='he_normal', W_regularizer=regularizers.l1_l2(l1=l1_decay, l2=l2_decay))) 
model.add(GaussianNoise(sigma)) 
model.add(Activation('relu')) 
model.add(BatchNormalization()) 
model.add(Dropout(in_drop_rate)) 


model.add(Dense(331, init='he_normal', W_regularizer=regularizers.l1_l2(l1=l1_decay, l2=l2_decay))) 
model.add(GaussianNoise(sigma)) 
model.add(Activation('relu')) 
model.add(BatchNormalization()) 
model.add(Dropout(drop_rate)) 

model.add(Dense(189, init='he_normal', W_regularizer=regularizers.l1_l2(l1=l1_decay, l2=l2_decay))) 
model.add(GaussianNoise(sigma)) 
model.add(Activation('relu')) 
model.add(BatchNormalization()) 
model.add(Dropout(drop_rate)) 

model.add(Dense(119, init='he_normal', W_regularizer=regularizers.l1_l2(l1=l1_decay, l2=l2_decay))) 
model.add(GaussianNoise(sigma)) 
model.add(Activation('relu')) 
model.add(BatchNormalization()) 
model.add(Dropout(drop_rate)) 

model.add(Dense(4, W_regularizer=regularizers.l1_l2(l1=l1_decay, l2=l2_decay))) 
model.add(Activation('softmax')) 


model.load_weights('best_model.hdf5') 
te_pr = model.predict(X, batch_size=batch_size, verbose=0) 

When I run the code, I get the following warnings and exception:

C:\Users\M\Desktop\Dr Abeer Folder\Emotion Project_code and dataset\End User\Experiment_Calculation.py:106: UserWarning: Update your `Dense` call to the Keras 2 API: `Dense(119, kernel_regularizer=<keras.reg..., kernel_initializer="he_normal")`
  model.add(Dense(119, init='he_normal', W_regularizer=regularizers.l1_l2(l1=l1_decay, l2=l2_decay)))
C:\Users\M\Desktop\Dr Abeer Folder\Emotion Project_code and dataset\End User\Experiment_Calculation.py:112: UserWarning: Update your `Dense` call to the Keras 2 API: `Dense(4, kernel_regularizer=<keras.reg...)`
  model.add(Dense(4, W_regularizer=regularizers.l1_l2(l1=l1_decay, l2=l2_decay)))

Traceback (most recent call last):
  File "C:\Users\M\Desktop\Dr Abeer Folder\Emotion Project_code and dataset\End User\main2.py", line 88, in BrowseFileHandler
    expcal.calclate_Experiment()
  File "C:\Users\M\Desktop\Dr Abeer Folder\Emotion Project_code and dataset\End User\Experiment_Calculation.py", line 66, in calclate_Experiment
    predictions = DNN(X)
  File "C:\Users\M\Desktop\Dr Abeer Folder\Emotion Project_code and dataset\End User\Experiment_Calculation.py", line 117, in DNN
    te_pr = model.predict(X, batch_size=batch_size, verbose=0)
  File "C:\Users\M\AppData\Roaming\Python\Python27\site-packages\keras\models.py", line 902, in predict
    return self.model.predict(x, batch_size=batch_size, verbose=verbose)
  File "C:\Users\M\AppData\Roaming\Python\Python27\site-packages\keras\engine\training.py", line 1585, in predict
    batch_size=batch_size, verbose=verbose)
  File "C:\Users\M\AppData\Roaming\Python\Python27\site-packages\keras\engine\training.py", line 1212, in _predict_loop
    batch_outs = f(ins_batch)
  File "C:\Users\M\AppData\Roaming\Python\Python27\site-packages\keras\backend\theano_backend.py", line 1158, in __call__
    return self.function(*inputs)
  File "C:\Users\M\AppData\Roaming\Python\Python27\site-packages\theano\compile\function_module.py", line 898, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "C:\Users\M\AppData\Roaming\Python\Python27\site-packages\theano\gof\link.py", line 325, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "C:\Users\M\AppData\Roaming\Python\Python27\site-packages\theano\compile\function_module.py", line 884, in __call__
    self.fn() if output_subset is None else\
ValueError: dimension mismatch in args to gemm (4,552)x(2184,2184)->(4,2184)
Apply node that caused the error: GpuDot22(GpuFromHost.0, dense_1/kernel)
Toposort index: 28
Inputs types: [CudaNdarrayType(float32, matrix), CudaNdarrayType(float32, matrix)]
Inputs shapes: [(4, 552), (2184, 2184)]
Inputs strides: [(552, 1), (2184, 1)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[GpuElemwise{Add}[(0, 0)](GpuDot22.0, GpuDimShuffle{x,0}.0), GpuElemwise{Composite{(i0 + i1 + (i2 * i3))}}[(0, 3)](GpuDot22.0, GpuDimShuffle{x,0}.0, CudaNdarrayConstant{[[ 0.005]]}, GpuReshape{2}.0)]]

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
Can anyone please help? I am new to this area, especially to using Keras and Theano. How can I fix this? Is there a way to change the model used for prediction?

Best regards,

Answer


This is quite straightforward.

You trained a model whose first layer is a 2184x2184 weight matrix. So the weights you saved were learned for 2184-dimensional inputs, and they only fit inputs of the size you trained on.

If I understand correctly, you want to apply that matrix to 552-length inputs: you are building a model whose first layer is a 552x552 matrix and trying to load a 2184x2184 matrix into it. There is no way to do that; it will not work. Your input must be exactly the same size the model was trained on, and you cannot change a trained model.
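
If you want to verify this yourself, here is a minimal sketch (assuming h5py is installed and the checkpoint path below matches yours) that lists every weight tensor stored in the HDF5 file, so you can see that the first Dense kernel really has shape (2184, 2184):

import h5py

def print_weight_shapes(path='best_model.hdf5'):
    # walk the HDF5 file and print the name and shape of every stored dataset
    with h5py.File(path, 'r') as f:
        def visit(name, obj):
            if isinstance(obj, h5py.Dataset):
                print('%s %s' % (name, obj.shape))
        f.visititems(visit)

print_weight_shapes()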

I hope you see why it does not work :-) If not, please ask for clarification.


Thanks @NassimBen for the answer. I got it, but I wanted to do the opposite (the model is trained on 2184 x 2184, while the test input for prediction is 552-dimensional). After I enlarged the input to the size the model expects, I got an answer. So now I am trying to make the training and test data the same size, so I do not run into this problem. – sakurami
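
For reference, a minimal sketch of one way to enlarge the input as described above (an assumption about how the resizing was done, here simple zero-padding of each 552-feature sample out to the 2184 features the trained network expects):

import numpy as np

def pad_features(X, target_dim=2184):
    # zero-pad each row of X (shape (n_samples, n_features)) up to target_dim columns
    n_samples, n_features = X.shape
    padded = np.zeros((n_samples, target_dim), dtype=X.dtype)
    padded[:, :n_features] = X
    return padded

# usage: X_padded = pad_features(X)   # (4, 552) -> (4, 2184)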
