Keras model to predict sequence numbers

I am trying to train a Keras LSTM model to predict the next number in a sequence.
- What is wrong with my model below, and how do I debug it when a model fails to learn?
- How do I decide which layer types to use?
- Which loss and optimizer should I choose for compile, and on what basis?
My input training data has shape (16000, 10), as below:
[
[14955 14956 14957 14958 14959 14960 14961 14962 14963 14964]
[14731 14732 14733 14734 14735 14736 14737 14738 14739 14740]
[35821 35822 35823 35824 35825 35826 35827 35828 35829 35830]
[12379 12380 12381 12382 12383 12384 12385 12386 12387 12388]
...
]
The corresponding output training data has shape (16000, 1), as below:
[[14965] [14741] [35831] [12389] ...]
Since the LSTM complained about the input, I reshaped the training/test data:
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)
Here are the final training/test data shapes:
Total Samples: 20000
X_train: (16000, 10, 1)
y_train: (16000, 1)
X_test: (4000, 10, 1)
y_test: (4000, 1)
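For reference, here is a minimal sketch of how data with these shapes could be generated; the make_windows helper and the 0–50000 value range are just illustrative assumptions based on the samples shown above (each row is 10 consecutive integers and the target is the 11th):

import numpy as np

# Illustrative only: build windows shaped like the data above.
def make_windows(starts, window=10):
    X = np.array([np.arange(s, s + window) for s in starts])
    y = np.array([[s + window] for s in starts])
    return X, y

starts = np.random.randint(0, 50000, size=20000)  # assumed value range
X, y = make_windows(starts)
X = X.reshape(X.shape[0], X.shape[1], 1)          # (20000, 10, 1) for the LSTM
print(X.shape, y.shape)                           # (20000, 10, 1) (20000, 1)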
Here is my model:
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Model configuration
epochs = 2
batch_size = 32
hidden_neurons = 100
output_size = 1
# Create the model
model = Sequential()
model.add(LSTM(hidden_neurons, input_shape=(X_train.shape[1], X_train.shape[2])))
model.add(Dense(output_size))
model.compile(loss='mean_squared_error', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size)
scores = model.evaluate(X_test, y_test, batch_size=batch_size, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
Here is my output:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_3 (LSTM) (None, 100) 40800
_________________________________________________________________
dense_3 (Dense) (None, 1) 101
=================================================================
Total params: 40,901
Trainable params: 40,901
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/2
16000/16000 [==============================] - 11s - loss: 533418575.3600 - acc: 0.0000e+00
Epoch 2/2
16000/16000 [==============================] - 10s - loss: 532474289.7280 - acc: 6.2500e-05
Model Accuracy: 0.00%
Have you tried more than 2 epochs? –
Yes, I have tried even 10 epochs, but the loss does not drop much and the accuracy stays at 0. – Mosu
This looks like a regression problem, in which case accuracy is meaningless. –
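To illustrate that last comment, here is a minimal regression-style sketch: scale the raw integers and track mean absolute error instead of accuracy. The 50000.0 scale factor and the 'mae' metric are assumptions for illustration, not part of the original code:

from keras.models import Sequential
from keras.layers import LSTM, Dense

# Scale the raw integers so the network sees values near [0, 1];
# 50000.0 is an assumed upper bound for this data.
scale = 50000.0
X_train_s, y_train_s = X_train / scale, y_train / scale
X_test_s, y_test_s = X_test / scale, y_test / scale

model = Sequential()
model.add(LSTM(100, input_shape=(X_train_s.shape[1], X_train_s.shape[2])))
model.add(Dense(1))
# Regression: report mean absolute error rather than accuracy.
model.compile(loss='mean_squared_error', optimizer='rmsprop', metrics=['mae'])
model.fit(X_train_s, y_train_s, epochs=10, batch_size=32)
loss, mae = model.evaluate(X_test_s, y_test_s, verbose=0)
print("Test MAE in original units: %.2f" % (mae * scale))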