Deep learning with Keras on a small dataset: local minima

For my thesis I am running a 4-layer deep network for a sequence-to-sequence use case: 150 x Conv(64, 5) x GRU(100) x softmax activation on the final layer, with loss='categorical_crossentropy'.
Training loss and accuracy converge to their optimum fairly quickly, whereas validation loss and accuracy seem stuck in the val_acc 97 to 98.2 range and cannot get beyond it.
Is my model overfitting?
Tried a dropout of 0.2 between the layers.
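For reference, a minimal sketch of the stack described above with Dropout(0.2) inserted between layers. The per-timestep input dimension and the number of output classes are not stated in the question, so `num_features` and `num_classes` below are placeholders:

```python
# Sketch of the described 150 x Conv(64,5) x GRU(100) x softmax stack,
# with Dropout(0.2) between layers. num_features and num_classes are
# placeholders -- the actual values depend on the dataset.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dropout, GRU, Dense

num_features = 128   # placeholder: input dimension per timestep
num_classes = 10     # placeholder: number of output classes

model = Sequential([
    Conv1D(64, 5, activation='relu', input_shape=(150, num_features)),
    Dropout(0.2),
    GRU(100),                     # returns only the final state
    Dropout(0.2),
    Dense(num_classes, activation='softmax'),
])
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
print(model.output_shape)
```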
Output after dropout:
Epoch 85/250
[==============================] - 3s - loss: 0.0057 - acc: 0.9996 - val_loss: 0.2249 - val_acc: 0.9774
Epoch 86/250
[==============================] - 3s - loss: 0.0043 - acc: 0.9987 - val_loss: 0.2063 - val_acc: 0.9774
Epoch 87/250
[==============================] - 3s - loss: 0.0039 - acc: 0.9987 - val_loss: 0.2180 - val_acc: 0.9809
Epoch 88/250
[==============================] - 3s - loss: 0.0075 - acc: 0.9978 - val_loss: 0.2272 - val_acc: 0.9774
Epoch 89/250
[==============================] - 3s - loss: 0.0078 - acc: 0.9974 - val_loss: 0.2265 - val_acc: 0.9774
Epoch 90/250
[==============================] - 3s - loss: 0.0027 - acc: 0.9996 - val_loss: 0.2212 - val_acc: 0.9809
Epoch 91/250
[==============================] - 3s - loss: 3.2185e-04 - acc: 1.0000 - val_loss: 0.2190 - val_acc: 0.9809
Epoch 92/250
[==============================] - 3s - loss: 0.0020 - acc: 0.9991 - val_loss: 0.2239 - val_acc: 0.9792
Epoch 93/250
[==============================] - 3s - loss: 0.0047 - acc: 0.9987 - val_loss: 0.2163 - val_acc: 0.9809
Epoch 94/250
[==============================] - 3s - loss: 2.1863e-04 - acc: 1.0000 - val_loss: 0.2190 - val_acc: 0.9809
Epoch 95/250
[==============================] - 3s - loss: 0.0011 - acc: 0.9996 - val_loss: 0.2190 - val_acc: 0.9809
Epoch 96/250
[==============================] - 3s - loss: 0.0040 - acc: 0.9987 - val_loss: 0.2289 - val_acc: 0.9792
Epoch 97/250
[==============================] - 3s - loss: 2.9621e-04 - acc: 1.0000 - val_loss: 0.2360 - val_acc: 0.9792
Epoch 98/250
[==============================] - 3s - loss: 4.3776e-04 - acc: 1.0000 - val_loss: 0.2437 - val_acc: 0.9774
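One way to read these logs: training loss keeps collapsing toward zero while validation loss stays flat around 0.22, which is the usual signature of overfitting. A quick illustration using the loss values logged above (epochs 85 to 98):

```python
# Loss values copied from the training log above (epochs 85-98).
train_loss = [0.0057, 0.0043, 0.0039, 0.0075, 0.0078, 0.0027,
              3.2185e-04, 0.0020, 0.0047, 2.1863e-04, 0.0011,
              0.0040, 2.9621e-04, 4.3776e-04]
val_loss = [0.2249, 0.2063, 0.2180, 0.2272, 0.2265, 0.2212,
            0.2190, 0.2239, 0.2163, 0.2190, 0.2190, 0.2289,
            0.2360, 0.2437]

# Generalization gap per epoch: validation loss minus training loss.
gap = [v - t for t, v in zip(train_loss, val_loss)]

print(f"mean train loss: {sum(train_loss) / len(train_loss):.4f}")
print(f"mean val loss:   {sum(val_loss) / len(val_loss):.4f}")
print(f"min gap:         {min(gap):.4f}")
```

The gap never drops below ~0.20 over these 14 epochs while training accuracy touches 1.0000, so the model is memorizing the training set rather than still improving on the validation set.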
Spent time checking different permutations of the model. Your observation that the network is using only part of its learning capacity is correct. Tried various percentage reductions of model capacity to verify. At lower capacity, training acc and validation accuracy move in step. Your second observation, that the validation set contains distinctive patterns, also holds; it took time to manually verify both datasets. – Ajay