I am trying to run a simple feed-forward neural network, but my training and test accuracy appear to stay the same across all epochs. The loss and accuracy of my training do not change. Here is my code:
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import plot_model
from IPython import embed
from keras import optimizers
from keras import backend as K
import keras
import numpy as np
import glob
import pre_process_data as dataProc
def network():
    path = '/home/RD*'
    files = glob.glob(path)

    model = Sequential()
    model.add(Dense(2048, input_shape=(10030,), name="dense_one"))
    model.add(Dense(2048, activation='softmax', name="dense_two"))
    model.add(Dense(4, activation='tanh', name="dense_three"))
    #model.add(Dense(4, activation='relu', name="dense_four"))

    for l in model.layers:
        print(l.name, l.input_shape, "=======>", l.output_shape)
    print(model.summary())

    model.compile(loss='categorical_crossentropy',
                  optimizer="adam",
                  metrics=['accuracy'])

    # Reads data from text files
    pre_proc_data = dataProc.OnlyBestuData()
    data = pre_proc_data[:, 0:-1]
    labels = pre_proc_data[:, -1]
    labels = np.random.randint(0, 4, 32)  # Generate random labels
    one_hot_labels = keras.utils.to_categorical(labels, num_classes=4)

    model.fit(x=data, y=one_hot_labels, epochs=10, batch_size=2, verbose=2)
    #embed()

def main():
    with K.get_session():
        network()

main()
My output is pasted below. I want to understand how neural networks work, so I wrote a very simple feed-forward network. I tried changing the optimizer from "adam" to "SGD" with a learning rate of 0.01, yet the network still gives me constant loss and accuracy. Since this is a small network, the entire input is 32x10030, where each row is a set of joint positions.
Can you tell me what might be going wrong here?
Output:
Epoch 1/10
5s - loss: 9.6919 - acc: 0.4375
Epoch 2/10
5s - loss: 9.5701 - acc: 0.4688
Epoch 3/10
5s - loss: 9.5701 - acc: 0.4688
Epoch 4/10
5s - loss: 9.5701 - acc: 0.4688
Epoch 5/10
5s - loss: 9.5701 - acc: 0.4688
Epoch 6/10
5s - loss: 9.5701 - acc: 0.4688
Epoch 7/10
5s - loss: 9.5701 - acc: 0.4688
Epoch 8/10
5s - loss: 9.5701 - acc: 0.4688
Epoch 9/10
5s - loss: 9.5701 - acc: 0.4688
Epoch 10/10
5s - loss: 9.5701 - acc: 0.4688
Why do you generate your labels randomly? If it is only for testing, why not stick with a common dataset to get a baseline? – petezurich
Yes, that makes more sense. I will try this on a common baseline. I just wanted to make sure the skeleton of my code is correct. – deeplearning
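As an aside on the random labels: with only 32 samples drawn uniformly from 4 classes, a network that collapses to always predicting the single most frequent class can already report accuracy well above the 0.25 chance level, which is consistent with the constant 0.4688 in the log above. A minimal numpy sketch (the seed is an arbitrary illustrative choice, not from the original code):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, for illustration only
labels = rng.integers(0, 4, size=32)  # 32 random labels from 4 classes

# Accuracy of a degenerate predictor that always outputs the majority class.
counts = np.bincount(labels, minlength=4)
majority_acc = counts.max() / labels.size

# In expectation each class holds ~25% of the samples, but with only 32
# draws the majority class is often noticeably larger, so a "stuck"
# network can still show accuracy well above chance.
print(majority_acc)
```

Because the labels carry no signal about the inputs, no amount of training can push accuracy past this majority-class ceiling, which is one reason the metrics freeze.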
I also noticed another thing: your last layer should use a softmax activation. ReLU makes no sense there, since it is the "decision" layer. – petezurich