I have been working on neural networks for a while and wrote an implementation in Python with NumPy. I built a very simple example with XOR and it worked well, so I thought I would take it further and try the MNIST database.
Here is my problem. I am using a network with 784 input, 30 hidden, and 10 output neurons. The hidden layer's activation function only ever spits out ones, which makes the network essentially stop learning. The math I am doing is correct, the same implementation works fine with the XOR example, and I am reading MNIST correctly, so I do not see where the problem is.
import pickle
import gzip
import numpy as np
def load_data():
    f = gzip.open('mnist.pkl.gz', 'rb')
    training_data, validation_data, test_data = pickle.load(f, encoding="latin1")
    f.close()
    return (training_data, validation_data, test_data)

def transform_output(num):
    # one-hot encode a digit label as a length-10 vector
    arr = np.zeros(10)
    arr[num] = 1.0
    return arr

def out2(arr):
    # index of the largest output, i.e. the predicted digit
    return arr.argmax()
data = load_data()
training_data = data[0]
training_input = np.array(training_data[0])
training_output = [transform_output(y) for y in training_data[1]]
batch_size = 10
batch_count = int(np.ceil(len(training_input)/batch_size))
input_batches = np.array_split(training_input, batch_count)
output_batches = np.array_split(training_output, batch_count)
#Sigmoid Function
def sigmoid(x):
    return 1.0/(1.0 + np.exp(-x))

#Derivative of Sigmoid Function (x is assumed to already be a sigmoid output)
def derivatives_sigmoid(x):
    return x * (1.0 - x)
#Variable initialization
epoch=1 #Setting training iterations
lr=2.0 #Setting learning rate
inputlayer_neurons = len(training_input[0]) #number of features in data set
hiddenlayer_neurons = 30 #number of hidden layer neurons
output_neurons = len(training_output[0]) #number of neurons at output layer
#weight and bias initialization
wh=np.random.uniform(size=(inputlayer_neurons,hiddenlayer_neurons))
bh=np.random.uniform(size=(1,hiddenlayer_neurons))
wout=np.random.uniform(size=(hiddenlayer_neurons,output_neurons))
bout=np.random.uniform(size=(1,output_neurons))
for i in range(epoch):
    for batch in range(batch_count):
        X = input_batches[batch]
        y = output_batches[batch]
        zh1 = np.dot(X, wh)
        zh = zh1 + bh
        # data -> hidden neurons -> activations
        ah = sigmoid(zh)
        zo1 = np.dot(ah, wout)
        zo = zo1 + bout
        output = sigmoid(zo)
        # data -> output neurons -> error
        E = y - output
        print("debugging")
        print("X")
        print(X)
        print("WH")
        print(wh)
        print("zh1")
        print(zh1)
        print("bh")
        print(bh)
        print("zh")
        print(zh)
        print("ah")
        print(ah)
        print("wout")
        print(wout)
        print("zo1")
        print(zo1)
        print("bout")
        print(bout)
        print("zo")
        print(zo)
        print("out")
        print(output)
        print("y")
        print(y)
        print("error")
        print(E)
        # data -> output neurons -> slope
        slope_out = derivatives_sigmoid(output)
        # data -> output neurons -> change of error
        d_out = E * slope_out
        # data -> hidden neurons -> error = (change of error at output) DOT (hidden -> output weights, transposed)
        error_hidden = d_out.dot(wout.T)
        # data -> hidden neurons -> slope
        slope_h = derivatives_sigmoid(ah)
        # data -> hidden neurons -> change of error
        d_hidden = error_hidden * slope_h
        # hidden -> output weights += (hidden activations).T DOT (change of error at output) * lr
        wout = wout + ah.T.dot(d_out) * lr
        bout = bout + np.sum(d_out, axis=0, keepdims=True) * lr
        wh = wh + X.T.dot(d_hidden) * lr
        bh = bh + np.sum(d_hidden, axis=0, keepdims=True) * lr
# testing results
X = np.array(data[1][0][0:10])
zh1 = np.dot(X, wh)
zh = zh1 + bh
# data -> hidden neurons -> activations
ah = sigmoid(zh)
zo1 = np.dot(ah, wout)
zo = zo1 + bout
output = sigmoid(zo)
print([out2(y) for y in output])
print(data[1][1][0:10])
So the network's output ends up being the same for every input, and changing the batch size, changing the learning rate, or running 100 epochs does not help it train.
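As an illustration of the symptom, here is a minimal sketch (not part of my actual code, and assuming the pickled MNIST pixels are floats in [0, 1]) of why sigmoid hidden units can end up at 1: with 784 non-negative inputs and weights drawn from uniform(0, 1), the pre-activation zh is a sum of hundreds of non-negative terms, so the sigmoid is pushed fully into its flat region.

x = np.random.uniform(size=784)            # stand-in for a single image with pixels in [0, 1]
w = np.random.uniform(size=(784, 30))      # same uniform(0, 1) initialization as wh above
zh_demo = x.dot(w)
print(zh_demo.min())                       # on the order of 200, far beyond where sigmoid saturates
print(sigmoid(zh_demo).min())              # prints 1.0: every hidden activation is numerically 1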
Thanks for the quick answer. I have been using the following tutorial: http://neuralnetworksanddeeplearning.com/chap1.html. He uses the same network as I do, with 10 output neurons and sigmoid activations, and reaches around 95% accuracy after the first epoch. I will go through his code and compare it with mine; the devil is in the details. But yes, as soon as I have this working I will definitely move on to softmax, dropout, ReLUs, and so on. – Johannes
I am actually somewhat skeptical that the described network can reach 95% accuracy. His later examples use logistic regression (which I suggested) and then a convolutional neural network (far more likely to show such a result), and that may be the model actually being applied. But if you can prove me wrong and reach 90% accuracy, please let me know; I would love to try this network myself. – Maxim
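A small hypothetical check (reusing the variable names from the code in the question, not anything from the tutorial) to measure accuracy on the full validation split data[1] rather than on only the first 10 examples:

val_X = np.array(data[1][0])
val_y = np.array(data[1][1])
val_ah = sigmoid(np.dot(val_X, wh) + bh)
val_out = sigmoid(np.dot(val_ah, wout) + bout)
print("validation accuracy:", np.mean(val_out.argmax(axis=1) == val_y))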
I ran his code and repeatedly got about 90% after the first epoch. The differences I have found so far are that he divides the learning rate by the batch size and shuffles the batches, which I do not do. But that does not solve my problem: I am getting nowhere, and all the neuron activations in the hidden and output layers are 1. – Johannes
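A rough sketch (my reading of those two differences, using the same variable names as in the question, not a claimed fix) of what shuffling the batches and dividing the learning rate by the batch size would look like:

perm = np.random.permutation(len(training_input))   # reshuffle the examples each epoch
input_batches = np.array_split(training_input[perm], batch_count)
output_batches = np.array_split(np.array(training_output)[perm], batch_count)
# ...same forward/backward pass as in the question, but with updates scaled by lr / batch_size, e.g.:
# wout = wout + ah.T.dot(d_out) * (lr / batch_size)
# bout = bout + np.sum(d_out, axis=0, keepdims=True) * (lr / batch_size)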