
The network code below works, but it's too slow. This site suggests that with a learning rate of 0.2 the network should reach 99% accuracy after 100 epochs, yet mine never gets past 97% even after 1900 epochs. Why doesn't this simple neural network converge on XOR?

Epoch 0, Inputs [0 0], Outputs [-0.83054376], Targets [0] 
Epoch 100, Inputs [0 1], Outputs [ 0.72563824], Targets [1] 
Epoch 200, Inputs [1 0], Outputs [ 0.87570863], Targets [1] 
Epoch 300, Inputs [0 1], Outputs [ 0.90996706], Targets [1] 
Epoch 400, Inputs [1 1], Outputs [ 0.00204791], Targets [0] 
Epoch 500, Inputs [0 1], Outputs [ 0.93396672], Targets [1] 
Epoch 600, Inputs [0 0], Outputs [ 0.00006375], Targets [0] 
Epoch 700, Inputs [0 1], Outputs [ 0.94778227], Targets [1] 
Epoch 800, Inputs [1 1], Outputs [-0.00149935], Targets [0] 
Epoch 900, Inputs [0 0], Outputs [-0.00122716], Targets [0] 
Epoch 1000, Inputs [0 0], Outputs [ 0.00457281], Targets [0] 
Epoch 1100, Inputs [0 1], Outputs [ 0.95921556], Targets [1] 
Epoch 1200, Inputs [0 1], Outputs [ 0.96001748], Targets [1] 
Epoch 1300, Inputs [1 0], Outputs [ 0.96071742], Targets [1] 
Epoch 1400, Inputs [1 1], Outputs [ 0.00110912], Targets [0] 
Epoch 1500, Inputs [0 0], Outputs [-0.00], Targets [0] 
Epoch 1600, Inputs [1 0], Outputs [ 0.9640324], Targets [1] 
Epoch 1700, Inputs [1 0], Outputs [ 0.96431516], Targets [1] 
Epoch 1800, Inputs [0 1], Outputs [ 0.97004973], Targets [1] 
Epoch 1900, Inputs [1 0], Outputs [ 0.96616225], Targets [1] 

The dataset I'm using:

0 0 0 
1 0 1 
0 1 1 
1 1 1 

The training set is read using a function from a helper file, but that isn't relevant to the network.
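(For completeness, a minimal helper module consistent with the calls in the script below — helper.tanh, helper.dtanh, helper.readInput — might look like the following. This is a sketch of assumed behaviour, not the actual file; in particular, dtanh is assumed to take the already-activated output, matching how the network applies dactiv.)

    import numpy as np

    def tanh(x):
        return np.tanh(x)

    def dtanh(y):
        # derivative of tanh expressed via its output: d/dx tanh(x) = 1 - tanh(x)^2
        return 1.0 - y ** 2

    def readInput(file_name, input_size, output_size):
        # each row: input columns followed by target columns, e.g. "0 0 0"
        data = np.loadtxt(file_name)
        return (data[:, :input_size].astype(int),
                data[:, input_size:input_size + output_size].astype(int))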

import numpy as np 
import helper 

FILE_NAME = 'data.txt' 
EPOCHS = 2000 
TESTING_FREQ = 5 
LEARNING_RATE = 0.2 

INPUT_SIZE = 2 
HIDDEN_LAYERS = [5] 
OUTPUT_SIZE = 1 


class Classifier: 
    def __init__(self, layer_sizes): 
        np.set_printoptions(suppress=True) 

        self.activ = helper.tanh 
        self.dactiv = helper.dtanh 

        # one dict per layer, holding that layer's weight matrix and bias vector
        network = list() 
        for i in range(1, len(layer_sizes)): 
            layer = dict() 
            layer['weights'] = np.random.randn(layer_sizes[i], layer_sizes[i-1]) 
            layer['biases'] = np.random.randn(layer_sizes[i]) 
            network.append(layer) 

        self.network = network 

    def forward_propagate(self, x): 
        # tanh is applied to every layer, including the output layer
        for i in range(0, len(self.network)): 
            self.network[i]['outputs'] = self.network[i]['weights'].dot(x) + self.network[i]['biases'] 
            if i != len(self.network)-1: 
                self.network[i]['outputs'] = x = self.activ(self.network[i]['outputs']) 
            else: 
                self.network[i]['outputs'] = self.activ(self.network[i]['outputs']) 
        return self.network[-1]['outputs'] 

    def backpropagate_error(self, x, targets): 
        # note: this runs its own forward pass before computing the deltas
        self.forward_propagate(x) 
        self.network[-1]['deltas'] = (self.network[-1]['outputs'] - targets) * self.dactiv(self.network[-1]['outputs']) 
        for i in reversed(range(len(self.network)-1)): 
            self.network[i]['deltas'] = self.network[i+1]['deltas'].dot(self.network[i+1]['weights'] * self.dactiv(self.network[i]['outputs'])) 

    def adjust_weights(self, inputs, learning_rate): 
        self.network[0]['weights'] -= learning_rate * np.atleast_2d(self.network[0]['deltas']).T.dot(np.atleast_2d(inputs)) 
        self.network[0]['biases'] -= learning_rate * self.network[0]['deltas'] 
        for i in range(1, len(self.network)): 
            self.network[i]['weights'] -= learning_rate * np.atleast_2d(self.network[i]['deltas']).T.dot(np.atleast_2d(self.network[i-1]['outputs'])) 
            self.network[i]['biases'] -= learning_rate * self.network[i]['deltas'] 

    def train(self, inputs, targets, epochs, testfreq, lrate): 
        for epoch in range(epochs): 
            i = np.random.randint(0, len(inputs)) 
            if epoch % testfreq == 0: 
                predictions = self.forward_propagate(inputs[i]) 
                print('Epoch %s, Inputs %s, Outputs %s, Targets %s' % (epoch, inputs[i], predictions, targets[i])) 
            self.backpropagate_error(inputs[i], targets[i]) 
            self.adjust_weights(inputs[i], lrate) 


inputs, outputs = helper.readInput(FILE_NAME, INPUT_SIZE, OUTPUT_SIZE) 
print('Input data: {0}'.format(inputs)) 
print('Output targets: {0}\n'.format(outputs)) 
np.random.seed(1) 

nn = Classifier([INPUT_SIZE] + HIDDEN_LAYERS + [OUTPUT_SIZE]) 

nn.train(inputs, outputs, EPOCHS, TESTING_FREQ, LEARNING_RATE) 
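(For reference, the accuracy figures quoted above could be estimated with a small helper like the one below, appended to the script so np and nn are already defined. This is a hypothetical sketch that assumes accuracy is taken as one minus the mean absolute error over the dataset; the benchmark linked in the comments may measure it differently.)

    def estimate_accuracy(nn, inputs, targets):
        # assumed metric: 1 - mean absolute error across all examples
        errors = [np.abs(nn.forward_propagate(x) - t).mean() for x, t in zip(inputs, targets)]
        return 1.0 - float(np.mean(errors))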

Have you tried other learning rates? 0.2 may be too low, and it can become unstable. – eventHandler


@eventHandler I've updated the post. Based on that benchmark, it doesn't converge quickly or accurately enough: https://stackoverflow.com/questions/30688527/how-many-epochs-should-a-neural-net-need-to-learn-to-square-testing-results-in –

Answer


The main bug is that you are doing the forward pass only 20% of the time, i.e. only when epoch % testfreq == 0:

for epoch in range(epochs): 
    i = np.random.randint(0, len(inputs)) 
    if epoch % testfreq == 0: 
        predictions = self.forward_propagate(inputs[i]) 
        print('Epoch %s, Inputs %s, Outputs %s, Targets %s' % (epoch, inputs[i], predictions, targets[i])) 
    self.backpropagate_error(inputs[i], targets[i]) 
    self.adjust_weights(inputs[i], lrate) 

When I move predictions = self.forward_propagate(inputs[i]) out of the if, I get better results, faster:

Epoch 100, Inputs [0 1], Outputs [ 0.80317447], Targets 1 
Epoch 105, Inputs [1 1], Outputs [ 0.96340466], Targets 1 
Epoch 110, Inputs [1 1], Outputs [ 0.96057278], Targets 1 
Epoch 115, Inputs [1 0], Outputs [ 0.87960599], Targets 1 
Epoch 120, Inputs [1 1], Outputs [ 0.97725825], Targets 1 
Epoch 125, Inputs [1 0], Outputs [ 0.89433666], Targets 1 
Epoch 130, Inputs [0 0], Outputs [ 0.03539024], Targets 0 
Epoch 135, Inputs [0 1], Outputs [ 0.92888141], Targets 1 
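Concretely, the loop with the forward pass hoisted out of the if looks like this:

    for epoch in range(epochs):
        i = np.random.randint(0, len(inputs))
        # forward pass on every iteration, not only when printing
        predictions = self.forward_propagate(inputs[i])
        if epoch % testfreq == 0:
            print('Epoch %s, Inputs %s, Outputs %s, Targets %s' % (epoch, inputs[i], predictions, targets[i]))
        self.backpropagate_error(inputs[i], targets[i])
        self.adjust_weights(inputs[i], lrate)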

Also, note that the term epoch usually means a single run through all of your training data, in your case 4 examples. So, in effect, you are doing 4 times fewer epochs; see the sketch below.
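If epoch is to mean a full pass over the data, the training loop could be restructured along these lines (a sketch, not part of the original answer):

    def train(self, inputs, targets, epochs, testfreq, lrate):
        for epoch in range(epochs):
            # one epoch = one pass over all training examples, in random order
            for i in np.random.permutation(len(inputs)):
                self.backpropagate_error(inputs[i], targets[i])
                self.adjust_weights(inputs[i], lrate)
            if epoch % testfreq == 0:
                i = np.random.randint(0, len(inputs))
                print('Epoch %s, Inputs %s, Outputs %s, Targets %s' % (epoch, inputs[i], self.forward_propagate(inputs[i]), targets[i]))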

Update

I didn't pay attention to the details and, as a result, missed some subtle yet important notes:

  • the training data in the question represents OR, not XOR, so my results above are for learning the OR operation;
  • the backward pass also performs a forward pass (so it's not a bug, rather a surprising implementation detail).

Knowing this, I updated the data (corrected dataset shown below) and checked the script again. Training for 10000 iterations gives an average error of ~0.001, so the model does learn, just not as fast as it possibly could.
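The corrected dataset, following the standard XOR truth table, would be:

0 0 0 
0 1 1 
1 0 1 
1 1 0 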

A simple neural network (with no built-in normalization mechanism) is quite sensitive to particular hyperparameters, such as initialization and the learning rate. I tried various values manually, and here's what I got:

# slightly bigger learning rate 
LEARNING_RATE = 0.3 
... 
# slightly bigger init variation of weights 
layer['weights'] = np.random.randn(layer_sizes[i], layer_sizes[i-1]) * 2.0 

This gives the following performance:

... 
Epoch 960, Inputs [1 1], Outputs [ 0.01392014], Targets 0 
Epoch 970, Inputs [0 0], Outputs [ 0.04342895], Targets 0 
Epoch 980, Inputs [1 0], Outputs [ 0.96471654], Targets 1 
Epoch 990, Inputs [1 1], Outputs [ 0.00084511], Targets 0 
Epoch 1000, Inputs [0 0], Outputs [ 0.01585915], Targets 0 
Epoch 1010, Inputs [1 1], Outputs [-0.004097], Targets 0 
Epoch 1020, Inputs [1 1], Outputs [ 0.01898956], Targets 0 
Epoch 1030, Inputs [0 0], Outputs [ 0.01254217], Targets 0 
Epoch 1040, Inputs [1 1], Outputs [ 0.01429213], Targets 0 
Epoch 1050, Inputs [0 1], Outputs [ 0.98293925], Targets 1 
... 
Epoch 1920, Inputs [1 1], Outputs [-0.00043072], Targets 0 
Epoch 1930, Inputs [0 1], Outputs [ 0.98544288], Targets 1 
Epoch 1940, Inputs [1 0], Outputs [ 0.97682002], Targets 1 
Epoch 1950, Inputs [1 0], Outputs [ 0.97684186], Targets 1 
Epoch 1960, Inputs [0 0], Outputs [-0.00141565], Targets 0 
Epoch 1970, Inputs [0 0], Outputs [-0.00097559], Targets 0 
Epoch 1980, Inputs [0 1], Outputs [ 0.98548381], Targets 1 
Epoch 1990, Inputs [1 0], Outputs [ 0.97721286], Targets 1 

Average accuracy is close to 98.5% after 1000 iterations and 99.1% after 2000 iterations. That's a bit slower than promised, but good enough. I'm sure it could be tuned further, but that's not the goal of this toy exercise. After all, tanh is not the best activation function, and classification problems are better solved with cross-entropy loss (rather than L2 loss). So I wouldn't worry too much about this particular network's performance and would move on to logistic regression, which will definitely be better in terms of learning speed.
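To illustrate the last point: with a sigmoid output and binary cross-entropy loss, the sigmoid derivative cancels out of the output-layer gradient, which is one reason such a setup typically learns faster than tanh with L2 loss. A minimal sketch of that simplification (not part of the original answer):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # for binary cross-entropy L = -t*log(p) - (1-t)*log(1-p) with p = sigmoid(z),
    # the gradient w.r.t. the pre-activation z simplifies to p - t
    def output_delta(z, target):
        return sigmoid(z) - target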


I believe backpropagate() runs a forward pass - doesn't it? –


You are training OR, not XOR ('Inputs [1 1] ... Targets 1'). But admittedly, the OP describes OR logic in their dataset. Looking at the OP's output ('Inputs [1 1] ... Targets [0]'), they are training XOR, just as they say in the title. – swenzel


Thanks guys, my bad, I've updated the answer. – Maxim