
While trying to train it on the AND pattern, I get a TypeError when I run backpropagation on my neural network. (Python 3.23, running back propagation, type error on a list.)

Just to be clear, I'm not asking anyone to read through or review my code..

I'm just including a lot of it because I'm not really sure what's causing this error.

I've included a bunch of print statements in the backProp function, since I've been testing it.

All of my source files are posted at: my github

Here is what shows up at the command line:

$python main.py 
enter a filename: params.dat 
max_iterations: 100, error_threshhold: 0.001000, netError: 1.001000, n_iterations: 0 
eval of while loop: True 
1backProp iteration = 0, netError = 1.001000 
2backProp iteration = 0, netError = 1.001000, inputsForWeightChangeLoop: 
[0, 1] 
3backProp iteration = 0, netError = 1.001000, inputsForWeightChangeLoop: 
[0, 1] 
4backProp iteration = 0, netError = 1.001000, inputsForWeightChangeLoop: 
[] 
5backProp, oldInputsWeightChange: 
[0, 1] 
6backProp, inputNN.layers[j].neurons[k]: 
<neuralNet.neuron object at 0x7f405e97be90> 
8backProp, y(stuff): 
0.7615941559557649 
9backProp, y(stuff): 
0.7615941559557649 
Traceback (most recent call last): 
    File "main.py", line 51, in <module> 
    if __name__ == "__main__": main() 
    File "main.py", line 41, in main 
    backProp(inputNeuralNet, dStruct['input'], dStruct['target'], dStruct['max_iterations'], dStruct['error_threshhold'], dStruct['rateOfLearning']) 
    File "/home/nab/Documents/cpat_project-master/propagate.py", line 66, in backProp 
    inputsForWeightChangeLoop.append(float(y(oldInputsWeightChange, inputNN.layers[j].neurons[k]))) 
TypeError: 'list' object is not callable 

Basically, it throws a TypeError at the point where I'm trying to collect the outputs of one layer so that, during the next iteration of my weight-change loop, I can compute the error values for the neurons and then change the weights.

My question, basically, is how to compute the outputs of these neurons without getting this TypeError.
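For readers landing here with the same message: TypeError: 'list' object is not callable simply means that the name being called with parentheses is, at that moment, bound to a list rather than a function. A minimal, self-contained illustration (the names here are made up for the example):

def f(x):          # f starts out bound to a function
    return x + 1

f = [1, 2, 3]      # the same name is later rebound to a list
f(0)               # TypeError: 'list' object is not callable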

Here is the code for the back propagation:

""" 
backProp takes a neural network (inputNN), a set of input training values (input), 
a number of maximum allowed iterations (max_iterations), and a threshold for the 
calculated error values, this last value is used as a way to tell when the network 
has been sufficiently trained. back propagation is an algorithm for training a 
neural network. 
""" 

def backProp(inputNN, input, targets, max_iterations, error_threshhold, learningRate): 
    n_iterations = 0 # counter for the number of propagation loops 
    netError = float(error_threshhold + 1.0) 
    print('max_iterations: %d, error_threshhold: %f, netError: %f, n_iterations: %d' % (max_iterations, error_threshhold, netError, n_iterations)) 
    print('eval of while loop: %s' % (n_iterations < max_iterations and netError > error_threshhold)) 
    while ((n_iterations < max_iterations) and (netError > error_threshhold)): 
     print('1backProp iteration = %d, netError = %f' % (n_iterations, netError)) 
     for i in input: 
      y = inputNN.update(i) # present the pattern to the network 
      outputLayerError = errorGradientOutputLayer(sum(y), targets[n_iterations]) #calc the error signal, assumes that output layer has only 1 node. 
      newWeights = [] # to collect new weights for updating the neurons 
      inputsForWeightChangeLoop = i # this is actually to collect outputs for computing the weight change in hidden layers, which are then used as inputs 
      print('2backProp iteration = %d, netError = %f, inputsForWeightChangeLoop:' % (n_iterations, netError)) 
      print(inputsForWeightChangeLoop) 
      counter = 0 # used for a condition to compute the error value in the hidden layer above the output layer. 
      layersFromOut = list(range(0, inputNN.n_hiddenLayers + 1)) # this is in order to get the reverse of a list to do a backwards propagation, + 1 for input layer 
      layersFromOut.reverse() # reverses the list 
      error2DArray = [] # this collects error values for use in the change of the weights 
      for j in layersFromOut: # for every layer, starting with the hidden layer closest to output. 
       for k in range(0, inputNN.layers[j].n_neurons): # for every neuron in the layer 
        if counter != 0: # if the neuron isn't in the hidden layer above the output 
         error2DArray.append(errorGradientHiddenLayer(k, j, inputNN, error2DArray[j + 1])) # compute the error gradient for the neuron 
        else: 
         error2DArray.append(errorGradientHiddenLayer(k, j, inputNN, [outputLayerError])) # '' same but for the hidden layer above the output layer 
       counter += 1 
      for j in range(0, inputNN.n_hiddenLayers + 2): # for every layer, + 2 in range for output and input layers. 
       for k in range(0, inputNN.layers[j].n_neurons): # for every neuron in the layer 
        newWeights = [] 
        for h in range(0, inputNN.layers[j].neurons[k].n_inputs): #for every weight in the neuron 
#params for deltaWeight -- deltaWeight(float oldWeight, float learningRate, list[float] inputsToNeuron, list[float] errorValues, float derivitiveOfActivationFn) 
         newWeights.append(deltaWeight(inputNN.layers[j].neurons[k].l_weights[h], learningRate, inputsForWeightChangeLoop[h], error2DArray[j], derivActivation(inputsForWeightChangeLoop, inputNN.layers[j].neurons[k]))) # get the change in weight 
        inputNN.layers[j].neurons[k].putWeights(newWeights) #update the weights 
       print('3backProp iteration = %d, netError = %f, inputsForWeightChangeLoop:' % (n_iterations, netError)) 
       print(inputsForWeightChangeLoop) 
       oldInputsWeightChange = inputsForWeightChangeLoop # this is used to calculate the new inputs for the change in weight 
       inputsForWeightChangeLoop = [] # clear it to re-populate 
       for k in range(0, inputNN.layers[j].n_neurons): # for every neuron in the layer 
        print('4backProp iteration = %d, netError = %f, inputsForWeightChangeLoop:' % (n_iterations, netError)) 
        print(inputsForWeightChangeLoop) 
        print('5backProp, oldInputsWeightChange:') 
        print(oldInputsWeightChange) 
        print('6backProp, inputNN.layers[j].neurons[k]:') 
        print(inputNN.layers[j].neurons[k]) 
        print('8backProp, y(stuff):') 
        print(float(math.e**activation(oldInputsWeightChange, inputNN.layers[j].neurons[k]) - math.e**((-1) * activation(oldInputsWeightChange, inputNN.layers[j].neurons[k])))/float(math.e**activation(oldInputsWeightChange, inputNN.layers[j].neurons[k]) + math.e**((-1) * activation(oldInputsWeightChange, inputNN.layers[j].neurons[k])))) 
        print('9backProp, y(stuff):') 
        print(sigmoid(activation(oldInputsWeightChange, inputNN.layers[j].neurons[k]))) 
        #print('7backProp, y(stuff):') 
        #print(y(oldInputsWeightChange, inputNN.layers[j].neurons[k])) 
        inputsForWeightChangeLoop.append(float(y(oldInputsWeightChange, inputNN.layers[j].neurons[k]))) 
        #inputsForWeightChangeLoop.append(y(oldInputsWeightChange, inputNN.layers[j].neurons[k])) # calculate the new inputs 
      n_iterations += 1 
      errorVal = 0# sum unit for the net error 
      for j in range(0, len(input)): # for every pattern in the training set 
       for k in range(0, len(inputNN.layers[-1].n_neurons)): # for every output to the net 
        errorVal += errorSignal(targets[k], y[k]) 
      netError = .5 * errorVal #calc the error fn for the net? 
      print('5backProp iteration = %d, netError = %f' % (n_iterations, netError)) 
     # 
    print('propagate finished with %d iterations and %f net error' % (n_iterations, netError)) 
    return 

Here is my function y, which is admittedly a convoluted way of representing the output of a node:

""" 
y takes a set of patterns or inputs (p), and a neuron (n) and returns the 
output for the specified node in the neural net. [keep in mind that the 
input of some neuron is really in terms of the layer above it.] 
""" 
def y(p, n): 
    if (len(p) != n.n_inputs): # if the node has a different number of inputs than specified in params, throw error. 
     raise ValueError('wrong number of inputs: y(p, n) in propagate.') 
    return sigmoid(activation(p, n)) 
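As a quick sanity check, y just composes activation and sigmoid. A hypothetical snippet (assuming it is run from the project directory, using the neuron class from neuralNet.py further down):

from neuralNet import neuron
from propagate import y

n = neuron(2)         # 2 inputs -> 3 weights; the extra weight is the threshold
print(y([0, 1], n))   # sigmoid(activation([0, 1], n)); a tanh value in (-1, 1)
y([0, 1, 1], n)       # raises ValueError: wrong number of inputs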

And here is my sigmoid:

""" 
sigmoid takes an activation value (activation) and calculates the sigmoid 
function on the activation value. [here I use the tanh function] 
""" 
def sigmoid(activation): 
    return float(math.e**activation - math.e**((-1) * activation))/float(math.e**activation + math.e**((-1) * activation)) 
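As the docstring says, this is really the tanh function rather than the logistic sigmoid. For what it's worth, Python's math module already provides it; a sketch of an equivalent (and numerically safer) version:

import math

def sigmoid(activation):
    # same value as the expression above, but math.e**activation overflows
    # for activation around 710 and beyond, while math.tanh saturates to +/-1.0
    return math.tanh(activation)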

And finally, my activation:

""" 
activation takes a neuron (n) and a set of patterns or inputs (p) and returns 
the activation value of the neuron on that input pattern. 
""" 
def activation(p, n): 
    activationValue = 0 
    for i in range(0, len(p)): 
     activationValue += p[i] * n.l_weights[i] 
    activationValue += (-1) * n.l_weights[-1] # threshhold? 
    return activationValue 
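A small hand-check, with made-up weights, makes the threshold convention explicit: the last element of l_weights is subtracted outright instead of being multiplied by an input:

class _Stub:                      # hypothetical stand-in for a neuron
    l_weights = [0.5, -0.3, 0.1]  # two input weights plus one threshold weight

# 1*0.5 + 1*(-0.3) - 0.1 == 0.1
assert abs(activation([1, 1], _Stub()) - 0.1) < 1e-9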

I really don't know how much of my code is actually necessary, so I'll go ahead and include the entire neural net module below..

""" 
    neuralNet.py 
    4/21/13, 5:30p 

""" 

import sys 
import random 
import math 
import propagate 


class neuron(): 
    n_inputs = 0 
    l_weights = [] 

    def __init__(self, numberOfInputs): 
     self.l_weights = [] 
     self.n_inputs = numberOfInputs 
     for i in range(0,(numberOfInputs + 1)): #for each input + threshhold 
      self.l_weights.append(random.randint(-1,1)) 

    # 
    def putWeights(self, weights): 
     for i in range(0, len(weights)): 
      self.l_weights[i] = weights[i] 

class neuralNetLayer(): 
    n_neurons = 0 
    neurons = [] 

    def __init__(self, numNeurons, numInputsPerNeuron): 
     self.neurons = [] 
     self.n_neurons = numNeurons 
     for i in range(0, numNeurons): 
      #print('neuralNetLayer -> length of self.neurons: %d' % len(self.neurons)) 
      #print("neural net layer makes a neuron -> %d" % i) 
      self.neurons.append(neuron(numInputsPerNeuron)) 

    def getWeights(self): 
     weights = [] 
     for i in range(0, self.n_neurons): 
      i_weights = [] 
      for j in range(0, len(self.neurons[i].l_weights)): 
       i_weights.append(self.neurons[i].l_weights[j]) 
      weights.append(i_weights) 
     return weights 

class neuralNet(): 
    n_inputs = 0 
    n_outputs = 0 
    n_hiddenLayers = 0 
    n_neuronsPerHiddenLyr = 0 
    layers = [] 

    def __init__(self, numInputs, numOutputs, numHidden, numNeuronsPerHidden): 
     self.layers = [] 
     self.n_inputs = numInputs 
     self.n_outputs = numOutputs 
     self.n_hiddenLayers = numHidden 
     self.n_neuronsPerHiddenLyr = numNeuronsPerHidden 
     #print('making input layer with %d neurons and %d inputs to the neurons' % (numInputs, numInputs)) 
     self.layers.append(neuralNetLayer(numInputs, numInputs))# make input layer 
     for i in range(0, self.n_hiddenLayers): 
      #print('making hidden layer with %d neurons and %d inputs to the neurons' % (numNeuronsPerHidden, numNeuronsPerHidden)) 
      self.layers.append(neuralNetLayer(numNeuronsPerHidden, numNeuronsPerHidden))# make hidden layers 
     if numHidden > 0: # if you have hidden neurons, output will connect to them 
      #print('making output layer with %d neurons and %d inputs to the neurons' % (numOutputs, numNeuronsPerHidden)) 
      self.layers.append(neuralNetLayer(numOutputs, numNeuronsPerHidden)) 
     else: 
      #print('making output layer with %d neurons and %d inputs to the neurons' % (numOutputs, numInputs)) 
      self.layers.append(neuralNetLayer(numOutputs, numInputs))# make output layer connect to input layer 

    #returns a list of the weights in the net 
    def getWeights(self): 
     weights = [] 
     for i in range(0, self.n_hiddenLayers + 1): #+ 1 because output layer 
      for j in range(0, self.layers[i].n_neurons + 1): 
       for k in range(0, self.layers[i].neurons[j].n_inputs + 1): 
        weights.append(self.layers[i].neurons[j].l_weights[k]) 
     return weights 

    #replaces the weights in the net with the given values 
    def putWeights(self, weights): 
     counter = 0 
     for i in range(0, self.n_hiddenLayers + 1): 
      for j in range(0, self.layers[i].n_neurons + 1): 
       self.layers[i].neurons[j].putweights(weights[i][j]) 

    #returns the number of weights in the net 
    def getNumWeights(self): 
     num = 0 
     for i in range(0, self.n_hiddenLayers + 1): 
      for j in range(0, self.layers[i].n_neurons): 
       for k in range(self.layers[i].neurons[j].n_inputs + 1): 
        num += 1 
     return num 

    # given some inputs, returns the output of the net 
    def update(self, inputs): 
     if (len(inputs) != self.n_inputs): 
      raise ValueError('wrong number of inputs: update() in neuralNet.') 
     for i in range(0, self.n_hiddenLayers + 1): # I need to do this for every hidden layer + input layer. 
      outputs = [] 
      for j in range(0, self.layers[i].n_neurons): 
       if i != 0:# if current layer is not input layer 
        outputs.append(propagate.y(outputPriorLayer, self.layers[i].neurons[j])) 
       else: 
        outputs.append(propagate.y(inputs, self.layers[i].neurons[j])) 
      outputPriorLayer = outputs 
     return outputs[0:len(self.layers[-1].neurons)] 
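For reference, a minimal usage sketch (the concrete numbers here are assumptions; the real values come from params.dat via main.py): a net with 2 inputs, 1 output, and one hidden layer of 2 neurons, evaluated on a single AND pattern:

from neuralNet import neuralNet

net = neuralNet(2, 1, 1, 2)   # numInputs, numOutputs, numHidden, numNeuronsPerHidden
print(net.update([0, 1]))     # a list with one float: the net's output for this pattern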

Answer


You define another y variable in the method:

y = inputNN.update(i) # present the pattern to the network 

I haven't looked at the source that closely, but it seems this variable is only set some of the time, which is why parts of your code run fine. You have to pick another name that doesn't conflict with your y function.
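Concretely: the assignment near the top of the training loop rebinds the name y, shadowing the module-level function, so by the time the call on line 66 of propagate.py runs, y is a list. A sketch of the rename (netOutputs is just a suggested name, not anything from the original source); note that the subscripted use y[k] near the bottom of the loop refers to this same list and needs the new name too:

for i in input:
    netOutputs = inputNN.update(i)   # was: y = inputNN.update(i)
    outputLayerError = errorGradientOutputLayer(sum(netOutputs), targets[n_iterations])
    ...
    # line 66 now resolves y to the function again:
    inputsForWeightChangeLoop.append(float(y(oldInputsWeightChange, inputNN.layers[j].neurons[k])))
    ...
    errorVal += errorSignal(targets[k], netOutputs[k])   # was: y[k]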


Oh my god, that's absolutely it, thank you so much. I'll look into it.. – Yorgi 2013-05-05 07:54:58


Changed the variable name, and that was it. Thanks again for your help! – Yorgi 2013-05-05 11:26:48