I have been trying hard to learn artificial intelligence, but I have the following questions about this neural network Python code:
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the sigmoid function
def sigmoid_prime(x):
    return sigmoid(x) * (1 - sigmoid(x))

x = np.array([0.1, 0.3])
y = 0.2
weights = np.array([-0.8, 0.5])

# The learning rate, eta in the weight step equation
learnrate = 0.5

# The neural network output
nn_output = sigmoid(x[0]*weights[0] + x[1]*weights[1])
# or nn_output = sigmoid(np.dot(x, weights))

# output error
error = y - nn_output

# error gradient
error_grad = error * sigmoid_prime(np.dot(x, weights))

# Gradient descent step
del_w = [learnrate * error_grad * x[0],
         learnrate * error_grad * x[1]]
# or del_w = learnrate * error_grad * x
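As a quick sanity check (a minimal sketch, assuming NumPy), the vectorized alternatives mentioned in the comments produce the same values as the elementwise versions above:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    return sigmoid(x) * (1 - sigmoid(x))

x = np.array([0.1, 0.3])
y = 0.2
weights = np.array([-0.8, 0.5])
learnrate = 0.5

# Elementwise version
nn_output = sigmoid(x[0]*weights[0] + x[1]*weights[1])
error = y - nn_output
error_grad = error * sigmoid_prime(np.dot(x, weights))
del_w = [learnrate * error_grad * x[0],
         learnrate * error_grad * x[1]]

# Vectorized versions from the comments
nn_output_vec = sigmoid(np.dot(x, weights))
del_w_vec = learnrate * error_grad * x

print(np.isclose(nn_output, nn_output_vec))   # the two outputs agree
print(np.allclose(del_w, del_w_vec))          # the two weight steps agree
```

`np.dot(x, weights)` computes the same weighted sum `x[0]*weights[0] + x[1]*weights[1]`, and multiplying the scalar `learnrate * error_grad` by the array `x` broadcasts over both components at once.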
My questions:

1. Why do we multiply the weights only with x, and not with y?

   nn_output = sigmoid(x[0]*weights[0] + x[1]*weights[1])

2. Why do we increase by the value of x, like x[0] and x[1], when computing the gradient descent step?

   del_w = [learnrate * error_grad * x[0], learnrate * error_grad * x[1]]
You could take a look at this and compare it with your code: https://seat.massey.ac.nz/personal/srmarsland/Code/Ch3/pcn.py – ppasler

What do you mean by "increase" x? – Amadan