I'm trying to understand the gradient step in a Python implementation of multivariate gradient descent, and I've found several implementations like this one in NumPy:
import numpy as np

# m denotes the number of examples here, not the number of features
def gradientDescent(x, y, theta, alpha, m, numIterations):
    xTrans = x.transpose()
    for i in range(0, numIterations):
        hypothesis = np.dot(x, theta)
        loss = hypothesis - y
        cost = np.sum(loss ** 2) / (2 * m)
        print("Iteration %d | Cost: %f" % (i, cost))
        # avg gradient per example
        gradient = np.dot(xTrans, loss) / m
        # update
        theta = theta - alpha * gradient
    return theta
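For context, here is a minimal, self-contained run of the function above on made-up toy data (a noiseless line y = 4 + 3*x1, with a bias column of ones prepended; the print statement is omitted to keep the output short). The data, learning rate, and iteration count are all assumptions for illustration:

```python
import numpy as np

# Same function as in the question, minus the per-iteration print.
def gradientDescent(x, y, theta, alpha, m, numIterations):
    xTrans = x.transpose()
    for i in range(numIterations):
        hypothesis = np.dot(x, theta)
        loss = hypothesis - y
        gradient = np.dot(xTrans, loss) / m  # avg gradient per example
        theta = theta - alpha * gradient     # update
    return theta

# Hypothetical toy data: y = 4 + 3*x1, bias column of ones in column 0.
x = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([4.0, 7.0, 10.0, 13.0])
m = y.size
theta = np.zeros(2)

theta = gradientDescent(x, y, theta, alpha=0.1, m=m, numIterations=5000)
print(theta)  # converges toward the true parameters [4, 3]
```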
From the definition of gradient descent, the update rule is:

theta_j := theta_j - alpha * (1/m) * sum_{i=1}^{m} (h_theta(x^(i)) - y^(i)) * x_j^(i)
However, in the NumPy code it is computed as: np.dot(xTrans, loss)/m
Can someone please explain how this NumPy expression is derived?
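For anyone comparing the two forms, here is a small sketch (with made-up numbers) checking that the per-feature summation over examples and the vectorized np.dot(xTrans, loss)/m produce the same gradient vector:

```python
import numpy as np

# Hypothetical small example: 3 training examples, 2 features.
x = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
theta = np.array([0.5, -0.5])
y = np.array([1.0, 2.0, 3.0])
m = x.shape[0]

loss = np.dot(x, theta) - y  # h_theta(x^(i)) - y^(i) for every i at once

# Per-feature gradient written as an explicit sum over examples,
# following the summation in the update rule:
grad_sum = np.array([
    sum(loss[i] * x[i, j] for i in range(m)) / m
    for j in range(x.shape[1])
])

# Vectorized form from the question:
grad_vec = np.dot(x.transpose(), loss) / m

print(np.allclose(grad_sum, grad_vec))  # the two forms agree
```

Each row of x.transpose() holds one feature's values across all examples, so its dot product with loss is exactly the sum over i of loss[i] * x[i, j] for that feature j.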