Python gradient descent - cost keeps increasing

I am trying to implement gradient descent in Python, and my loss/cost keeps increasing with every iteration.
I have seen several people post about this; there is one answer here: gradient descent using python and numpy
I believe my implementation is similar, but I cannot see what I am doing wrong that produces such an exploding cost value:
Iteration: 1 | Cost: 697361.660000
Iteration: 2 | Cost: 42325117406694536.000000
Iteration: 3 | Cost: 2582619233752172973298548736.000000
Iteration: 4 | Cost: 157587870187822131053636619678439702528.000000
Iteration: 5 | Cost: 9615794890267613993157742129590663647488278265856.000000
I am testing this on a dataset I found online (LA Heart Data): http://www.umass.edu/statdata/statdata/stat-corr.html
Import code:
import numpy as np

dataset = np.genfromtxt('heart.csv', delimiter=",")
x = dataset[:]
x = np.insert(x,0,1,axis=1) # Add 1's for bias
y = dataset[:,6]
y = np.reshape(y, (y.shape[0],1))
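For reference, a quick shape and range check (a minimal sketch, assuming x and y were built exactly as above) rules out a dimension mismatch:

# Sanity-check sketch (assumes x and y come from the snippet above).
# With the bias column inserted, x should be (m, n+1) and y should be (m, 1).
print "x shape: %s | y shape: %s" % (x.shape, y.shape)

# Raw feature ranges can differ by orders of magnitude, which matters for a fixed alpha.
print "feature mins: %s" % (x.min(axis=0),)
print "feature maxs: %s" % (x.max(axis=0),)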
Gradient descent:
def gradientDescent(weights, X, Y, iterations = 1000, alpha = 0.01):
    theta = weights
    m = Y.shape[0]
    cost_history = []

    for i in xrange(iterations):
        residuals, cost = calculateCost(theta, X, Y)
        gradient = (float(1)/m) * np.dot(residuals.T, X).T

        theta = theta - (alpha * gradient)

        # Store the cost for this iteration
        cost_history.append(cost)
        print "Iteration: %d | Cost: %f" % (i+1, cost)
Calculating the cost:
def calculateCost(weights, X, Y):
    m = Y.shape[0]
    residuals = h(weights, X) - Y
    squared_error = np.dot(residuals.T, residuals)

    return residuals, float(1)/(2*m) * squared_error
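The returned cost should agree with the plain squared-error formula J(theta) = 1/(2m) * sum((X theta - Y)^2); a small cross-check sketch (assuming theta, X, Y and m are in scope, e.g. inside gradientDescent) would be:

# Cross-check sketch: both numbers should be identical.
residuals, cost = calculateCost(theta, X, Y)
cost_direct = np.sum((np.dot(X, theta) - Y) ** 2) / (2.0 * m)
print "vectorised cost: %f | direct cost: %f" % (float(cost), float(cost_direct))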
Calculating the hypothesis:
def h(weights, X):
    return np.dot(X, weights)
To actually run it:
gradientDescent(np.ones((x.shape[1],1)), x, y, 5)
My best guess is that it is some trivial sign issue, since it looks as though it is stepping in the wrong direction.
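For what it's worth, rerunning the same call with standardised features and a smaller alpha (sketch below, assuming x and y from above; the bias column is left untouched) would show whether the blow-up is a step-size/scaling effect rather than a sign error:

# Debugging sketch, not part of the implementation above: standardise the
# non-bias columns and shrink the step size, then watch whether the cost still grows.
x_scaled = x.copy()
x_scaled[:, 1:] = (x[:, 1:] - x[:, 1:].mean(axis=0)) / x[:, 1:].std(axis=0)
gradientDescent(np.ones((x_scaled.shape[1], 1)), x_scaled, y, 5, alpha = 0.001)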