I want to use stochastic gradient descent in R to build my own regression function, but what I have right now lets the weights grow without bound during training, so it never stops:
# Logistic regression
# Takes a matrix of training examples, an output vector, a learn rate scalar, and a convergence delta limit scalar
my_logr <- function(training_examples, training_outputs, learn_rate, conv_lim) {
    # Initialize gradient vector
    gradient <- as.vector(rep(0, NCOL(training_examples)))
    # Difference between successive weight vectors
    del_weights <- as.matrix(1)
    # Weights
    weights <- as.matrix(runif(NCOL(training_examples)))
    weights_old <- as.matrix(rep(0, NCOL(training_examples)))

    # Compute gradient until the change in the weights drops below conv_lim
    while (norm(del_weights) > conv_lim) {
        for (k in 1:NROW(training_examples)) {
            gradient <- gradient + 1 / NROW(training_examples) *
                (t(training_outputs[k] * training_examples[k, ] /
                    (1 + exp(training_outputs[k] * t(weights) %*% as.numeric(training_examples[k, ])))))
        }
        # Update weights
        weights <- weights_old - learn_rate * gradient
        del_weights <- as.matrix(weights_old - weights)
        weights_old <- weights
        print(weights)
    }
    return(weights)
}
The function can be tested with the following code:
data(iris) # Iris data already present in R
# Dataset for part a (first 50 vs. last 100)
iris_a <- iris
iris_a$Species <- as.integer(iris_a$Species)
# Convert list to binary class
for (i in 1:NROW(iris_a$Species)) {if (iris_a$Species[i] != "1") {iris_a$Species[i] <- -1}}
random_sample <- sample(1:NROW(iris),50)
weights_a <- my_logr(iris_a[random_sample,1:4],iris_a$Species[random_sample],1,.1)
I double-checked my implementation against Abu-Mostafa's algorithm, which is as follows:
- Initialize the weight vector
- For each epoch, compute the gradient and update the weights (see the R sketch after this list):
  gradient <- -1/N * sum_{1 to N} (training_answer_n * training_vector_n / (1 + exp(training_answer_n * dot(weight, training_vector_n))))
  weight_new <- weight - learn_rate * gradient
- Repeat until the change in the weights is sufficiently small
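A minimal sketch of one such epoch in R, matching the pseudocode above; the function name one_epoch and the assumption that X is a numeric matrix of examples, y a vector of +1/-1 labels, and w the current weight vector are mine, not part of the original post:

one_epoch <- function(X, y, w, learn_rate) {
    N <- nrow(X)
    gradient <- rep(0, ncol(X))
    for (n in 1:N) {
        x_n <- X[n, ]                     # n-th training vector
        s <- sum(w * x_n)                 # dot(weight, training_vector_n)
        # accumulate -1/N * y_n * x_n / (1 + exp(y_n * <w, x_n>))
        gradient <- gradient - y[n] * x_n / (N * (1 + exp(y[n] * s)))
    }
    w - learn_rate * gradient             # weight_new <- weight - learn_rate * gradient
}

For example, w <- one_epoch(as.matrix(iris_a[random_sample, 1:4]), iris_a$Species[random_sample], runif(4), 1) would perform a single update on the sample drawn above.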
Am I missing something here?
Am I missing a normalization term for the weights? Perhaps this is a question for Cross Validated? – 2013-03-18 13:50:48
From a mathematical standpoint, an unconstrained magnitude of the weight vector never gives a unique solution. When I added these two lines to the classifier function, it converged in two steps: 'weights <- weights_old - learn_rate*gradient' followed by 'weights <- weights/norm(weights)'. – 2013-03-18 13:58:24
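To show the placement, here is a minimal sketch of that suggestion inside the while loop of my_logr above; putting the normalization immediately after the weight update is my assumption, not confirmed by the commenter:

        # Update weights, then rescale so the magnitude of the weight vector stays bounded
        weights <- weights_old - learn_rate * gradient
        weights <- weights / norm(weights)    # normalization step suggested in the comment above
        del_weights <- as.matrix(weights_old - weights)
        weights_old <- weights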
Did the answer below help? – 2013-03-18 16:43:21