
I have a problem understanding the gradDescent package in R. Say I have a dataset with one independent variable, and I want to run a simple linear regression on it and estimate the model parameters with the batch gradient descent (GD) algorithm. The gradDescent package and the lm function give different results.

For example, I am using the dataset given here. The first column is the independent variable (X) and the second column is the dependent variable (Y).

I wrote my own R code for the batch gradient descent algorithm. I used a learning rate of 0.01 and 1500 iterations. The estimated model is y = -3.630291 + 1.166362 x. Both initial parameter values were set to 1.
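
The code I wrote is not shown here, but it follows the standard batch GD update; a minimal sketch of that kind of implementation (a paraphrase, not my exact code) is:

batchGD <- function(x, y, alpha = 0.01, maxIter = 1500){ 
    theta <- c(1, 1)                      # c(intercept, slope), both starting at 1 
    X <- cbind(1, x)                      # design matrix with a leading column of ones 
    m <- length(y) 
    for(i in 1:maxIter){ 
     error    <- X %*% theta - y         # residuals under the current parameters 
     gradient <- t(X) %*% error / m      # average gradient of the squared-error cost 
     theta    <- theta - alpha * gradient # ordinary batch update, nothing accumulated 
    } 
    theta                                 # 2 x 1: intercept, then slope 
} 
#batchGD(data$V1, data$V2)   # with the dataset below this lands near (-3.63, 1.17)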

I also wanted to check whether my code works correctly, so I compared it against R's built-in lm function. My parameters are quite close to the ones given by R's linear regression, which in this case is y = -3.896 + 1.193 x.

However, I recently found an R package, gradDescent, and I wanted to see how it works. Using the same learning rate and maximum number of iterations, I got the model y = -1.229567 + 0.9257195 x (these values change every time I run it, because I set seed = NULL).

GD <- function(dataTrain, alpha=0.1, maxIter=10, seed=NULL){ 
    #convert data.frame dataSet in matrix 
    dataTrain <- matrix(unlist(dataTrain), ncol=ncol(dataTrain), byrow=FALSE) 
    #shuffle data train 
    set.seed(seed) 
    dataTrain <- dataTrain[sample(nrow(dataTrain)), ] 
    set.seed(NULL) 
    #initialize theta 
    theta <- getTheta(ncol(dataTrain), seed=seed) 
    #bind 1 column to dataTrain 
    dataTrain <- cbind(1, dataTrain) 
    #parse dataTrain into input and output 
    inputData <- dataTrain[,1:ncol(dataTrain)-1] 
    outputData <- dataTrain[,ncol(dataTrain)] 
    #temporary variables 
    temporaryTheta <- matrix(ncol=length(theta), nrow=1) 
    updateRule <- matrix(0, ncol=length(theta), nrow=1) 
    #constant variables 
    rowLength <- nrow(dataTrain) 
    #loop the gradient descent 
    for(iteration in 1:maxIter){ 
     error <- (inputData %*% t(theta)) - outputData 
     for(column in 1:length(theta)){ 
      term <- error * inputData[,column] 
      #calculate gradient 
      gradient <- sum(term)/rowLength 
      updateRule[1,column] <- updateRule[1,column] + (alpha*gradient) 
      temporaryTheta[1,column] = theta[1,column] - updateRule[1,column] 
     } 
     #update all theta in the current iteration 
     theta <- temporaryTheta 
    } 
    result <- theta 
    return(result) 
} 

Here is the getTheta function:

getTheta <- function(columnLength, minTheta=0, maxTheta=1, seed=NULL){ 
    #create static random 
    set.seed(seed) 
    #random a value 
    thetaList <- runif(columnLength, min=minTheta, max=maxTheta) 
    #clear static random 
    set.seed(seed) 
    #transform into matrix 
    result <- matrix(unlist(thetaList), ncol=columnLength, nrow=1, byrow=FALSE) 
    return(result) 
} 

The package chooses the initial values randomly. It also shuffles the data before running the GD algorithm. I played around with this a bit: I set the initial parameter values to 1 and stopped shuffling the data. But I honestly cannot figure out what is wrong (or what is right), because I cannot get the same results as my own GD code or R's lm function. Could someone please explain?

install.packages("gradDescent") 
library(gradDescent) 

URL_subs <-"https://raw.githubusercontent.com/ahawker/machine-learning-coursera/master/ex1/ex1data1.txt" 
data <- read.table(URL_subs, header=FALSE, sep=",") 

########## gradDescent Function ########## 
GD(data, alpha = 0.01, maxIter = 1500, seed = NULL) 
#   [,1]  [,2] 
#[1,] -1.312882 0.9281769 

########## R built-in function ########## 
model <- lm(data$V2~ ., data = data) 
model 
#Call: 
# lm(formula = data$V2 ~ ., data = data) 
# 
#Coefficients: 
# (Intercept)   V1 
#  -3.896  1.193 

Note: I can provide the code I wrote, but basically I am trying to understand why this package gives parameter estimates so different from those of the lm function.

EDIT: Is it because of this line in the code?

updateRule[1,column] <- updateRule[1,column] + (alpha*gradient) 

When the second loop (for column in 1:length(theta)) finishes, the code does not reset the updateRule matrix, but keeps adding (alpha * gradient) to both columns of the matrix in every iteration. Am I wrong?
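
A toy illustration of what I mean (the numbers are made up just for this example): if the gradient stayed constant at some value g, the un-reset rule would take ever larger steps alpha*g, 2*alpha*g, 3*alpha*g, ..., while plain batch GD would keep taking steps of alpha*g:

alpha <- 0.01; g <- 2 
updateRule <- 0 
accumulated <- sapply(1:5, function(i){ 
    updateRule <<- updateRule + alpha*g   # what the package code does: no reset 
    updateRule 
}) 
accumulated        # 0.02 0.04 0.06 0.08 0.10  -- the step keeps growing 
rep(alpha*g, 5)    # 0.02 0.02 0.02 0.02 0.02  -- the plain batch GD step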

When I reset this updateRule matrix to zero after the parameter update in each iteration, I get the model y = -3.570819 + 1.160388 x, which is very close to my own result and to the values given by lm.


EDIT 2: What is wrong with the gradDescent package is what I suspected in my original post: the updateRule matrix is not reset. I just added one line of code inside the loop and did not change anything else. The getTheta and GD functions are otherwise identical to the package author's.

I give two examples with the correction. The first dataset I use has one independent variable, the second has two independent variables. For both examples I use randomly generated initial values, which is the idea behind the package. For the second example I normalized the data, because the input variables are on different orders of magnitude: the house area is roughly 1000 times larger than the number of bedrooms.

Example 1

URL_subs <-"https://raw.githubusercontent.com/ahawker/machine-learning-coursera/master/ex1/ex1data1.txt" 
data <- read.table(URL_subs, header=FALSE, sep=",") 

getTheta <- function(columnLength, minTheta=0, maxTheta=1, seed=NULL){ 
    #create static random 
    set.seed(seed) 
    #random a value 
    thetaList <- runif(columnLength, min=minTheta, max=maxTheta) 
    #clear static random 
    set.seed(seed) 
    #transform into matrix 
    result <- matrix(unlist(thetaList), ncol=columnLength, nrow=1, byrow=FALSE) 
    return(result) 
} 

GD <- function(dataTrain, alpha=0.1, maxIter=10, seed=NULL){ 
    #convert data.frame dataSet in matrix 
    dataTrain <- matrix(unlist(dataTrain), ncol=ncol(dataTrain), byrow=FALSE) 
    #shuffle data train 
    set.seed(seed) 
    dataTrain <- dataTrain[sample(nrow(dataTrain)), ] 
    set.seed(NULL) 
    #initialize theta 
    theta <- getTheta(ncol(dataTrain), seed=seed) 
    #bind 1 column to dataTrain 
    dataTrain <- cbind(1, dataTrain) 
    #parse dataTrain into input and output 
    inputData <- dataTrain[,1:ncol(dataTrain)-1] 
    outputData <- dataTrain[,ncol(dataTrain)] 
    #temporary variables 
    temporaryTheta <- matrix(ncol=length(theta), nrow=1) 
    updateRule <- matrix(0, ncol=length(theta), nrow=1) 
    #constant variables 
    rowLength <- nrow(dataTrain) 
    #loop the gradient descent 
    for(iteration in 1:maxIter){ 
    error <- (inputData %*% t(theta)) - outputData 
    for(column in 1:length(theta)){ 
     term <- error * inputData[,column] 
     #calculate gradient 
     gradient <- sum(term)/rowLength 
     updateRule[1,column] <- updateRule[1,column] + (alpha*gradient) 
     temporaryTheta[1,column] = theta[1,column] - updateRule[1,column] 
    } 
    updateRule <- matrix(0, ncol=length(theta), nrow=1) 
    #update all theta in the current iteration 
    theta <- temporaryTheta 
    } 
    result <- theta 
    return(result) 
} 

GD(data, alpha = 0.01, maxIter = 1500, seed = NULL) 
#   [,1] [,2] 
#[1,] -3.602297 1.16355 

########## R built-in lm function ########## 
model <- lm(data$V2~ ., data = data) 
model 
#Call: 
# lm(formula = data$V2 ~ ., data = data) 
# 
#Coefficients: 
# (Intercept)   V1 
#  -3.896  1.193 

Example 2

data <- read.csv("https://raw.githubusercontent.com/ethen8181/machine-learning/master/linear_regression/housing.txt", 
       header = TRUE, 
       sep = ",") 

getTheta <- function(columnLength, minTheta=0, maxTheta=1, seed=NULL){ 
    #create static random 
    set.seed(seed) 
    #random a value 
    thetaList <- runif(columnLength, min=minTheta, max=maxTheta) 
    #clear static random 
    set.seed(seed) 
    #transform into matrix 
    result <- matrix(unlist(thetaList), ncol=columnLength, nrow=1, byrow=FALSE) 
    return(result) 
} 

GD <- function(dataTrain, alpha=0.1, maxIter=10, seed=NULL){ 
    #convert data.frame dataSet in matrix 
    dataTrain <- matrix(unlist(dataTrain), ncol=ncol(dataTrain), byrow=FALSE) 
    #shuffle data train 
    set.seed(seed) 
    dataTrain <- dataTrain[sample(nrow(dataTrain)), ] 
    set.seed(NULL) 
    #initialize theta 
    theta <- getTheta(ncol(dataTrain), seed=seed) 
    #bind 1 column to dataTrain 
    dataTrain <- cbind(1, dataTrain) 
    #parse dataTrain into input and output 
    inputData <- dataTrain[,1:ncol(dataTrain)-1] 
    outputData <- dataTrain[,ncol(dataTrain)] 
    #temporary variables 
    temporaryTheta <- matrix(ncol=length(theta), nrow=1) 
    updateRule <- matrix(0, ncol=length(theta), nrow=1) 
    #constant variables 
    rowLength <- nrow(dataTrain) 
    #loop the gradient descent 
    for(iteration in 1:maxIter){ 
    error <- (inputData %*% t(theta)) - outputData 
    for(column in 1:length(theta)){ 
     term <- error * inputData[,column] 
     #calculate gradient 
     gradient <- sum(term)/rowLength 
     updateRule[1,column] <- updateRule[1,column] + (alpha*gradient) 
     temporaryTheta[1,column] = theta[1,column] - updateRule[1,column] 
    } 
    updateRule <- matrix(0, ncol=length(theta), nrow=1) 
    #update all theta in the current iteration 
    theta <- temporaryTheta 
    } 
    result <- theta 
    return(result) 
} 

GD(data, alpha = 0.05, maxIter = 500, seed = NULL) 
#   [,1] [,2]  [,3] 
#[1,] 340412.7 110630 -6648.375 

########## R built-in lm function ########## 
housing <- read.csv("https://raw.githubusercontent.com/ethen8181/machine-learning/master/linear_regression/housing.txt", 
       header = TRUE, 
       sep = ",") 

normalized <- apply(housing[ , -3 ], 2, scale) 
normalized_data <- data.frame(cbind(normalized, price = housing$price)) 
model <- lm(price ~ ., data = normalized_data) 
model 

#Call: 
# lm(formula = price ~ ., data = normalized_data) 
# 
#Coefficients: 
# (Intercept)   area  bedrooms 
#  340413  110631  -6649 

Answer


I think you are right. I think that updateRule line acts like a momentum term: the direction of change in the current iteration is (partially) carried over into the next iteration. However, you should then not just keep adding the new gradient on top; the updateRule should decay, so that any wrong direction taken during the iterations gets washed out.
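
Written out as an update rule (gamma here is just my notation for the decay factor, it is not something defined in the package):

updateRule_t = gamma * updateRule_(t-1) + alpha * gradient_t 
theta_t      = theta_(t-1) - updateRule_t 

with 0 <= gamma < 1. gamma = 0 gives plain gradient descent, while your current code effectively uses gamma = 1, so the step never decays and just keeps growing.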

Below, I changed only one line of your function: when updating updateRule, I multiply it by 0.2. Any value between zero and one will work (but it must be strictly smaller than one).

Also, I increased the number of iterations. On my system, I get:

##    [,1]  [,2] 
## [1,] -3.895781 1.193034 

which is very similar to the lm result.

GD <- function(dataTrain, alpha=0.1, maxIter=10, seed=NULL){ 
    #convert data.frame dataSet in matrix 
    dataTrain <- matrix(unlist(dataTrain), ncol=ncol(dataTrain), byrow=FALSE) 
    #shuffle data train 
    set.seed(seed) 
    dataTrain <- dataTrain[sample(nrow(dataTrain)), ] 
    set.seed(NULL) 
    #initialize theta 
    theta <- getTheta(ncol(dataTrain), seed=seed) 
    #bind 1 column to dataTrain 
    dataTrain <- cbind(1, dataTrain) 
    #parse dataTrain into input and output 
    inputData <- dataTrain[,1:ncol(dataTrain)-1] 
    outputData <- dataTrain[,ncol(dataTrain)] 
    #temporary variables 
    temporaryTheta <- matrix(ncol=length(theta), nrow=1) 
    updateRule <- matrix(0, ncol=length(theta), nrow=1) 
    #constant variables 
    rowLength <- nrow(dataTrain) 
    #loop the gradient descent 
    for(iteration in 1:maxIter){ 
    error <- (inputData %*% t(theta)) - outputData 
    for(column in 1:length(theta)){ 
     term <- error * inputData[,column] 
     #calculate gradient 
     gradient <- sum(term)/rowLength 
     #updateRule[1,column] <- updateRule[1,column] + (alpha*gradient) 
     updateRule[1,column] <- 0.2*updateRule[1,column] + (alpha*gradient) 
     temporaryTheta[1,column] = theta[1,column] - updateRule[1,column] 
    } 
    #update all theta in the current iteration 
    theta <- temporaryTheta 
    } 
    result <- theta 
    return(result) 
} 

getTheta <- function(columnLength, minTheta=0, maxTheta=1, seed=NULL){ 
    #create static random 
    set.seed(seed) 
    #random a value 
    thetaList <- runif(columnLength, min=minTheta, max=maxTheta) 
    #clear static random 
    set.seed(seed) 
    #transform into matrix 
    result <- matrix(unlist(thetaList), ncol=columnLength, nrow=1, byrow=FALSE) 
    return(result) 
} 

#install.packages("gradDescent") 
library(gradDescent) 

URL_subs <-"https://raw.githubusercontent.com/ahawker/machine-learning-coursera/master/ex1/ex1data1.txt" 
data <- read.table(URL_subs, header=FALSE, sep=",") 

########## gradDescent Function ########## 
GD(data, alpha = 0.01, maxIter = 15000, seed = 1) 


########## R built-in function ########## 
model <- lm(data$V2~ ., data = data) 
model 

Yes, thank you! It is not my package or my function. I also made a small change, resetting the update rule, and now I get results similar to lm. I will edit my post and show the results. –
