2017-10-13

I am currently working through the slides at the following link. I am on slide 121/128 and would like to know how to replicate the AUC plot. The author does not explain how to do this (the same applies to slide 124). Secondly, slide 125 produces the following code for plotting the AUC of the xgboost model in R:

bestRound = which.max(as.matrix(cv.res)[,3]-as.matrix(cv.res)[,4]) 
bestRound 

I get the following error:

Error in as.matrix(cv.res)[, 2] : subscript out of bounds

The data used in the code below can be downloaded from here, and I have reproduced the code below for your reference.

Question: how do I generate the AUC plot as the author does, and why is the subscript out of bounds?

----- Code -----

# Kaggle Winning Solutions 

train <- read.csv('train.csv', header = TRUE) 
test <- read.csv('test.csv', header = TRUE) 
y <- train[, 1] 
train <- as.matrix(train[, -1]) 
test <- as.matrix(test) 

train[1, ] 

# We want to determine which of the two users is more influential

new.train <- cbind(train[, 12:22], train[, 1:11]) 
train = rbind(train, new.train) 
y <- c(y, 1 - y) 

x <- rbind(train, test) 

# calcRatio: smoothed ratio of two columns (the function wrapper is implied by the calls below; lambda = 1 assumed as the smoothing default) 
calcRatio <- function(dat, i, j, lambda = 1) (dat[, i] + lambda) / (dat[, j] + lambda) 

A.follow.ratio = calcRatio(x,1,2) 
A.mention.ratio = calcRatio(x,4,6) 
A.retweet.ratio = calcRatio(x,5,7) 
A.follow.post = calcRatio(x,1,8) 
A.mention.post = calcRatio(x,4,8) 
A.retweet.post = calcRatio(x,5,8) 
B.follow.ratio = calcRatio(x,12,13) 
B.mention.ratio = calcRatio(x,15,17) 
B.retweet.ratio = calcRatio(x,16,18) 
B.follow.post = calcRatio(x,12,19) 
B.mention.post = calcRatio(x,15,19) 
B.retweet.post = calcRatio(x,16,19) 

x = cbind(x[,1:11], 
      A.follow.ratio,A.mention.ratio,A.retweet.ratio, 
      A.follow.post,A.mention.post,A.retweet.post, 
      x[,12:22], 
      B.follow.ratio,B.mention.ratio,B.retweet.ratio, 
      B.follow.post,B.mention.post,B.retweet.post) 

AB.diff = x[,1:17]-x[,18:34] 
x = cbind(x,AB.diff) 
train = x[1:nrow(train),] 
test = x[-(1:nrow(train)),] 

set.seed(1024) 
cv.res <- xgb.cv(data = train, nfold = 3, label = y, nrounds = 100, verbose = FALSE, 
       objective = 'binary:logistic', eval_metric = 'auc') 

Plot the AUC graph here.

set.seed(1024) 
cv.res = xgb.cv(data = train, nfold = 3, label = y, nrounds = 3000, 
       objective='binary:logistic', eval_metric = 'auc', 
       eta = 0.005, gamma = 1,lambda = 3, nthread = 8, 
       max_depth = 4, min_child_weight = 1, verbose = F, 
       subsample = 0.8,colsample_bytree = 0.8) 

This is where my code breaks:

# bestRound: subscript out of bounds 

bestRound <- which.max(as.matrix(cv.res)[,3]-as.matrix(cv.res)[,4]) 
bestRound 
cv.res 

cv.res[bestRound,] 

set.seed(1024) 
bst <- xgboost(data = train, label = y, nrounds = 3000, 
          objective='binary:logistic', eval_metric = 'auc', 
          eta = 0.005, gamma = 1,lambda = 3, nthread = 8, 
          max_depth = 4, min_child_weight = 1, 
          subsample = 0.8,colsample_bytree = 0.8) 
preds <- predict(bst,test,ntreelimit = bestRound) 

result <- data.frame(Id = 1:nrow(test), Choice = preds) 
write.csv(result,'submission.csv',quote=FALSE,row.names=FALSE) 

Answer


Many parts of the code don't make much sense to me, but here is a small example of building a model on the data provided:

Data:

train <- read.csv('train.csv', header = TRUE) 
y <- train[, 1] 
train <- as.matrix(train[, -1]) 

Model:

library(xgboost) 
cv.res <- xgb.cv(data = train, nfold = 3, label = y, nrounds = 100, verbose = FALSE, 
       objective = 'binary:logistic', eval_metric = 'auc', prediction = T) 

To get the cross-validated predictions, you must specify prediction = T when calling xgb.cv.

To get the best iteration:

it = which.max(cv.res$evaluation_log$test_auc_mean) 
best.iter = cv.res$evaluation_log$iter[it] 

To plot the ROC curve on the cross-validated results:

library(pROC) 
plot(pROC::roc(response = y, 
       predictor = cv.res$pred, 
       levels=c(0, 1)), 
    lwd=1.5) 

(ROC curve produced by the code above)

To get a confusion matrix (assuming a probability threshold of 0.5):

library(caret) 
confusionMatrix(ifelse(cv.res$pred <= 0.5, 0, 1), y) 
#output 
           Reference 
Prediction    0    1 
         0 2020  638 
         1  678 2164 

       Accuracy : 0.7607   
       95% CI : (0.7492, 0.772) 
    No Information Rate : 0.5095   
    P-Value [Acc > NIR] : <2e-16   

        Kappa : 0.5212   
Mcnemar's Test P-Value : 0.2823   

      Sensitivity : 0.7487   
      Specificity : 0.7723   
     Pos Pred Value : 0.7600   
     Neg Pred Value : 0.7614   
      Prevalence : 0.4905   
     Detection Rate : 0.3673   
    Detection Prevalence : 0.4833   
     Balanced Accuracy : 0.7605   

     'Positive' Class : 0 

That being said, one goal should be to tune the hyper-parameters with cross-validation, such as eta, gamma, lambda, subsample, colsample_bytree, colsample_bylevel, etc.

The easiest way is to build a grid of all hyper-parameter combinations with expand.grid and, as part of a custom function, run xgb.cv over every combination in the grid (e.g. with lapply). If you need more details, please comment.
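
For example, here is a minimal sketch of such a grid search, assuming the train matrix and y label constructed in the question are in memory; the grid values, nrounds, and the use of the best test_auc_mean as the selection criterion are illustrative only:

library(xgboost) 

# candidate hyper-parameter combinations (example values) 
param_grid <- expand.grid(eta = c(0.01, 0.05), 
       max_depth = c(4, 6), 
       subsample = c(0.8, 1), 
       colsample_bytree = c(0.8, 1)) 

# run 3-fold CV for one row of the grid and record the best cross-validated AUC 
cv_one <- function(i) { 
  p <- param_grid[i, ] 
  set.seed(1024) 
  cv <- xgb.cv(data = train, label = y, nfold = 3, nrounds = 200, 
       objective = 'binary:logistic', eval_metric = 'auc', 
       eta = p$eta, max_depth = p$max_depth, 
       subsample = p$subsample, colsample_bytree = p$colsample_bytree, 
       verbose = FALSE) 
  data.frame(p, best_test_auc = max(cv$evaluation_log$test_auc_mean)) 
} 

results <- do.call(rbind, lapply(seq_len(nrow(param_grid)), cv_one)) 
results[which.max(results$best_test_auc), ]  # best combination found on the grid 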


Thank you, the AUC plot works. "To get the cross-validated predictions, you must specify prediction = T when calling xgb.cv" is where I went wrong. – user113156


Another point I would like to replicate is on slide 121/128, where the author says "we can see the trend of AUC on the training and test sets." How can I reproduce this on the test set, and what is the purpose of reproducing it there? – user113156


@user113156 There is much more to training xgboost models than this. People have their own preferred ways of doing things. Usually hyper-parameter tuning, data transformations, up-/down-sampling, variable selection, probability-threshold optimization, and cost-function selection are all performed during cross-validation. And usually not just one run of CV, but, for instance, 5 repeats of 3- or 4-fold CV. Once you have picked the best combination of all of these, you train on the training data and validate on the test set. All of this is done to avoid overfitting. – missuse
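
As a rough sketch of the repeated cross-validation described in the last comment, assuming the train matrix and y label from the question (the number of repeats, nrounds, and the hyper-parameter values are arbitrary examples):

library(xgboost) 

# 5 repeats of 3-fold CV with one fixed set of hyper-parameters 
best_auc_per_repeat <- sapply(1:5, function(r) { 
  set.seed(r)  # a different fold split for each repeat 
  cv <- xgb.cv(data = train, label = y, nfold = 3, nrounds = 200, 
       objective = 'binary:logistic', eval_metric = 'auc', 
       eta = 0.05, max_depth = 4, verbose = FALSE) 
  max(cv$evaluation_log$test_auc_mean)  # best held-out AUC in this repeat 
}) 

mean(best_auc_per_repeat)  # average cross-validated AUC over the repeats 
sd(best_auc_per_repeat)    # spread across repeats indicates how stable the estimate is 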