2017-07-26

Does the label play a role in h2o.randomForest?

I am using h2o.randomForest in R to build a classifier on two groups, group "A" and group "B". As an example, I randomly generated a sample dataset as shown below and converted it into an H2OFrame:

a <- sample(0:1,10000,replace=T) 
b <- sample(0:1,10000,replace=T) 
c <- sample(1:10,10000,replace=T) 
d <- sample(0:1,10000,replace=T) 
e <- sample(0:1,10000,replace=T) 
f <- sample(0:1,10000,replace=T) 

Basically, these columns are converted to factors; all of them have 2 levels except c, which has 10 levels. The first 5000 rows are assigned the label "A" and the rest the label "B". In addition, I have another column named nlabel in which the first 5000 rows are "B" and the rest are "A".
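For completeness, the construction described above can be sketched like this (a sketch; the data-frame name `test` is assumed to match the code later in the question):

```r
# Six random predictor columns; all binary except c, which has 10 levels.
a <- sample(0:1, 10000, replace = TRUE)
b <- sample(0:1, 10000, replace = TRUE)
c <- sample(1:10, 10000, replace = TRUE)
d <- sample(0:1, 10000, replace = TRUE)
e <- sample(0:1, 10000, replace = TRUE)
f <- sample(0:1, 10000, replace = TRUE)

test <- data.frame(a, b, c, d, e, f)
# label: "A" for the first 5000 rows, "B" for the rest; nlabel is the reverse.
test$label  <- factor(rep(c("A", "B"), each = 5000))
test$nlabel <- factor(rep(c("B", "A"), each = 5000))
```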

Here are the first 10 and last 10 rows of my dataset:

      a b  c d e f label nlabel 
1     0 0  5 0 1 0     A      B 
2     0 1  5 1 1 1     A      B 
3     0 0  6 0 0 1     A      B 
4     0 0  8 0 0 1     A      B 
5     1 1  1 1 1 1     A      B 
6     1 1  6 1 0 1     A      B 
7     1 0  3 1 1 1     A      B 
8     1 1  9 1 0 1     A      B 
9     1 0  8 1 0 1     A      B 
10    0 0  1 0 1 1     A      B 
... 
9991  1 1  3 0 0 1     B      A 
9992  0 0  7 1 0 0     B      A 
9993  1 0  9 0 1 1     B      A 
9994  0 1  3 0 0 0     B      A 
9995  1 1  8 0 1 0     B      A 
9996  0 1  8 0 1 0     B      A 
9997  1 1  9 0 1 0     B      A 
9998  0 0  5 1 0 1     B      A 
9999  0 1  9 1 1 0     B      A 
10000 0 1 10 1 0 1     B      A 

Since I generated the dataset randomly, I do not expect to get a good classifier (or else I would be the luckiest person in the world). I expect something that looks more like random guessing. Here is a result I got using the "randomForest" package in R:

> rf <- randomForest(label ~ a + b + c + e + f, 
+                    data = test, 
+                    ntree = 100) 
> rf 

Call: 
 randomForest(formula = label ~ a + b + c + e + f, data = test, ntree = 100) 
               Type of random forest: classification 
                     Number of trees: 100 
No. of variables tried at each split: 2 

        OOB estimate of  error rate: 50.17% 
Confusion matrix: 
     A    B class.error 
A 2507 2493      0.4986 
B 2524 2476      0.5048 

However, using the same dataset with h2o.randomForest, I get a different result. Here is the code I used and the result I got:

> TEST <- as.h2o(test) 
> rfh2o <- h2o.randomForest(y = "label", 
+                           x = c("a", "b", "c", "d", "e", "f"), 
+                           training_frame = TEST, 
+                           ntrees = 100) 
> rfh2o 
Model Details: 
============== 

H2OBinomialModel: drf 
Model ID: DRF_model_R_1501015614001_1029 
Model Summary: 
  number_of_trees number_of_internal_trees model_size_in_bytes min_depth max_depth mean_depth min_leaves
1             100                      100              366582         7        14   11.33000          1
  max_leaves mean_leaves
1        319   286.52000


H2OBinomialMetrics: drf 
** Reported on training data. ** 
** Metrics reported on Out-Of-Bag training samples ** 

MSE: 0.2574374 
RMSE: 0.5073829 
LogLoss: 0.7086906 
Mean Per-Class Error: 0.5 
AUC: 0.4943865 
Gini: -0.01122696 

Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold: 
       A     B    Error        Rate 
A      0  5000 1.000000  =5000/5000 
B      0  5000 0.000000     =0/5000 
Totals 0 10000 0.500000 =5000/10000 

Maximum Metrics: Maximum metrics at their respective thresholds 
                        metric threshold    value idx 
1                       max f1  0.231771 0.666667 399 
2                       max f2  0.231771 0.833333 399 
3                 max f0point5  0.231771 0.555556 399 
4                 max accuracy  0.459704 0.506800 251 
5                max precision  0.723654 0.593750  10 
6                   max recall  0.231771 1.000000 399 
7              max specificity  0.785389 0.999800   0 
8             max absolute_mcc  0.288276 0.051057 389 
9   max min_per_class_accuracy  0.500860 0.488000 200 
10 max mean_per_class_accuracy  0.459704 0.506800 251 

Based on the result above, the confusion matrix is different from what I got with the "randomForest" package.

Also, if I use "nlabel" instead of "label" with h2o.randomForest, I still get a high error rate when predicting A. But "A" in the current model corresponds to what was "B" in the previous model. Here is the code and the result I got:

> rfh2o_n <- h2o.randomForest(y = "nlabel", 
+       x = c("a","b", 
+         "c","d", 
+         "e","f"), 
+       training_frame = TEST, 
+       ntrees = 100) 

> rfh2o_n 
Model Details: 
============== 

H2OBinomialModel: drf 
Model ID: DRF_model_R_1501015614001_1113 
Model Summary: 
  number_of_trees number_of_internal_trees model_size_in_bytes min_depth max_depth mean_depth min_leaves
1             100                      100              365232        11        14   11.18000          1
  max_leaves mean_leaves
1        319   285.42000


H2OBinomialMetrics: drf 
** Reported on training data. ** 
** Metrics reported on Out-Of-Bag training samples ** 

MSE: 0.2575674 
RMSE: 0.507511 
LogLoss: 0.7089465 
Mean Per-Class Error: 0.5 
AUC: 0.4923496 
Gini: -0.01530088 

Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold: 
       A     B    Error        Rate 
A      0  5000 1.000000  =5000/5000 
B      0  5000 0.000000     =0/5000 
Totals 0 10000 0.500000 =5000/10000 

Maximum Metrics: Maximum metrics at their respective thresholds 
                        metric threshold    value idx 
1                       max f1  0.214495 0.666667 399 
2                       max f2  0.214495 0.833333 399 
3                 max f0point5  0.214495 0.555556 399 
4                 max accuracy  0.617230 0.506600  74 
5                max precision  0.621806 0.541833  70 
6                   max recall  0.214495 1.000000 399 
7              max specificity  0.749866 0.999800   0 
8             max absolute_mcc  0.733630 0.042465   6 
9   max min_per_class_accuracy  0.499186 0.486400 201 
10 max mean_per_class_accuracy  0.617230 0.506600  74 

Results like these make me wonder whether the label plays any role in h2o.randomForest. I don't use h2o often, and the results above confuse me. Is this just due to chance, did I make some silly mistake, or is something else wrong?

Answer


I think this is because, since the data is completely random, the max-F1 statistic that H2O uses by default to choose the threshold for the final predictions does not produce a useful value.
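To see why max-F1 misbehaves here, consider this sketch (an illustration with assumed names, not H2O's actual code): on pure noise, the threshold that maximizes F1 is driven toward an extreme, so nearly every row lands in one class.

```r
# Sketch of how an F1-optimal threshold can degenerate on pure noise.
set.seed(1)
actual <- rep(c(0, 1), each = 5000)   # 0 = class "A", 1 = class "B"
scores <- runif(10000)                # model scores with no real signal

f1_at <- function(t) {
  pred <- as.integer(scores >= t)
  tp <- sum(pred == 1 & actual == 1)
  fp <- sum(pred == 1 & actual == 0)
  fn <- sum(pred == 0 & actual == 1)
  if (2 * tp + fp + fn == 0) return(0)
  2 * tp / (2 * tp + fp + fn)
}

thresholds <- seq(0.01, 0.99, by = 0.01)
best <- thresholds[which.max(sapply(thresholds, f1_at))]
best           # ends up near 0: almost every row is pushed into one class
f1_at(best)    # close to 2/3, matching the max f1 of ~0.666667 above
```

Predicting everything as the positive class gives F1 = 2·5000/(2·5000 + 5000) ≈ 0.667, while a balanced 0.5 threshold gives only ≈ 0.5, so the "optimal" threshold collapses to one extreme.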

If you force the threshold to 0.5, as in the code below, you get the expected behavior. Also, if you open H2O Flow and look at the trained model's ROC curve, it is terrible and almost a straight line (as you would expect).

library(data.table) 
library(h2o) 

# Generate the same kind of random data as in the question.
a <- sample(0:1, 10000, replace = TRUE) 
b <- sample(0:1, 10000, replace = TRUE) 
c <- sample(1:10, 10000, replace = TRUE) 
d <- sample(0:1, 10000, replace = TRUE) 
e <- sample(0:1, 10000, replace = TRUE) 
f <- sample(0:1, 10000, replace = TRUE) 
df <- data.frame(a, b, c, d, e, f) 
dt <- as.data.table(df) 
dt[1:5000, label := "A"] 
dt[5001:10000, label := "B"] 
dt$label <- as.factor(dt$label) 
dt 

h2o.init() 
h2o_dt <- as.h2o(dt) 
model <- h2o.randomForest(y = "label", 
                          x = c("a", "b", "c", "d", "e", "f"), 
                          training_frame = h2o_dt, 
                          ntrees = 10, 
                          model_id = "model") 
model 
h2o_preds <- h2o.predict(model, h2o_dt) 
preds <- as.data.table(h2o_preds) 
# Apply a fixed 0.5 threshold to the class-"A" probability instead of
# H2O's default max-F1 threshold.
preds[, prediction := A > 0.5] 
table(preds$prediction) 

And the final output is:

FALSE  TRUE 
 5085  4915 

You can re-run this a bunch of times and watch the values bounce around randomly, but each group stays at around 5000.
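That spread around 5000 is just binomial noise; a quick sanity check (a sketch, not part of the original answer):

```r
# With no signal, each prediction is effectively a fair coin flip, so the
# count for one class fluctuates around 5000 with sd = sqrt(10000 * 0.5 * 0.5) = 50.
set.seed(42)
counts <- rbinom(20, size = 10000, prob = 0.5)
range(counts)  # typically within about 5000 +/- 150 (three standard deviations)
```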
