Here, for example, with the agaricus sample data, the scikit-learn wrapper gives different results:
import xgboost as xgb
from sklearn.datasets import load_svmlight_files

# load the agaricus sample data shipped with xgboost (libsvm format)
X_train, y_train, X_test, y_test = load_svmlight_files(('agaricus.txt.train', 'agaricus.txt.test'))

# scikit-learn wrapper
clf = xgb.XGBClassifier()
param = clf.get_xgb_params()
clf.fit(X_train, y_train)
preds_sk = clf.predict_proba(X_test)

# native API, using the same parameters
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test)
bst = xgb.train(param, dtrain)
preds = bst.predict(dtest)

print preds_sk
print preds
And the results are:
[[ 9.98860419e-01 1.13956432e-03]
[ 2.97790766e-03 9.97022092e-01]
[ 9.98816252e-01 1.18372787e-03]
...,
[ 1.95205212e-04 9.99804795e-01]
[ 9.98845220e-01 1.15479471e-03]
[ 5.69522381e-04 9.99430478e-01]]
[ 0.21558253 0.7351886 0.21558253 ..., 0.81527805 0.18158565
0.81527805]
Why are the results different? It seems all the default parameter values are the same. I am not referring here to the fact that predict_proba returns [prob, 1 - prob].
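To be explicit about the comparison I mean, here is a minimal sketch (assuming numpy and the arrays from the run above): even after taking only the positive-class column of predict_proba, the two outputs still do not match.

import numpy as np
# compare the positive-class column of the sklearn wrapper output
# with the native booster output; for the run above this prints False
print np.allclose(preds_sk[:, 1], preds)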
xgboost v0.6, scikit-learn v0.18.1, Python 2.7.12