K-fold cross-validation with Naive Bayes

I want to run K-fold cross-validation for my Naive Bayes classifier using sklearn.
# Imports assumed by the snippet (csv_io is a helper module from the Kaggle tutorial)
import numpy as np
import csv_io
from sklearn import cross_validation
from sklearn.ensemble import RandomForestClassifier

train = csv_io.read_data("../Data/train.csv")
target = np.array([x[0] for x in train])
train = np.array([x[1:] for x in train])

#In this case we'll use a random forest, but this could be any classifier
cfr = RandomForestClassifier(n_estimators=100)

#Simple K-Fold cross validation. 10 folds.
#(old sklearn API; in current releases KFold lives in sklearn.model_selection)
cv = cross_validation.KFold(len(train), k=10, indices=False)

#iterate through the training and test cross validation segments and
#run the classifier on each one, aggregating the results into a list
results = []
for traincv, testcv in cv:
    probas = cfr.fit(train[traincv], target[traincv]).predict_proba(train[testcv])
    results.append(myEvaluationFunc(target[testcv], [x[1] for x in probas]))

#print out the mean of the cross-validated results
print "Results: " + str(np.array(results).mean())
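To see what the chained call `cfr.fit(...).predict_proba(...)` in the loop above is doing, here is a minimal sketch with sklearn's `GaussianNB` swapped in for the random forest (the toy data is my own, purely for illustration): `fit()` trains the estimator on the training fold and returns the estimator itself, so `predict_proba()` can be chained directly to get per-class probabilities for the test fold.

```python
# Minimal sketch of the fit(...).predict_proba(...) chain, with GaussianNB
# in place of the random forest (toy data, for illustration only).
import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[0.0], [0.1], [0.9], [1.0]])  # four 1-feature samples
y = np.array([0, 0, 1, 1])                  # their class labels

clf = GaussianNB()
fitted = clf.fit(X, y)   # fit() trains and returns the estimator itself,
assert fitted is clf     # which is why the call can be chained

# predict_proba gives one row per test sample: [P(class 0), P(class 1)]
probas = clf.predict_proba(np.array([[0.05], [0.95]]))
print(probas.shape)      # (2, 2); each row sums to 1
```

Any classifier exposing `fit` and `predict_proba` (which includes all sklearn Naive Bayes variants) can be dropped into the loop in place of `cfr` without other changes.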
I found this code on this site: https://www.kaggle.com/wiki/GettingStartedWithPythonForDataScience/history/969. In that example the classifier is a RandomForestClassifier, but I want to use my own Naive Bayes classifier instead. I'm just not sure what the fit method does on this line: probas = cfr.fit(train[traincv], target[traincv]).predict_proba(train[testcv])
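Putting it together, the loop I am aiming for might look like the sketch below, written against the current sklearn API (where KFold lives in `sklearn.model_selection` and takes `n_splits`), with `GaussianNB` substituted for the random forest. The synthetic data and the fold accuracy score stand in for the Kaggle CSV and `myEvaluationFunc`, which I don't have here.

```python
# Sketch: 10-fold cross-validation with a Naive Bayes classifier,
# using synthetic data in place of the Kaggle CSV (my own stand-in).
import numpy as np
from sklearn.model_selection import KFold
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(0)
train = rng.rand(100, 5)                   # stand-in feature matrix
target = (train[:, 0] > 0.5).astype(int)   # stand-in labels

clf = GaussianNB()
kf = KFold(n_splits=10)                    # modern API: n_splits, not k

results = []
for traincv, testcv in kf.split(train):    # split() yields index arrays
    # fit on the training fold, then chain predict_proba on the test fold
    probas = clf.fit(train[traincv], target[traincv]).predict_proba(train[testcv])
    # score each fold; plain accuracy used here as a placeholder metric
    preds = (probas[:, 1] > 0.5).astype(int)
    results.append(np.mean(preds == target[testcv]))

print("Results: " + str(np.array(results).mean()))
```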