Logistic regression with sklearn - train and apply a model

I am new to machine learning and this is my first attempt at Sklearn. I have two dataframes: one with the data for training a logistic regression model (with 10-fold cross-validation) and another with the data on which to predict the classes ('0', '1') using that model. Here is my code so far, pieced together from the Sklearn documentation and bits of tutorials I found on the web:
import pandas as pd
import numpy as np
import sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.preprocessing import normalize
from sklearn.preprocessing import scale
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn import metrics
# Import dataframe with training data
df = pd.read_csv('summary_44.csv')
cols = df.columns.drop('num_class') # Data to use (num_class is the column with the classes)
# Import dataframe with data to predict
df_pred = pd.read_csv('new_predictions.csv')
# Scores
df_data = df.iloc[:, :-1].values
# Target
df_target = df.iloc[:, -1].values
# Values to predict
df_test = df_pred.iloc[:, :-1].values
# Scores' names
df_data_names = cols.values
# Scaling
X, X_pred, y = scale(df_data), scale(df_test), df_target
# Define number of folds
kf = KFold(n_splits=10)
kf.get_n_splits(X) # returns the number of splitting iterations in the cross-validator
# Logistic regression normalizing variables
LogReg = LogisticRegression()
# 10-fold cross-validation
scores = [LogReg.fit(X[train], y[train]).score(X[test], y[test]) for train, test in kf.split(X)]
print(scores)
# Predict new
novel = LogReg.predict(X_pred)
Is this the right way to implement logistic regression? I know that after cross-validation I should call the fit() method to train the model and then use it for predictions. However, since I call fit() inside a list comprehension, I don't really know whether my model has actually been "fitted" and can be used to make predictions.
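For context, here is a minimal sketch of the pattern I believe is intended, reusing the X, X_pred and y defined above: cross_val_score is used only to estimate performance, and the estimator is then refitted on the full training set before predicting, so the final model does not depend on whichever fold happened to be fitted last.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

log_reg = LogisticRegression()
kf = KFold(n_splits=10)
# Per-fold accuracies, used only to evaluate the model
cv_scores = cross_val_score(log_reg, X, y, cv=kf)
print(cv_scores.mean())
# Refit once on all the training data, then predict the new observations
log_reg.fit(X, y)
novel = log_reg.predict(X_pred)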
Post some data. Print out df and df_data – skrubber