
I am trying to perform recursive feature elimination using scikit-learn and a random forest classifier, with the OOB ROC as the method of scoring each subset created during the recursion.

However, when I try to use the RFECV method, I get an error saying AttributeError: 'RandomForestClassifier' object has no attribute 'coef_'

Random forests don't have coefficients per se, but they do have feature rankings by Gini score. So I'm wondering how to get around this problem.

Please note that I want to use a method that will explicitly tell me which features from my pandas DataFrame were selected in the optimal grouping, as I am using recursive feature selection to minimize the amount of data I will input to the final classifier.

Here is some example code:

from sklearn import datasets 
import pandas as pd 
from pandas import Series 
from sklearn.ensemble import RandomForestClassifier 
from sklearn.feature_selection import RFECV 

iris = datasets.load_iris() 
x=pd.DataFrame(iris.data, columns=['var1','var2','var3', 'var4']) 
y=pd.Series(iris.target, name='target') 
rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=5, n_jobs=-1) 
rfecv = RFECV(estimator=rf, step=1, cv=10, scoring='ROC', verbose=2) 
selector=rfecv.fit(x, y) 

Traceback (most recent call last): 
    File "<stdin>", line 1, in <module> 
    File "/Users/bbalin/anaconda/lib/python2.7/site-packages/sklearn/feature_selection/rfe.py", line 336, in fit 
    ranking_ = rfe.fit(X_train, y_train).ranking_ 
    File "/Users/bbalin/anaconda/lib/python2.7/site-packages/sklearn/feature_selection/rfe.py", line 148, in fit 
    if estimator.coef_.ndim > 1: 
AttributeError: 'RandomForestClassifier' object has no attribute 'coef_' 

An alternative would be to use the 'feature_importances_' attribute after calling 'predict' or 'predict_proba'; this returns an array of percentages in the order the features were passed. See the [online example](http://scikit-learn.org/stable/auto_examples/ensemble/plot_gradient_boosting_regression.html) – EdChum
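For illustration, a minimal sketch of what that comment describes, on the question's own iris data (note that feature_importances_ is populated by fit, so it is available as soon as the forest is trained):

from sklearn import datasets 
from sklearn.ensemble import RandomForestClassifier 

# fit a forest on the iris data used in the question 
iris = datasets.load_iris() 
rf = RandomForestClassifier(n_estimators=100).fit(iris.data, iris.target) 

# feature_importances_ holds the Gini-based rankings mentioned above; 
# the entries sum to 1 and follow the column order of the input 
print(rf.feature_importances_) 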


Seen that; however, I'd like to know whether there's something that will let me do 10-fold validation and determine the optimal subset of features. – Bryan


I had to do something similar, but I did it manually by sorting the feature importances and then trimming 1, 3 or 5 features at a time. I didn't use your approach, I have to say, so I don't know whether it can be done. – EdChum

Answers


Here's what I ginned up. It's a pretty simple solution, and relies on a custom accuracy metric (called weightedAccuracy) since I'm classifying a highly imbalanced data set. But it should be easy to make it more extensible, if desired.

from sklearn import datasets 
import pandas 
from sklearn.ensemble import RandomForestClassifier 
from sklearn import cross_validation 
from sklearn.metrics import confusion_matrix 


def get_enhanced_confusion_matrix(actuals, predictions, labels): 
    """"enhances confusion_matrix by adding sensivity and specificity metrics""" 
    cm = confusion_matrix(actuals, predictions, labels = labels) 
    sensitivity = float(cm[1][1])/float(cm[1][0]+cm[1][1]) 
    specificity = float(cm[0][0])/float(cm[0][0]+cm[0][1]) 
    weightedAccuracy = (sensitivity * 0.9) + (specificity * 0.1) 
    return cm, sensitivity, specificity, weightedAccuracy 

iris = datasets.load_iris() 
x=pandas.DataFrame(iris.data, columns=['var1','var2','var3', 'var4']) 
y=pandas.Series(iris.target, name='target') 

response, _ = pandas.factorize(y) 

xTrain, xTest, yTrain, yTest = cross_validation.train_test_split(x, response, test_size = .25, random_state = 36583) 
print "building the first forest" 
rf = RandomForestClassifier(n_estimators = 500, min_samples_split = 2, n_jobs = -1, verbose = 1) 
rf.fit(xTrain, yTrain) 
importances = pandas.DataFrame({'name':x.columns,'imp':rf.feature_importances_ 
           }).sort(['imp'], ascending = False).reset_index(drop = True) 

cm, sensitivity, specificity, weightedAccuracy = get_enhanced_confusion_matrix(yTest, rf.predict(xTest), [0,1]) 
numFeatures = len(x.columns) 

rfeMatrix = pandas.DataFrame({'numFeatures':[numFeatures], 
           'weightedAccuracy':[weightedAccuracy], 
           'sensitivity':[sensitivity], 
           'specificity':[specificity]}) 

print "running RFE on %d features"%numFeatures 

for i in range(1,numFeatures,1): 
    varsUsed = importances['name'][0:i] 
    print "now using %d of %s features"%(len(varsUsed), numFeatures) 
    xTrain, xTest, yTrain, yTest = cross_validation.train_test_split(x[varsUsed], response, test_size = .25) 
    rf = RandomForestClassifier(n_estimators = 500, min_samples_split = 2, 
           n_jobs = -1, verbose = 1) 
    rf.fit(xTrain, yTrain) 
    cm, sensitivity, specificity, weightedAccuracy = get_enhanced_confusion_matrix(yTest, rf.predict(xTest), [0,1]) 
    print("\n"+str(cm)) 
    print('the sensitivity is %d percent'%(sensitivity * 100)) 
    print('the specificity is %d percent'%(specificity * 100)) 
    print('the weighted accuracy is %d percent'%(weightedAccuracy * 100)) 
    rfeMatrix = rfeMatrix.append(
           pandas.DataFrame({'numFeatures':[len(varsUsed)], 
           'weightedAccuracy':[weightedAccuracy], 
           'sensitivity':[sensitivity], 
           'specificity':[specificity]}), ignore_index = True)  
print("\n"+str(rfeMatrix))  
maxAccuracy = rfeMatrix.weightedAccuracy.max() 
maxAccuracyFeatures = min(rfeMatrix.numFeatures[rfeMatrix.weightedAccuracy == maxAccuracy]) 
featuresUsed = importances['name'][0:maxAccuracyFeatures].tolist() 

print "the final features used are %s"%featuresUsed 

Here's my code; I've tidied it up a bit to make it relevant to your task:

# imports this snippet relies on 
import numpy as np 
from pandas import DataFrame 
from sklearn import metrics 
from sklearn.ensemble import RandomForestClassifier 
from sklearn.metrics import classification_report, log_loss 

# train, test, priors, private_priors, status_labels and the helper module kf 
# come from the rest of my script 
features_to_use = fea_cols # this is a list of features 
# empty dataframe to record performance per run 
trim_5_df = DataFrame(columns=features_to_use) 
run=1 
# this will remove the 5 worst features determined by their feature importance computed by the RF classifier 
while len(features_to_use)>6: 
    print('number of features:%d' % (len(features_to_use))) 
    # build the classifier 
    clf = RandomForestClassifier(n_estimators=1000, random_state=0, n_jobs=-1) 
    # train the classifier 
    clf.fit(train[features_to_use], train['OpenStatusMod'].values) 
    print('classifier score: %f\n' % clf.score(train[features_to_use], train['OpenStatusMod'].values)) 
    # predict the class and print the classification report, f1 micro, f1 macro score 
    pred = clf.predict(test[features_to_use]) 
    print(classification_report(test['OpenStatusMod'].values, pred, target_names=status_labels)) 
    print('micro score: ') 
    print(metrics.precision_recall_fscore_support(test['OpenStatusMod'].values, pred, average='micro')) 
    print('macro score:\n') 
    print(metrics.precision_recall_fscore_support(test['OpenStatusMod'].values, pred, average='macro')) 
    # predict the class probabilities 
    probs = clf.predict_proba(test[features_to_use]) 
    # rescale the priors 
    new_probs = kf.cap_and_update_priors(priors, probs, private_priors, 0.001) 
    # calculate logloss with the rescaled probabilities 
    print('log loss: %f\n' % log_loss(test['OpenStatusMod'].values, new_probs)) 
    row={} 
    if hasattr(clf, "feature_importances_"): 
        # sort the features by importance 
        sorted_idx = np.argsort(clf.feature_importances_) 
        # reverse the order so it is descending 
        sorted_idx = sorted_idx[::-1] 
        # add to dataframe 
        row['num_features'] = len(features_to_use) 
        row['features_used'] = ','.join(features_to_use) 
        # trim the worst 5 
        sorted_idx = sorted_idx[:-5] 
        # swap the features list with the trimmed features 
        temp = features_to_use 
        features_to_use = [] 
        for feat in sorted_idx: 
            features_to_use.append(temp[feat]) 
        # add the logloss performance 
        row['logloss'] = [log_loss(test['OpenStatusMod'].values, new_probs)] 
    print('') 
    # add the row to the dataframe 
    trim_5_df = trim_5_df.append(DataFrame(row)) 
    run += 1 

So what I'm doing here is taking the list of features I want to train on and then predict with, using the feature importances to trim the worst 5, and repeating. During each run I add a row recording the prediction performance so that I can do some analysis later.
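To pick the winning subset afterwards (a hypothetical follow-up; my script did this analysis separately), you can take the row of trim_5_df with the lowest logloss:

# hypothetical follow-up: the run with the lowest logloss 
best = trim_5_df[trim_5_df['logloss'] == trim_5_df['logloss'].min()] 
print(best[['num_features', 'features_used']]) 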

The original code was much bigger, as I had different classifiers and data sets I was analysing, but I hope you get the picture from the above. The thing I noticed was that for random forest the number of features I removed on each run affected the performance, so trimming by 1, 3 and 5 features at a time resulted in a different set of best features.

I found that using a GradientBoostingClassifier was more predictable and repeatable, in the sense that the final set of best features agreed whether I trimmed 1 feature at a time or 3 or 5.
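For comparison, the gradient boosting model is a drop-in replacement in the loop above (a sketch; the parameter values here are illustrative, not the ones from my runs):

from sklearn.ensemble import GradientBoostingClassifier 

# replaces the RandomForestClassifier line inside the while loop; 
# GradientBoostingClassifier also exposes feature_importances_, so the 
# trimming logic works unchanged (n_estimators is illustrative) 
clf = GradientBoostingClassifier(n_estimators=100, random_state=0) 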

I hope I'm not teaching you to suck eggs here, as you probably know more than me, but my approach to ablative analysis is to use a fast classifier to get a rough idea of the best sets of features, then use a better-performing classifier, then start hyperparameter tuning; again doing coarse-grained comparisons first and then finer-grained ones once I get a feel for what the best parameters are.
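As a minimal sketch of that coarse-then-fine tuning idea (the grid values are illustrative, and GridSearchCV lives in sklearn.model_selection in newer releases):

from sklearn.ensemble import RandomForestClassifier 
from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in newer releases 

# coarse pass: widely spaced values to find the right region 
coarse = GridSearchCV(RandomForestClassifier(), 
                      param_grid={'n_estimators': [100, 500, 1000], 
                                  'min_samples_leaf': [1, 5, 25]}, 
                      cv=5) 
# after coarse.fit(x, y), narrow the grid around coarse.best_params_ for a fine pass 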


Here is what I did to adapt RandomForestClassifier to work with RFECV:

class RandomForestClassifierWithCoef(RandomForestClassifier): 
    def fit(self, *args, **kwargs): 
        super(RandomForestClassifierWithCoef, self).fit(*args, **kwargs) 
        # expose the feature importances under the attribute name RFE expects 
        self.coef_ = self.feature_importances_ 
        return self 

Just using this class does the trick if you use 'accuracy' or 'f1' scoring. For 'roc_auc', RFECV complains that the multiclass format is not supported. Changing it to a two-class classification with the code below makes the 'roc_auc' scoring work. (Using Python 3.4.1 and scikit-learn 0.15.1)

y=(pd.Series(iris.target, name='target')==2).astype(int) 

Plugging into your code:

from sklearn import datasets 
import pandas as pd 
from pandas import Series 
from sklearn.ensemble import RandomForestClassifier 
from sklearn.feature_selection import RFECV 

class RandomForestClassifierWithCoef(RandomForestClassifier): 
    def fit(self, *args, **kwargs): 
        super(RandomForestClassifierWithCoef, self).fit(*args, **kwargs) 
        # expose the feature importances under the attribute name RFE expects 
        self.coef_ = self.feature_importances_ 
        return self 

iris = datasets.load_iris() 
x=pd.DataFrame(iris.data, columns=['var1','var2','var3', 'var4']) 
y=(pd.Series(iris.target, name='target')==2).astype(int) 
rf = RandomForestClassifierWithCoef(n_estimators=500, min_samples_leaf=5, n_jobs=-1) 
rfecv = RFECV(estimator=rf, step=1, cv=2, scoring='roc_auc', verbose=2) 
selector=rfecv.fit(x, y)
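
To get the explicit list of selected DataFrame columns the question asks for, the fitted selector's boolean mask can be applied to the columns (a small addition to the code above; support_ and ranking_ are standard RFECV attributes):

# boolean mask of the features kept in the optimal subset 
print(selector.support_) 
# names of the selected DataFrame columns 
print(list(x.columns[selector.support_])) 
# ranking_ is 1 for selected features, higher for eliminated ones 
print(selector.ranking_) 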