I want to use an sklearn classifier with n-gram features. In addition, I want to do cross-validation to find the best order of the n-grams. However, I am a bit stuck on how to put everything together.
Right now, I have the following code:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
text = ... # This is the input text. A list of strings
labels = ... # These are the labels of each sentence
# Find the optimal order of the ngrams by cross-validation
scores = pd.Series(index=range(1,6), dtype=float)
folds = KFold(n_splits=3)
for n in range(1,6):
    count_vect = CountVectorizer(ngram_range=(n,n), stop_words='english')
    X = count_vect.fit_transform(text)
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.33, random_state=42)
    clf = MultinomialNB()
    score = cross_val_score(clf, X_train, y_train, cv=folds, n_jobs=-1)
    scores.loc[n] = np.mean(score)
# Evaluate the classifier using the best order found
order = scores.idxmax()
count_vect = CountVectorizer(ngram_range=(order,order), stop_words='english')
X = count_vect.fit_transform(text)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.33, random_state=42)
clf = MultinomialNB()
clf = clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print('Accuracy is {}'.format(acc))
However, I feel this is the wrong way to do it, because I create a train-test split in every loop iteration. If I do the train-test split beforehand and apply the CountVectorizer to both parts separately, those parts end up with different shapes, which causes problems when using clf.fit and clf.score.
How can I solve this problem?
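For reference, one common way to keep the vectorizer inside the cross-validation loop is to wrap it together with the classifier in a `Pipeline`, so the vocabulary is rebuilt from the training folds only. The sketch below is illustrative, not from the original post; the tiny `text`/`labels` data is made up:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score, KFold
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy data standing in for the question's `text` and `labels`.
text = ["the cat sat on the mat", "dogs bark loudly", "cats purr softly",
        "the dog chased the cat", "birds sing at dawn", "the mat was red"]
labels = [0, 1, 0, 1, 1, 0]

scores = {}
for n in range(1, 3):
    # The vectorizer is fitted inside each CV split, on training folds only.
    pipe = Pipeline([
        ('vect', CountVectorizer(ngram_range=(n, n))),
        ('clf', MultinomialNB()),
    ])
    scores[n] = np.mean(cross_val_score(pipe, text, labels, cv=KFold(n_splits=3)))

best_n = max(scores, key=scores.get)
```

Because `cross_val_score` receives the raw text, each fold fits its own vocabulary, which avoids the leakage from calling `fit_transform` on all the data up front.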
EDIT: If I try to build a vocabulary first, I still have to build several vocabularies, since the vocabulary for unigrams is different from the one for bigrams, and so on.
For example:
import nltk

# unigram vocab
vocab = set()
for sentence in text:
    for word in sentence:
        if word not in vocab:
            vocab.add(word)
len(vocab)  # 47291

# bigram vocab
vocab = set()
for sentence in text:
    bigrams = nltk.ngrams(sentence, 2)
    for bigram in bigrams:
        if bigram not in vocab:
            vocab.add(bigram)
len(vocab)  # 326044
This again leads to the same problem of needing to apply a separate CountVectorizer for each n-gram size.
Build the vocabulary first, from the training set. And nothing stops you from putting both unigrams and bigrams in the same dictionary. – alexis
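The comment's suggestion can be sketched with a single vectorizer: `ngram_range=(1, 2)` puts unigrams and bigrams into one vocabulary, fitted on the training texts only and then reused for the test texts, so both parts get the same number of columns. The two-sentence corpus below is made up for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer

train_text = ["the cat sat", "dogs bark"]
test_text = ["the cat bark"]

# One vocabulary holding both unigrams and bigrams, learned from training data.
vect = CountVectorizer(ngram_range=(1, 2))
X_train = vect.fit_transform(train_text)  # fits the vocabulary
X_test = vect.transform(test_text)        # reuses it: columns match

# 8 features: 5 unigrams + 3 bigrams ("the cat", "cat sat", "dogs bark")
X_train.shape[1] == X_test.shape[1]  # True
```

Calling `transform` (not `fit_transform`) on the test part is what keeps the shapes consistent for `clf.fit` and `clf.score`.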