The seemingly tricky part is making the preprocessing steps identical between nltk and tm, so I think the best approach is to use rpy2 to run the preprocessing in R and pull the results into Python:
import rpy2.robjects as ro

# Run the tm preprocessing in R and pull each processed document back
# into Python as a plain string
preproc = [x[0] for x in ro.r('''
tweets = read.csv("tweets.csv", stringsAsFactors=FALSE)
library(tm)
library(SnowballC)
corpus = Corpus(VectorSource(tweets$Tweet))
corpus = tm_map(corpus, tolower)
corpus = tm_map(corpus, removePunctuation)
corpus = tm_map(corpus, removeWords, c("apple", stopwords("english")))
corpus = tm_map(corpus, stemDocument)''')]
Then you can load it into scikit-learn. The only thing you need to do to get the CountVectorizer and the DocumentTermMatrix to match is to remove terms of length less than 3:
from sklearn.feature_extraction.text import CountVectorizer
def mytokenizer(x):
    # tm's DocumentTermMatrix drops terms shorter than 3 characters by
    # default, so filter those out to match
    return [y for y in x.split() if len(y) > 2]
# Full document-term matrix
cv = CountVectorizer(tokenizer=mytokenizer)
X = cv.fit_transform(preproc)
X
# <1181x3289 sparse matrix of type '<type 'numpy.int64'>'
# with 8980 stored elements in Compressed Sparse Column format>
# Sparse terms removed
cv2 = CountVectorizer(tokenizer=mytokenizer, min_df=0.005)
X2 = cv2.fit_transform(preproc)
X2
# <1181x309 sparse matrix of type '<type 'numpy.int64'>'
# with 4669 stored elements in Compressed Sparse Column format>
Let's verify that this matches the results in R (note that min_df=0.005 in CountVectorizer plays the same role as removeSparseTerms(dtm, 0.995): both keep only terms that appear in at least roughly 0.5% of the documents):
tweets = read.csv("tweets.csv", stringsAsFactors=FALSE)
library(tm)
library(SnowballC)
corpus = Corpus(VectorSource(tweets$Tweet))
corpus = tm_map(corpus, tolower)
corpus = tm_map(corpus, removePunctuation)
corpus = tm_map(corpus, removeWords, c("apple", stopwords("english")))
corpus = tm_map(corpus, stemDocument)
dtm = DocumentTermMatrix(corpus)
dtm
# A document-term matrix (1181 documents, 3289 terms)
#
# Non-/sparse entries: 8980/3875329
# Sparsity : 100%
# Maximal term length: 115
# Weighting : term frequency (tf)
sparse = removeSparseTerms(dtm, 0.995)
sparse
# A document-term matrix (1181 documents, 309 terms)
#
# Non-/sparse entries: 4669/360260
# Sparsity : 99%
# Maximal term length: 20
# Weighting : term frequency (tf)
As you can see, the number of stored elements and the number of terms now match exactly between the two approaches.
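If you want more than matching dimensions, you can also compare the vocabularies directly. This is a minimal sketch, assuming the R verification code above was run through rpy2 in the same session so that sparse exists in the embedded R interpreter; get_feature_names_out requires scikit-learn 1.0+, older versions use get_feature_names instead:

import rpy2.robjects as ro

# Terms kept by removeSparseTerms in R (assumes `sparse` is defined in
# the embedded R session)
r_terms = set(ro.r('colnames(sparse)'))

# Terms kept by CountVectorizer with min_df=0.005
py_terms = set(cv2.get_feature_names_out())

print(len(r_terms), len(py_terms))  # both should be 309
print(r_terms == py_terms)          # True if the pipelines really agree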
For natural language processing in Python, use NLTK. – ramcdougal
@ramcdougal: I gathered that much, but I'm struggling with the documentation. – orome
Take a look at this [tutorial](http://nbviewer.ipython.org/urls/gist.githubusercontent.com/kljensen/9662971/raw/4628ed3a1d27b84a3c56e46d87146c1d08267893/NewHaven.io+NLP+tutorial.ipynb?create=1). It covers tokenization, stop words, and stemming. – ramcdougal
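For reference, a pure-NLTK version of the same preprocessing, along the lines of the tutorial linked above, might look roughly like this. This is a minimal sketch, not the answer's method: NLTK's PorterStemmer and stop word list differ slightly from SnowballC and tm's, so the output won't match token for token:

import csv
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

# nltk.download('stopwords')  # needed once

stemmer = PorterStemmer()
stop = set(stopwords.words('english')) | {'apple'}

def preprocess(text):
    # lowercase and strip punctuation, mirroring tolower/removePunctuation
    text = text.lower().translate(str.maketrans('', '', string.punctuation))
    # drop stop words (plus "apple") and stem what remains
    return ' '.join(stemmer.stem(w) for w in text.split() if w not in stop)

with open('tweets.csv') as f:
    preproc_nltk = [preprocess(row['Tweet']) for row in csv.DictReader(f)]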