2013-04-09 36 views
11

Efficient term-document matrix with NLTK

I want to create a term-document matrix with NLTK and pandas. I wrote the following function:

def fnDTM_Corpus(xCorpus): 
    '''create a Term-Document Matrix from an NLTK corpus''' 
    import nltk 
    import pandas as pd 
    fd_list = [] 
    for fileid in xCorpus.fileids(): 
        fd_list.append(nltk.FreqDist(xCorpus.words(fileid))) 
    DTM = pd.DataFrame(fd_list, index=xCorpus.fileids()) 
    DTM.fillna(0, inplace=True) 
    return DTM.T 

To run it:

import nltk 
from nltk.corpus import PlaintextCorpusReader 
corpus_root = 'C:/Data/' 

newcorpus = PlaintextCorpusReader(corpus_root, '.*') 

x = fnDTM_Corpus(newcorpus) 

It works great on a corpus of a few small files, but gives me a MemoryError when I try to run it on a corpus of 4,000 files (each about 2 kB).

Am I missing something?

I am using 32-bit Python (on Windows 7, 64-bit OS, quad-core CPU, 8 GB RAM). Do I really need 64-bit Python for a corpus of this size?

+1

Have you tried 'gensim' or similar libraries that already have optimized tf-idf code? http://radimrehurek.com/gensim/ – alvas 2013-04-09 14:43:07
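To illustrate the commenter's suggestion, here is a minimal sketch of the gensim route, assuming gensim's standard Dictionary/TfidfModel API and a hypothetical list of pre-tokenized documents:

from gensim import corpora, models

texts = [['john', 'likes', 'nltk'], ['bob', 'likes', 'pandas']]  # hypothetical tokenized docs
dictionary = corpora.Dictionary(texts)                   # term <-> id mapping
bow_corpus = [dictionary.doc2bow(doc) for doc in texts]  # sparse bag-of-words vectors
tfidf = models.TfidfModel(bow_corpus)                    # tf-idf weighting, stays sparse
for doc in tfidf[bow_corpus]:
    print(doc)                                           # list of (term_id, weight) pairs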

+0

4,000 documents is a pretty small corpus. What you need is a [sparse](https://en.wikipedia.org/wiki/Sparse_matrix) representation; pandas has one, and so do Gensim and scikit-learn. – 2013-04-09 15:03:53
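A minimal sketch of what that sparse route could look like with scikit-learn and pandas (hypothetical documents; DataFrame.sparse.from_spmatrix and get_feature_names_out require reasonably recent pandas/scikit-learn versions):

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

docs = ['john likes nltk', 'bob likes pandas']   # hypothetical documents
vec = CountVectorizer()
X = vec.fit_transform(docs)                      # scipy sparse matrix, never densified
dtm = pd.DataFrame.sparse.from_spmatrix(X, columns=vec.get_feature_names_out())
print(dtm)                                       # document-term matrix backed by sparse storage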

+0

I thought 'pd.get_dummies(df_column)' could do the job. Maybe I'm missing something about what a document-term matrix is – 2015-11-06 05:00:04

Answers

19

Thanks to Radim and Larsmans. My goal was to have a DTM like the one you get in R's tm package. I decided to use scikit-learn, partly inspired by this blog entry. This is the code I came up with.

I'm posting it here in the hope that someone else will find it useful.

import pandas as pd 
from sklearn.feature_extraction.text import CountVectorizer 

def fn_tdm_df(docs, xColNames=None, **kwargs): 
    '''create a term-document matrix as a pandas DataFrame; 
    **kwargs are passed through to CountVectorizer; 
    if xColNames is given, the DataFrame gets those column names''' 

    # initialize the vectorizer 
    vectorizer = CountVectorizer(**kwargs) 
    x1 = vectorizer.fit_transform(docs) 
    # create the DataFrame (note: toarray() densifies the sparse matrix) 
    df = pd.DataFrame(x1.toarray().transpose(), index=vectorizer.get_feature_names()) 
    if xColNames is not None: 
        df.columns = xColNames 

    return df 

To use it on a list of texts read from files in a directory:

DIR = 'C:/Data/' 

def fn_CorpusFromDIR(xDIR): 
    '''create a corpus from a directory 
    Input: a directory path 
    Output: a dictionary with 
      names derived from the files ['ColNames'] 
      the texts of the corpus ['docs']''' 
    import os 
    Res = dict(docs=[open(os.path.join(xDIR, f)).read() for f in os.listdir(xDIR)], 
               ColNames=['P_' + f[0:6] for f in os.listdir(xDIR)]) 
    return Res 

Create the DataFrame:

d1 = fn_tdm_df(docs=fn_CorpusFromDIR(DIR)['docs'], 
               xColNames=fn_CorpusFromDIR(DIR)['ColNames'], 
               stop_words=None, charset_error='replace') 
22

I know the OP wanted to create a TDM in NLTK, but the textmining package (pip install textmining) makes it dead simple:

import textmining 

def termdocumentmatrix_example(): 
    # Create some very short sample documents 
    doc1 = 'John and Bob are brothers.' 
    doc2 = 'John went to the store. The store was closed.' 
    doc3 = 'Bob went to the store too.' 
    # Initialize class to create term-document matrix 
    tdm = textmining.TermDocumentMatrix() 
    # Add the documents 
    tdm.add_doc(doc1) 
    tdm.add_doc(doc2) 
    tdm.add_doc(doc3) 
    # Write out the matrix to a csv file. Note that setting cutoff=1 means 
    # that words which appear in 1 or more documents will be included in 
    # the output (i.e. every word will appear in the output). The default 
    # for cutoff is 2, since we usually aren't interested in words which 
    # appear in a single document. For this example we want to see all 
    # words however, hence cutoff=1. 
    tdm.write_csv('matrix.csv', cutoff=1) 
    # Instead of writing out the matrix you can also access its rows directly. 
    # Let's print them to the screen. 
    for row in tdm.rows(cutoff=1): 
        print(row) 

termdocumentmatrix_example() 

Output:

['and', 'the', 'brothers', 'to', 'are', 'closed', 'bob', 'john', 'was', 'went', 'store', 'too'] 
[1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0] 
[0, 2, 0, 1, 0, 1, 0, 1, 1, 1, 2, 0] 
[0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1] 
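Since the question ultimately asks for a pandas DTM, the CSV written above can be loaded back into a DataFrame; a small sketch, assuming the matrix.csv produced by the example:

import pandas as pd

dtm = pd.read_csv('matrix.csv')   # one row per document, one column per term
print(dtm.T)                      # transpose if you want terms as rows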

Alternatively, one can use pandas and sklearn [source]:

import pandas as pd 
from sklearn.feature_extraction.text import CountVectorizer 

docs = ['why hello there', 'omg hello pony', 'she went there? omg'] 
vec = CountVectorizer() 
X = vec.fit_transform(docs) 
df = pd.DataFrame(X.toarray(), columns=vec.get_feature_names()) 
print(df) 

Output:

   hello  omg  pony  she  there  went  why 
0      1    0     0    0      1     0    1 
1      1    1     1    0      0     0    0 
2      0    1     0    1      1     1    0 
+1

I got an error when running the code: import stemmer ImportError: No module named 'stemmer'. How can I fix it? I already tried pip install stemmer. – 2017-02-08 10:03:12

+0

What version of Python are you using? It's possible that a stemmer module import has been introduced into the textmining package. I just ran 'pip install textmining' and then ran the code above on 2.7.9, and got the expected output. – duhaime 2017-02-08 12:34:44

+0

I'm using Python 3.5, Anaconda, Windows 10. I ran 'pip install textmining', then copied and ran the code. – 2017-02-08 13:33:44

0

An alternative approach using tokens and a DataFrame:

import nltk 
import pandas as pd 
# nltk.download() to get the tokenizer models, if needed 
from urllib import request 
url = "http://www.gutenberg.org/files/2554/2554-0.txt" 
response = request.urlopen(url) 
raw = response.read().decode('utf8') 
type(raw) 

tokens = nltk.word_tokenize(raw) 
type(tokens) 

tokens[1:10] 
['Project', 
 'Gutenberg', 
 'EBook', 
 'of', 
 'Crime', 
 'and', 
 'Punishment', 
 ',', 
 'by'] 

tokens2 = pd.DataFrame(tokens) 
tokens2.columns = ['Words'] 
tokens2.head() 

       Words 
0        The 
1    Project 
2  Gutenberg 
3      EBook 
4         of 

tokens2.Words.value_counts().head() 
,      16178 
.       9589 
the     7436 
and     6284 
to      5278
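
A hedged sketch extending this token/DataFrame idea to several documents: count tokens per document and pivot them into a term-document matrix with pd.crosstab (document names and texts below are hypothetical):

import nltk
import pandas as pd

raw_docs = {'doc1': 'John and Bob are brothers.',
            'doc2': 'Bob went to the store.'}
rows = [(name, tok.lower())
        for name, text in raw_docs.items()
        for tok in nltk.word_tokenize(text)]
long_df = pd.DataFrame(rows, columns=['doc', 'term'])
tdm = pd.crosstab(long_df['term'], long_df['doc'])   # term counts per document
print(tdm)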