2014-01-14

Classification using the movie review corpus in NLTK/Python

I am looking to do some classification along the lines of NLTK Chapter 6. The book seems to skip a step in creating the categories, and I am not sure what I am doing wrong. I have my script here with the response below. My issues largely stem from the first part: category creation based on directory names. Some other questions here have used file names (i.e. pos_1.txt and neg_1.txt), but I would prefer to create directories I could dump files into.

import nltk 
import string 
from nltk.corpus import movie_reviews 
from nltk.corpus.reader import CategorizedPlaintextCorpusReader 

reviews = CategorizedPlaintextCorpusReader('./nltk_data/corpora/movie_reviews', r'(\w+)/*.txt', cat_pattern=r'/(\w+)/.txt') 
reviews.categories() 
['pos', 'neg'] 

documents = [(list(movie_reviews.words(fileid)), category) 
      for category in movie_reviews.categories() 
      for fileid in movie_reviews.fileids(category)] 

all_words=nltk.FreqDist(
    w.lower() 
    for w in movie_reviews.words() 
    if w.lower() not in nltk.corpus.stopwords.words('english') and w.lower() not in string.punctuation) 
word_features = all_words.keys()[:100] 

def document_features(document): 
    document_words = set(document) 
    features = {} 
    for word in word_features: 
        features['contains(%s)' % word] = (word in document_words) 
    return features 
print document_features(movie_reviews.words('pos/11.txt')) 

featuresets = [(document_features(d), c) for (d,c) in documents] 
train_set, test_set = featuresets[100:], featuresets[:100] 
classifier = nltk.NaiveBayesClassifier.train(train_set) 

print nltk.classify.accuracy(classifier, test_set) 
classifier.show_most_informative_features(5) 

This returns:

File "test.py", line 38, in <module> 
    for w in movie_reviews.words() 

File "/usr/local/lib/python2.6/dist-packages/nltk/corpus/reader/plaintext.py", line 184, in words 
    self, self._resolve(fileids, categories)) 

File "/usr/local/lib/python2.6/dist-packages/nltk/corpus/reader/plaintext.py", line 91, in words 
    in self.abspaths(fileids, True, True)]) 

File "/usr/local/lib/python2.6/dist-packages/nltk/corpus/reader/util.py", line 421, in concat 
    raise ValueError('concat() expects at least one object!') 

ValueError: concat() expects at least one object! 

--------- UPDATE ---------

Thanks alvas for your detailed answer! However, I have two questions.

  1. Is it possible to grab the categories the way I am trying to do? I was hoping to do it the same way as the review_pos.txt approach, only grabbing the pos from the folder name rather than the file name.
  2. I ran your code and hit a syntax error at the first for:

    train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
    test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[numtrain:]]

with the caret pointing into the first line. I am a beginning Python user and not familiar enough with the syntax to try to troubleshoot it.
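For question 1, the folder-name-to-category mapping can also be done without NLTK's reader regexes, using only the standard library. This is a minimal sketch with a hypothetical two-file toy corpus (the real corpus root and file names would be your own):

```python
import os
import tempfile

# Build a toy corpus layout: <root>/pos/a.txt and <root>/neg/b.txt.
# (Hypothetical stand-in data, not the real movie_reviews corpus.)
root = tempfile.mkdtemp()
for cat, name, text in [('pos', 'a.txt', 'great film'),
                        ('neg', 'b.txt', 'bad film')]:
    os.makedirs(os.path.join(root, cat))
    with open(os.path.join(root, cat, name), 'w') as f:
        f.write(text)

# The category of each file is simply the directory it lives in.
documents = []
for cat in sorted(os.listdir(root)):
    for fname in sorted(os.listdir(os.path.join(root, cat))):
        with open(os.path.join(root, cat, fname)) as f:
            documents.append((f.read().split(), cat))
```

This yields (tokens, category) pairs in the same shape as the list comprehension in the answer below, with the category taken from the directory name rather than the file name.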

---- UPDATE 2 ----

The error is:

File "review.py", line 17 
    for i in word_features}, tag) 
    ^
SyntaxError: invalid syntax 

I would rather use my way of extracting the category of each file. But you can eat your own dog food (http://en.wikipedia.org/wiki/Eating_your_own_dog_food). As for the syntax error, could you post the error shown on the console? – alvas


Deleted – added to the original post – user3128184


Are you on py2.7 and above? The syntax seems to be failing because of the dictionary comprehension – alvas
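As the comment above suggests, the `{i: (i in tokens) for i in word_features}` dict comprehension is a SyntaxError before Python 2.7. A sketch of the 2.6-compatible equivalent, with toy stand-in lists for word_features and tokens:

```python
# {i: (i in tokens) for i in word_features} needs Python 2.7+;
# dict() over a generator of (key, value) pairs is the
# 2.6-compatible spelling of the same thing.
word_features = ['bad', 'good', 'story']   # toy stand-in
tokens = ['good', 'story', 'plot']         # toy document tokens

features = dict((i, i in tokens) for i in word_features)
```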

Answer

Yes, the tutorial in chapter 6 is aimed at students with basic knowledge, and from there the students should build on it by exploring what is available in NLTK and what is not. So let's solve the problems one at a time. Firstly, the way to get the 'pos'/'neg' documents through the directories is most likely correct, since that is how the corpus is organized.

from nltk.corpus import movie_reviews as mr 
from collections import defaultdict 

documents = defaultdict(list) 

for i in mr.fileids(): 
    documents[i.split('/')[0]].append(i) 

print documents['pos'][:10] # first ten pos reviews. 
print 
print documents['neg'][:10] # first ten neg reviews. 

[OUT]:

['pos/cv000_29590.txt', 'pos/cv001_18431.txt', 'pos/cv002_15918.txt', 'pos/cv003_11664.txt', 'pos/cv004_11636.txt', 'pos/cv005_29443.txt', 'pos/cv006_15448.txt', 'pos/cv007_4968.txt', 'pos/cv008_29435.txt', 'pos/cv009_29592.txt'] 

['neg/cv000_29416.txt', 'neg/cv001_19502.txt', 'neg/cv002_17424.txt', 'neg/cv003_12683.txt', 'neg/cv004_12641.txt', 'neg/cv005_29357.txt', 'neg/cv006_17022.txt', 'neg/cv007_4992.txt', 'neg/cv008_29326.txt', 'neg/cv009_29417.txt'] 

Alternatively, I like a list of tuples where the first element is the list of words in the .txt file and the second is the category. And while doing so, also remove the stopwords and punctuation:

from nltk.corpus import movie_reviews as mr 
import string 
from nltk.corpus import stopwords 
stop = stopwords.words('english') 
documents = [([w for w in mr.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in mr.fileids()] 

Next comes the FreqDist(for w in movie_reviews.words() ...) error. There is nothing wrong with your code, it's just that you should try using namespaces (see http://en.wikipedia.org/wiki/Namespace#Use_in_common_languages). The following code:

from nltk.corpus import movie_reviews as mr 
from nltk.probability import FreqDist 
from nltk.corpus import stopwords 
import string 
stop = stopwords.words('english') 

all_words = FreqDist(w.lower() for w in mr.words() if w.lower() not in stop and w.lower() not in string.punctuation) 

print all_words 

[OUT]:

<FreqDist: 'film': 9517, 'one': 5852, 'movie': 5771, 'like': 3690, 'even': 2565, 'good': 2411, 'time': 2411, 'story': 2169, 'would': 2109, 'much': 2049, ...> 

Since the code above prints the FreqDist correctly, the error makes it seem as though you do not have the files in your nltk_data/ directory.

The fact that you have fic/11.txt suggests that you are using some older version of NLTK or of the NLTK corpora. Normally the fileids in movie_reviews start with either pos/ or neg/, followed by the file name, and finally .txt, e.g. pos/cv001_18431.txt.

So I think maybe you should re-download the files with:

$ python 
>>> import nltk 
>>> nltk.download() 

Then make sure that the movie review corpus is properly downloaded under the corpora tab:

[image: the NLTK downloader window showing the movie_reviews corpus under the Corpora tab]

Back to your code: looping through all the words in the movie review corpus seems redundant if you have already filtered all the words in documents, so I would rather do this to extract the full feature set:

word_features = FreqDist(chain(*[i for i,j in documents])) 
word_features = word_features.keys()[:100] 

featuresets = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents] 
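One caveat with word_features.keys()[:100]: on Python 3 (and in newer NLTK, where FreqDist subclasses collections.Counter), .keys() is not sorted by count, so the 100 features become effectively arbitrary and accuracy drops sharply. A sketch using most_common instead, with a toy Counter standing in for the real FreqDist:

```python
from collections import Counter

# Toy counts standing in for the real FreqDist over the corpus.
all_words = Counter({'film': 9517, 'one': 5852, 'movie': 5771, 'rare': 2})

# most_common() is sorted by descending count, unlike .keys()
# on Python 3, so this reliably picks the most frequent words.
word_features = [w for w, _ in all_words.most_common(3)]
```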

Next, the train/test split of the features is fine, but I think it is better to split by documents, so instead of this:

featuresets = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents] 
train_set, test_set = featuresets[100:], featuresets[:100] 

I would recommend this instead:

numtrain = int(len(documents) * 90/100) 
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]] 
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[numtrain:]] 
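One more caveat: mr.fileids() yields all the neg files first and then all the pos files, so a plain slice leaves the held-out set dominated by one class. A sketch that shuffles before slicing, using a toy documents list and a fixed seed for reproducibility (both are illustrative assumptions, not part of the original answer):

```python
import random

# Toy (tokens, tag) pairs, grouped by tag just as mr.fileids() yields them.
documents = ([(['w%d' % n], 'neg') for n in range(10)]
             + [(['w%d' % n], 'pos') for n in range(10)])

random.seed(0)              # fixed seed so the split is reproducible
random.shuffle(documents)   # mix the classes before slicing

numtrain = int(len(documents) * 90 / 100)
train_set, test_set = documents[:numtrain], documents[numtrain:]
```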

Then feed the data to the classifier, and voila! So here is the code without the comments and walkthrough:

import string 
from itertools import chain 

from nltk.corpus import movie_reviews as mr 
from nltk.corpus import stopwords 
from nltk.probability import FreqDist 
from nltk.classify import NaiveBayesClassifier as nbc 
import nltk 

stop = stopwords.words('english') 
documents = [([w for w in mr.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in mr.fileids()] 

word_features = FreqDist(chain(*[i for i,j in documents])) 
word_features = word_features.keys()[:100] 

numtrain = int(len(documents) * 90/100) 
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]] 
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[numtrain:]] 

classifier = nbc.train(train_set) 
print nltk.classify.accuracy(classifier, test_set) 
classifier.show_most_informative_features(5) 

[OUT]:

0.655 
Most Informative Features 
        bad = True    neg : pos =  2.0 : 1.0 
        script = True    neg : pos =  1.5 : 1.0 
        world = True    pos : neg =  1.5 : 1.0 
       nothing = True    neg : pos =  1.5 : 1.0 
        bad = False    pos : neg =  1.5 : 1.0 

I see. But one strange result I get is that Naive Bayes gives an answer of 0.16 to 0.17, which I find very strange. Any possible reason why this happens? – Arqam


alvas I tried the same code. But I only get 0.16. Why? – 2016-12-23 16:58:50