2011-02-09
62

Creating a new corpus with NLTK

I figured that the answer to my title is often to go and read the documentation, but I ran through the NLTK book and it doesn't give the answer. I'm fairly new to Python.

I have a bunch of .txt files, and I want to be able to use the corpus functions that NLTK provides for its corpora in nltk_data.

I've tried PlaintextCorpusReader, but I couldn't get further than:

>>> import nltk
>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = './'
>>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
>>> newcorpus.words()

How do I segment the sentences of newcorpus using punkt? I tried the punkt functions, but it seems they can't read a PlaintextCorpusReader object.

Can you also show me how to write the segmented data into text files?

Edit: This question has had a bounty once, and it now has a second one. See the text in the bounty box.

Answers

32

I think the PlaintextCorpusReader already segments its input with a punkt tokenizer, at least if your input language is English.

PlaintextCorpusReader's constructor:

def __init__(self, root, fileids,
             word_tokenizer=WordPunctTokenizer(),
             sent_tokenizer=nltk.data.LazyLoader(
                 'tokenizers/punkt/english.pickle'),
             para_block_reader=read_blankline_block,
             encoding='utf8'):

You can pass the reader a word and a sentence tokenizer, but for the latter the default already is nltk.data.LazyLoader('tokenizers/punkt/english.pickle').
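For instance, here is a minimal sketch of constructing a reader with the tokenizers passed explicitly (the corpus root and fileid pattern are placeholders; the tokenizer arguments simply restate the constructor's defaults shown above):

import nltk.data
from nltk.corpus import PlaintextCorpusReader
from nltk.tokenize import WordPunctTokenizer

reader = PlaintextCorpusReader(
    './', r'.*\.txt',  # placeholder root and fileid pattern
    word_tokenizer=WordPunctTokenizer(),
    sent_tokenizer=nltk.data.LazyLoader('tokenizers/punkt/english.pickle'))

reader.sents()  # sentences come back segmented by punkt, no extra work needed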

For a single string, the tokenizer would be used as follows (explained here; see section 5 for the punkt tokenizer).

>>> import nltk.data 
>>> text = """ 
... Punkt knows that the periods in Mr. Smith and Johann S. Bach 
... do not mark sentence boundaries. And sometimes sentences 
... can start with non-capitalized words. i is a good variable 
... name. 
... """ 
>>> tokenizer = nltk.data.load('tokenizers/punkt/english.pickle') 
>>> tokenizer.tokenize(text.strip()) 
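This should give three sentences for the text above; note that punkt preserves the whitespace (including the line breaks) inside each sentence. To then write the segmented sentences out to a text file, here is a minimal sketch (the name sentences.txt is hypothetical):

sents = tokenizer.tokenize(text.strip())  # three sentences for the text above
with open('sentences.txt', 'w') as fout:  # 'sentences.txt' is a hypothetical name
    fout.write('\n'.join(s.replace('\n', ' ') for s in sents))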
Thanks for the explanation. Got it. But how do I output the segmented sentences into a separate txt file? – alvas 2011-02-10 08:22:13

Both links are broken, 404. Could some sweet soul update the links? – mtk 2016-08-19 23:14:16

Fixed the first link. No idea which document the second one used to point to. – alexis 2017-10-31 17:55:58

9
>>> import nltk
>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = './'
>>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
# If the ./ dir contains the file my_corpus.txt, then you
# can view, say, all of its words by doing this:
>>> newcorpus.words('my_corpus.txt')
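Since the reader plugs in the punkt sentence tokenizer by default (see the constructor in the answer above), sentence-level access works the same way; a small follow-up sketch, assuming the same my_corpus.txt:

>>> newcorpus.sents('my_corpus.txt')  # a list of sentences, each a list of tokens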
44

Some years later, having figured out how it works, here's an updated tutorial on how to create an NLTK corpus from a directory of text files.

The main idea is to make use of the nltk.corpus.reader package. In the case that you have a directory of text files in English, it's best to use the PlaintextCorpusReader.

If you have a directory that looks like this:

newcorpus/ 
     file1.txt 
     file2.txt 
     ... 

then with just a few lines of code you can get a corpus:

import os 
from nltk.corpus.reader.plaintext import PlaintextCorpusReader 

corpusdir = 'newcorpus/' # Directory of corpus. 

newcorpus = PlaintextCorpusReader(corpusdir, '.*') 
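As a quick sanity check (a minimal sketch, assuming the directory above), you can list the files the reader picked up:

print(newcorpus.fileids())  # e.g. ['file1.txt', 'file2.txt', ...]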

NOTE: the PlaintextCorpusReader will use the default nltk.tokenize.sent_tokenize() and nltk.tokenize.word_tokenize() to split your texts into sentences and words. These functions are built for English and may NOT work for all languages.

Here's the full code, with the creation of test text files, how to create a corpus with NLTK, and how to access the corpus at different levels:

import os
from nltk.corpus.reader.plaintext import PlaintextCorpusReader

# Let's create a corpus with two texts in different text files.
txt1 = """This is a foo bar sentence.\nAnd this is the first txtfile in the corpus."""
txt2 = """Are you a foo bar? Yes I am. Possibly, everyone is.\n"""
corpus = [txt1, txt2]

# Make a new dir for the corpus.
corpusdir = 'newcorpus/'
if not os.path.isdir(corpusdir):
    os.mkdir(corpusdir)

# Output the files into the directory.
filename = 0
for text in corpus:
    filename += 1
    with open(corpusdir + str(filename) + '.txt', 'w') as fout:
        print(text, file=fout)

# Check that our corpus does exist and the files are correct.
assert os.path.isdir(corpusdir)
for infile, text in zip(sorted(os.listdir(corpusdir)), corpus):
    with open(corpusdir + infile, 'r') as fin:
        assert fin.read().strip() == text.strip()

# Create a new corpus by specifying the parameters:
# (1) the directory of the new corpus
# (2) the fileids of the corpus
# NOTE: in this case the fileids are simply the filenames.
newcorpus = PlaintextCorpusReader('newcorpus/', '.*')

# Access each file in the corpus.
for infile in sorted(newcorpus.fileids()):
    print(infile)  # The fileid of each file.
    with newcorpus.open(infile) as fin:  # Opens the file stream.
        print(fin.read().strip())  # Prints the content of the file.
print()

# Access the plaintext; outputs a pure string.
print(newcorpus.raw().strip())
print()

# Access paragraphs in the corpus. (list of list of list of strings)
# NOTE: NLTK automatically calls nltk.tokenize.sent_tokenize and
#       nltk.tokenize.word_tokenize.
#
# Each element in the outermost list is a paragraph, and
# each paragraph contains sentence(s), and
# each sentence contains token(s).
print(newcorpus.paras())
print()

# To access paragraphs of a specific fileid.
print(newcorpus.paras(newcorpus.fileids()[0]))

# Access sentences in the corpus. (list of list of strings)
# NOTE: the texts are flattened into sentences that contain tokens.
print(newcorpus.sents())
print()

# To access sentences of a specific fileid.
print(newcorpus.sents(newcorpus.fileids()[0]))

# Access just the tokens/words in the corpus. (list of strings)
print(newcorpus.words())

# To access tokens of a specific fileid.
print(newcorpus.words(newcorpus.fileids()[0]))

Finally, to read a directory of texts and create an NLTK corpus in another language, you must first ensure that you have python-callable word tokenization and sentence tokenization modules that take string input and produce output like this:

>>> from nltk.tokenize import sent_tokenize, word_tokenize 
>>> txt1 = """This is a foo bar sentence.\nAnd this is the first txtfile in the corpus.""" 
>>> sent_tokenize(txt1) 
['This is a foo bar sentence.', 'And this is the first txtfile in the corpus.'] 
>>> word_tokenize(sent_tokenize(txt1)[0]) 
['This', 'is', 'a', 'foo', 'bar', 'sentence', '.'] 
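Once you have such callables, you can wire them into PlaintextCorpusReader. A minimal sketch, assuming the newcorpus/ directory from above; the regex rules here are crude placeholders that a real tokenizer for your language would replace:

from nltk.corpus.reader.plaintext import PlaintextCorpusReader
from nltk.tokenize import RegexpTokenizer

# Placeholder tokenizers; any object with a .tokenize(str) method works.
word_tok = RegexpTokenizer(r'\w+|\S')          # crude word splitting
sent_tok = RegexpTokenizer(r'[^.!?]+[.!?]*')   # crude sentence splitting

othercorpus = PlaintextCorpusReader('newcorpus/', '.*',
                                    word_tokenizer=word_tok,
                                    sent_tokenizer=sent_tok)

print(othercorpus.sents())  # now segmented with the custom tokenizers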