After some years of figuring out how it works, here's the updated tutorial: How to create an NLTK corpus from a directory of text files?
The main idea is to make use of the nltk.corpus.reader package. If you have a directory of text files in English, it's best to use the PlaintextCorpusReader.
If you have a directory that looks like this:
newcorpus/
    file1.txt
    file2.txt
    ...
then with just a few lines of code you can get a corpus:
import os
from nltk.corpus.reader.plaintext import PlaintextCorpusReader
corpusdir = 'newcorpus/' # Directory of corpus.
newcorpus = PlaintextCorpusReader(corpusdir, '.*')
NOTE: The PlaintextCorpusReader will use the default nltk.tokenize.sent_tokenize() and nltk.tokenize.word_tokenize() to split your texts into sentences and words. These functions are built for English, so they may NOT work for all languages.
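If the defaults don't suit your language, PlaintextCorpusReader also accepts word_tokenizer and sent_tokenizer keyword arguments, so you can swap in your own tokenizers. A minimal sketch, assuming simple regex patterns are good enough for your language (the patterns here are illustrative, not a recommendation):

from nltk.corpus.reader.plaintext import PlaintextCorpusReader
from nltk.tokenize import RegexpTokenizer

# Assumption: regex-based tokenization is adequate for the target language.
word_tokenizer = RegexpTokenizer(r'\w+|[^\w\s]+')      # words or runs of punctuation
sent_tokenizer = RegexpTokenizer(r'[^.!?]+[.!?]*')     # naive sentence splitter

newcorpus = PlaintextCorpusReader('newcorpus/', '.*',
                                  word_tokenizer=word_tokenizer,
                                  sent_tokenizer=sent_tokenizer)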
Here's the full code: it creates the test text files, builds a corpus with NLTK, and shows how to access the corpus at different levels:
import os
from nltk.corpus.reader.plaintext import PlaintextCorpusReader

# Let's create a corpus with 2 texts in different textfiles.
txt1 = """This is a foo bar sentence.\nAnd this is the first txtfile in the corpus."""
txt2 = """Are you a foo bar? Yes I am. Possibly, everyone is.\n"""
corpus = [txt1, txt2]

# Make a new dir for the corpus.
corpusdir = 'newcorpus/'
if not os.path.isdir(corpusdir):
    os.mkdir(corpusdir)

# Output the files into the directory.
filename = 0
for text in corpus:
    filename += 1
    with open(corpusdir + str(filename) + '.txt', 'w') as fout:
        print(text, file=fout)

# Check that our corpus does exist and the files are correct.
assert os.path.isdir(corpusdir)
for infile, text in zip(sorted(os.listdir(corpusdir)), corpus):
    with open(corpusdir + infile, 'r') as fin:
        assert fin.read().strip() == text.strip()

# Create a new corpus by specifying the parameters:
# (1) directory of the new corpus
# (2) the fileids of the corpus
# NOTE: in this case the fileids are simply the filenames.
newcorpus = PlaintextCorpusReader('newcorpus/', '.*')

# Access each file in the corpus.
for infile in sorted(newcorpus.fileids()):
    print(infile)  # The fileid of each file.
    with newcorpus.open(infile) as fin:  # Opens the file.
        print(fin.read().strip())  # Prints the content of the file.
    print()

# Access the plaintext; outputs a pure string/basestring.
print(newcorpus.raw().strip())
print()

# Access paragraphs in the corpus. (list of list of list of strings)
# NOTE: NLTK automatically calls nltk.tokenize.sent_tokenize and
# nltk.tokenize.word_tokenize.
#
# Each element in the outermost list is a paragraph, and
# each paragraph contains sentence(s), and
# each sentence contains token(s).
print(newcorpus.paras())
print()

# To access paragraphs of a specific fileid.
print(newcorpus.paras(newcorpus.fileids()[0]))

# Access sentences in the corpus. (list of list of strings)
# NOTE: the texts are flattened into sentences that contain tokens.
print(newcorpus.sents())
print()

# To access sentences of a specific fileid.
print(newcorpus.sents(newcorpus.fileids()[0]))

# Access just tokens/words in the corpus. (list of strings)
print(newcorpus.words())

# To access tokens of a specific fileid.
print(newcorpus.words(newcorpus.fileids()[0]))
Finally, to read a directory of texts and create an NLTK corpus in another language, you must first ensure that you have Python-callable word tokenization and sentence tokenization modules that take string/basestring input and produce output like this:
>>> from nltk.tokenize import sent_tokenize, word_tokenize
>>> txt1 = """This is a foo bar sentence.\nAnd this is the first txtfile in the corpus."""
>>> sent_tokenize(txt1)
['This is a foo bar sentence.', 'And this is the first txtfile in the corpus.']
>>> word_tokenize(sent_tokenize(txt1)[0])
['This', 'is', 'a', 'foo', 'bar', 'sentence', '.']
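Once you have such callables for your language, you can wrap them in objects exposing a tokenize() method (which is what PlaintextCorpusReader expects) and pass them to the reader. A minimal sketch, where my_sent_tokenize and my_word_tokenize are hypothetical stand-ins for your language's tokenizers:

from nltk.corpus.reader.plaintext import PlaintextCorpusReader

# Hypothetical stand-ins: replace with real tokenizers for your language.
def my_sent_tokenize(text):
    return [s for s in text.split('\n') if s]  # assumption: one sentence per line

def my_word_tokenize(sentence):
    return sentence.split()  # assumption: words are whitespace-separated

class CallableTokenizer:
    """Wraps a plain function so it exposes the tokenize() method
    that PlaintextCorpusReader expects."""
    def __init__(self, func):
        self.func = func
    def tokenize(self, text):
        return self.func(text)

newcorpus = PlaintextCorpusReader('newcorpus/', '.*',
                                  word_tokenizer=CallableTokenizer(my_word_tokenize),
                                  sent_tokenizer=CallableTokenizer(my_sent_tokenize))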
Thanks for the explanation. Got it. But how do I output the segmented sentences into a separate txt file? – alvas 2011-02-10 08:22:13
Both links are broken, 404. Could some kind soul update the links? – mtk 2016-08-19 23:14:16
Fixed the first link. No idea which file the second one used to point to. – alexis 2017-10-31 17:55:58
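As for the comment about writing the segmented sentences into a separate txt file, a minimal sketch (sentences.txt is a hypothetical output filename):

from nltk.corpus.reader.plaintext import PlaintextCorpusReader

newcorpus = PlaintextCorpusReader('newcorpus/', '.*')

# Write one tokenized sentence per line to the output file.
with open('sentences.txt', 'w') as fout:
    for sent in newcorpus.sents():
        print(' '.join(sent), file=fout)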