
Strange behavior of the nltk sentence tokenizer with special characters: when using the sent_tokenizer on German text, some odd behavior occurs.

Example code:

import nltk

sent_tokenizer = nltk.data.load('tokenizers/punkt/german.pickle')
for sent in sent_tokenizer.tokenize("Super Qualität. Tolles Teil."):
    print sent

This fails with the following error:

Traceback (most recent call last): 
for sent in sent_tokenize("Super Qualität. Tolles Teil."): 
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/tokenize/__init__.py", line 82, in sent_tokenize 
    return tokenizer.tokenize(text) 
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1270, in tokenize 
    return list(self.sentences_from_text(text, realign_boundaries)) 
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1318, in sentences_from_text 
    return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)] 
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1309, in span_tokenize 
    return [(sl.start, sl.stop) for sl in slices] 
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1348, in _realign_boundaries 
    for sl1, sl2 in _pair_iter(slices): 
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 354, in _pair_iter 
    prev = next(it) 
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1324, in _slices_from_text 
    if self.text_contains_sentbreak(context): 
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1369, in text_contains_sentbreak 
    for t in self._annotate_tokens(self._tokenize_words(text)): 
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1504, in _annotate_second_pass 
    for t1, t2 in _pair_iter(tokens): 
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 354, in _pair_iter 
    prev = next(it) 
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 621, in _annotate_first_pass 
    for aug_tok in tokens: 
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 586, in _tokenize_words 
    for line in plaintext.split('\n'): 
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 6: ordinal not in range(128) 

Whereas:

sent_tokenizer = nltk.data.load('tokenizers/punkt/german.pickle')
for sent in sent_tokenizer.tokenize("Super Qualität des Produktes. Tolles Teil."):
    print sent

works perfectly.
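For context, here is a minimal Python 2 sketch (the variable name is illustrative) showing that a literal without a u'' prefix is a byte string, and that decoding such bytes as ASCII raises the same kind of error as in the traceback above:

# -*- coding: utf-8 -*-
text = "Super Qualität. Tolles Teil."  # no u'' prefix: raw UTF-8 bytes in Python 2

print type(text)                 # <type 'str'> (bytes)
print type(text.decode('utf8'))  # <type 'unicode'>

text.decode('ascii')  # UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 ...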


Did you miss the "r" at the end of the function name? 'for sent in sent_tokenize("Super Qualität. Tolles Teil."):' – 2014-11-04 11:07:43


@Mr.Polywhirl That is just a typo in the question :-). It is not the problem. – Chris 2014-11-04 11:12:15


The problem lies with sentences whose last word contains non-ASCII characters, but I don't know the reason. If you write it as "Super Qualitt. Tolles Teil." (dropping the umlaut), it works. – 2014-11-04 12:03:55

Answer


I found the solution on the nltk homepage:

Caution: when tokenizing a Unicode string, make sure you are not using an encoded version of the string (it may be necessary to decode it first, e.g. with s.decode("utf8")).

So

# -*- coding: utf-8 -*-
import nltk

text = "Super Qualität. Tolles Teil."
sent_tokenizer = nltk.data.load('tokenizers/punkt/german.pickle')
for sent in sent_tokenizer.tokenize(text.decode('utf8')):
    print sent

works like a charm.
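For what it's worth, on Python 3 this decode step is unnecessary, since str is Unicode by default. A minimal equivalent sketch, assuming a current NLTK whose sent_tokenize convenience function takes a language argument:

import nltk

text = "Super Qualität. Tolles Teil."
# Python 3: str is already Unicode, so no .decode() is needed.
for sent in nltk.sent_tokenize(text, language='german'):
    print(sent)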