
I recently started using the nltk module for text analysis. I am stuck at one point. I want to use word_tokenize on a dataframe, so as to obtain all the words used in a particular row of the dataframe. How can I use word_tokenize on a dataframe?

data example: 
     text 
1. This is a very good site. I will recommend it to others. 
2. Can you please give me a call at 9983938428. have issues with the listings. 
3. good work! keep it up 
4. not a very helpful site in finding home decor. 

expected output: 

1. 'This','is','a','very','good','site','.','I','will','recommend','it','to','others','.' 
2. 'Can','you','please','give','me','a','call','at','9983938428','.','have','issues','with','the','listings' 
3. 'good','work','!','keep','it','up' 
4. 'not','a','very','helpful','site','in','finding','home','decor' 

Basically, I want to separate out all the words and find the length of each text in the dataframe.

I know word_tokenize works on a single string, but how do I apply it to the entire dataframe?
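For reference, this is how word_tokenize behaves on one string (a minimal sketch; it assumes NLTK's punkt tokenizer data has been downloaded):

import nltk

# One-time download of the tokenizer models (skip if already installed).
nltk.download('punkt')

text = "This is a very good site. I will recommend it to others."
tokens = nltk.word_tokenize(text)
print(tokens)
# ['This', 'is', 'a', 'very', 'good', 'site', '.', 'I', 'will',
#  'recommend', 'it', 'to', 'others', '.']
print(len(tokens))  # 14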

Please help!

Thanks in advance...


Your question is missing the input data, your code, and your expected output. Can you flesh it out? Thanks – EdChum


@EdChum: edited the question. Hope it now has the required information. – eclairs

Answers


You can use the apply method of the dataframe API:

import pandas as pd
import nltk

df = pd.DataFrame({'sentences': [
    'This is a very good site. I will recommend it to others.',
    'Can you please give me a call at 9983938428. have issues with the listings.',
    'good work! keep it up']})
# Tokenize each row of the 'sentences' column and store the token lists in a new column.
df['tokenized_sents'] = df.apply(lambda row: nltk.word_tokenize(row['sentences']), axis=1)

Output:

>>> df 
              sentences \ 
0 This is a very good site. I will recommend it ... 
1 Can you please give me a call at 9983938428. h... 
2        good work! keep it up 

            tokenized_sents 
0 [This, is, a, very, good, site, ., I, will, re... 
1 [Can, you, please, give, me, a, call, at, 9983... 
2      [good, work, !, keep, it, up] 

To find the length of each text, try using apply and a lambda function again:

# Token count per row, derived from the tokenized column.
df['sents_length'] = df.apply(lambda row: len(row['tokenized_sents']), axis=1)

>>> df 
              sentences \ 
0 This is a very good site. I will recommend it ... 
1 Can you please give me a call at 9983938428. h... 
2        good work! keep it up 

            tokenized_sents sents_length 
0 [This, is, a, very, good, site, ., I, will, re...   14 
1 [Can, you, please, give, me, a, call, at, 9983...   15 
2      [good, work, !, keep, it, up]    6 
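As a side note (not part of the original answer), the 'tokenized_sents' column holds Python lists, so the same lengths can also be computed with pandas' Series.str.len, which counts the elements of list-valued entries:

# Equivalent alternative: str.len returns the length of each list.
df['sents_length'] = df['tokenized_sents'].str.len()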

How do we do this when there are multiple rows in the dataframe? – eclairs


@eclairs, what do you mean? – Gregg


Getting this error message when trying to tokenize: – eclairs


pandas.Series.apply is faster than pandas.DataFrame.apply:

import time

import pandas as pd
import nltk

df = pd.read_csv("/path/to/file.csv")

# Series.apply: tokenize the 'verbatim' column directly.
start = time.time()
df["unigrams"] = df["verbatim"].apply(nltk.word_tokenize)
print("series.apply", time.time() - start)

# DataFrame.apply: the same tokenization via a row-wise lambda.
start = time.time()
df["unigrams2"] = df.apply(lambda row: nltk.word_tokenize(row["verbatim"]), axis=1)
print("dataframe.apply", time.time() - start)

On a sample 125 MB csv file, series.apply is faster:

series.apply 144.428858995

dataframe.apply 201.884778976

Edit: You might be thinking that the dataframe df is larger in size after series.apply(nltk.word_tokenize), which could affect the runtime of the next operation, dataframe.apply(nltk.word_tokenize).

Pandas is optimized for such a scenario: I got a similar runtime of about 200s when running dataframe.apply(nltk.word_tokenize) on its own.
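To make that check explicit, each variant can be timed on a freshly loaded dataframe, so the two measurements cannot influence each other (a sketch, not from the original answer; it assumes the same file and 'verbatim' column as above):

import time

import pandas as pd
import nltk

variants = [
    ("series.apply", lambda d: d["verbatim"].apply(nltk.word_tokenize)),
    ("dataframe.apply", lambda d: d.apply(lambda row: nltk.word_tokenize(row["verbatim"]), axis=1)),
]

for label, tokenize in variants:
    fresh = pd.read_csv("/path/to/file.csv")  # fresh copy, no leftover token columns
    start = time.time()
    fresh["unigrams"] = tokenize(fresh)
    print(label, time.time() - start)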
