2016-02-09

Computing TF-IDF in PySpark

I have the following RDD (a sample) of first/last names:

names_rdd.take(3) 
[u'Daryll Dickenson', u'Dat Naijaboi', u'Duc Dung Lam'] 

and I am trying to compute the TF-IDF:

from pyspark.mllib.feature import HashingTF, IDF

hashingTF = HashingTF()
tf_names = hashingTF.transform(names_rdd)
tf_names.cache()
idf_names = IDF().fit(tf_names)
tfidf_names = idf_names.transform(tf_names)

I don't understand why tf_names.take(3) gives these results:

[SparseVector(1048576, {60275: 1.0, 134386: 1.0, 145380: 1.0, 274465: 1.0, 441832: 1.0, 579064: 1.0, 590058: 1.0, 664173: 2.0, 812399: 2.0, 845381: 2.0, 886510: 1.0, 897504: 1.0, 1045730: 1.0}), 
SparseVector(1048576, {208501: 1.0, 274465: 1.0, 441832: 2.0, 515947: 1.0, 537935: 1.0, 845381: 1.0, 886510: 1.0, 897504: 3.0, 971619: 1.0}), 
SparseVector(1048576, {274465: 2.0, 282612: 2.0, 293606: 1.0, 389709: 1.0, 738284: 1.0, 812399: 1.0, 845381: 2.0, 897504: 1.0, 1045730: 1.0})] 

Shouldn't each row have two values, for example something like this:

[SparseVector(1048576, {60275: 1.0, 134386: 1.0}), 
SparseVector(1048576, {208501: 1.0, 274465: 1.0}), 
SparseVector(1048576, {274365: 2.0, 282612: 2.0})] 

Answer


What I was doing wrong was that I needed to split each row into words and pass a list of tokens. HashingTF treats each document as an iterable of terms, and iterating over a raw string yields its individual characters, so every character was being hashed as a separate term. Something like this:

def split_name(name): 
    list_name = name.split(' ') 
    list_name = [word.strip() for word in list_name] 
    return list_name 

names = names_rdd.map(split_name)

hashingTF = HashingTF() 
tf_names = hashingTF.transform(names)  # transform the tokenized RDD, not names_rdd 
        . 
        . 
        .
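The symptom in the question can be reproduced without Spark. The snippet below is a toy stand-in (`toy_hashing_tf` is a simplified assumption, not pyspark's actual implementation, and `crc32` is only an illustrative hash) showing why passing raw strings produces one feature per character:

```python
from collections import Counter
from zlib import crc32

def toy_hashing_tf(doc, num_features=1048576):
    """Toy stand-in for HashingTF.transform on a single document.

    Like HashingTF, it obtains terms by iterating over ``doc`` and maps
    each term to a bucket. crc32 is purely illustrative; pyspark hashes
    terms differently internally.
    """
    return Counter(crc32(str(term).encode("utf-8")) % num_features
                   for term in doc)

raw = "Daryll Dickenson"   # iterating a string yields its characters
tokens = raw.split(" ")    # iterating a list yields whole words

print(sum(toy_hashing_tf(raw).values()))     # 16 terms: one per character
print(sum(toy_hashing_tf(tokens).values()))  # 2 terms: one per word
print(len(toy_hashing_tf(raw)))              # up to 13 buckets, one per
                                             # distinct character
```

With raw strings there is one bucket per distinct character (the first name above has 13 distinct characters, matching the 13 entries of the first SparseVector in the question); after tokenizing, each name maps to one entry per distinct word, which is the shape the question expected.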