Mapping tokenized words between two DataFrames

I am starting my journey with PySpark and I am stuck at one point. For example, I have this code (which I took from https://spark.apache.org/docs/2.1.0/ml-features.html) using a tokenizer:

from pyspark.ml.feature import Tokenizer, RegexTokenizer 
from pyspark.sql.functions import col, udf 
from pyspark.sql.types import IntegerType 

sentenceDataFrame = spark.createDataFrame([ 
    (0, "Hi I heard about Spark"), 
    (1, "I wish Java could use case classes"), 
    (2, "Logistic,regression,models,are,neat") 
], ["id", "sentence"]) 

tokenizer = Tokenizer(inputCol="sentence", outputCol="words") 

regexTokenizer = RegexTokenizer(inputCol="sentence", outputCol="words", pattern="\\W") 
# alternatively, pattern="\\w+", gaps(False) 

countTokens = udf(lambda words: len(words), IntegerType()) 

tokenized = tokenizer.transform(sentenceDataFrame) 
tokenized.select("sentence", "words")\ 
    .withColumn("tokens", countTokens(col("words"))).show(truncate=False) 

regexTokenized = regexTokenizer.transform(sentenceDataFrame) 
regexTokenized.select("sentence", "words") \ 
    .withColumn("tokens", countTokens(col("words"))).show(truncate=False) 

And I added a second DataFrame like this:

test = sqlContext.createDataFrame([ 
    (0, "spark"), 
    (1, "java"), 
    (2, "i") 
], ["id", "word"]) 

The output is:

+---+-----------------------------------+------------------------------------------+------+
|id |sentence                           |words                                     |tokens|
+---+-----------------------------------+------------------------------------------+------+
|0  |Hi I heard about Spark             |[hi, i, heard, about, spark]              |5     |
|1  |I wish Java could use case classes |[i, wish, java, could, use, case, classes]|7     |
|2  |Logistic,regression,models,are,neat|[logistic, regression, models, are, neat] |5     |
+---+-----------------------------------+------------------------------------------+------+

Is it possible to achieve something like this: [id from "test", id from "regexTokenized"]

2, 0 
2, 1 
1, 1 
0, 1 

That is, where a tokenized word from "regexTokenized" can be mapped to a word in "test", can I get the list of matching IDs from both datasets? Or should I take a different approach?

Thanks in advance for any help :)

Answer

Use `explode` on the tokenized words, then `join` on the word column:

from pyspark.sql.functions import explode

(testTokenized.alias("test")
    .select("id", explode("words").alias("word"))
    .join(
        trainTokenized.select("id", explode("words").alias("word")).alias("train"),
        "word"))