pyspark - combining the sets from 2 columns

I have a Spark dataframe with two columns produced by the collect_set function, and I would like to combine the sets from these two columns into a single column. How should I do this? Both columns hold sets of strings.

For example, I have two columns formed by calling collect_set:

Fruits                | Meat
[Apple, Orange, Pear] | [Beef, Chicken, Pork]

How do I turn this into:

Food
[Apple, Orange, Pear, Beef, Chicken, Pork]

Thank you very much in advance for your help.
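For reference, two set-valued columns like these could be built roughly as follows. This is only a sketch: the underlying rows and the grouping key id are assumptions added for illustration, not part of the original question.

from pyspark.sql import SparkSession
from pyspark.sql.functions import collect_set

# Hypothetical input rows; "id" is an assumed grouping key.
spark = SparkSession.builder.getOrCreate()
raw = spark.createDataFrame(
    [(1, "Apple", "Beef"), (1, "Orange", "Chicken"), (1, "Pear", "Pork")],
    ["id", "fruit", "meat"])

# collect_set aggregates the values of each column into a set per group.
df = raw.groupBy("id").agg(
    collect_set("fruit").alias("Fruits"),
    collect_set("meat").alias("Meat"))
df.show(truncate=False)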

+1

Please provide more information, for example the structure of the dataframe with an example –

Answers

0

Assuming df is

+--------------------+--------------------+ 
|    Fruits|    Meat| 
+--------------------+--------------------+ 
|[Pear, Orange, Ap...|[Chicken, Pork, B...| 
+--------------------+--------------------+ 

then

import itertools

# Chain the two array columns of each row into one flat list,
# then collect the results back to the driver.
df.rdd.map(lambda x: list(itertools.chain(x.Fruits, x.Meat))).collect()

This combines Fruits & Meat into a single list, i.e.

[[u'Pear', u'Orange', u'Apple', u'Chicken', u'Pork', u'Beef']] 
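Note that this returns a plain Python list on the driver rather than a new column. If the merged values should stay in a DataFrame, one option is to convert the mapped RDD back; a minimal sketch, assuming an active SparkSession:

import itertools

# Sketch only: wrap each merged list in a 1-tuple so it becomes a one-column row.
merged = (df.rdd
            .map(lambda x: (list(itertools.chain(x.Fruits, x.Meat)),))
            .toDF(["Food"]))
merged.show(truncate=False)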


Hope this helps!

1

Given that you have a dataframe as

+---------------------+---------------------+ 
|Fruits    |Meat     | 
+---------------------+---------------------+ 
|[Pear, Orange, Apple]|[Chicken, Pork, Beef]| 
+---------------------+---------------------+ 

You can write a udf function to merge the sets from the two columns into one.

import org.apache.spark.sql.functions._
import scala.collection.mutable

// Concatenate the two array columns into one.
def mergeCols = udf((fruits: mutable.WrappedArray[String], meat: mutable.WrappedArray[String]) => fruits ++ meat)

and then call the udf function

df.withColumn("Food", mergeCols(col("Fruits"), col("Meat"))).show(false) 

You should get your desired final dataframe

+---------------------+---------------------+------------------------------------------+ 
|Fruits    |Meat     |Food          | 
+---------------------+---------------------+------------------------------------------+ 
|[Pear, Orange, Apple]|[Chicken, Pork, Beef]|[Pear, Orange, Apple, Chicken, Pork, Beef]| 
+---------------------+---------------------+------------------------------------------+ 
+0

Is this with python? I can't seem to find mutable.WrappedArray – soulless

+0

This is all scala :) –
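Since the question is tagged pyspark, a rough PySpark counterpart of the Scala udf above might look like the following; this is a sketch under the assumption that both columns are array<string>, and the name merge_cols is illustrative:

from pyspark.sql.functions import col, udf
from pyspark.sql.types import ArrayType, StringType

# Concatenate the two array<string> columns element lists into one.
merge_cols = udf(lambda fruits, meat: fruits + meat, ArrayType(StringType()))

df.withColumn("Food", merge_cols(col("Fruits"), col("Meat"))).show(truncate=False)

On newer Spark versions (2.4+), pyspark.sql.functions.concat can also concatenate array columns directly, which avoids the udf.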