
Using pyspark or SparkR (preferably both), how can I get the intersection of two DataFrame columns? For example, in SparkR I have the following DataFrames:

newHires <- data.frame(name = c("Thomas", "George", "George", "John"), 
         surname = c("Smith", "Williams", "Brown", "Taylor")) 
salesTeam <- data.frame(name = c("Lucas", "Bill", "George"), 
         surname = c("Martin", "Clark", "Williams")) 
newHiresDF <- createDataFrame(newHires) 
salesTeamDF <- createDataFrame(salesTeam) 

#Intersect works for the entire DataFrames 
newSalesHire <- intersect(newHiresDF, salesTeamDF) 
head(newSalesHire) 

     name surname 
    1 George Williams 

#Intersect does not work for single columns 
newSalesHire <- intersect(newHiresDF$name, salesTeamDF$name) 
head(newSalesHire) 

    Error in as.vector(y) : no method for coercing this S4 class to a vector 

How can I make intersect work on single columns?

Works fine in pyspark: `spark.createDataFrame(["a", "b", "x"], StringType()).intersect(spark.createDataFrame(["z", "y", "x"], StringType()))`

Answer


The intersect function requires two Spark DataFrames. You can use the select function to project each DataFrame down to the column you want first.

In SparkR:

newSalesHire <- intersect(select(newHiresDF, 'name'), select(salesTeamDF,'name')) 

In pyspark:

newSalesHire = newHiresDF.select('name').intersect(salesTeamDF.select('name'))