
Spark SQL is quite clear to me. However, I am just getting started with Spark's RDD API. As spark apply function to columns in parallel points out, this should allow me to get rid of the slow shuffling by translating the following Spark SQL into the RDD API:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{coalesce, col, count, lit, mean, min}

// per-column target statistics; "cnt_foo_eq_1" and this.target are assumed to exist in the enclosing class
def handleBias(df: DataFrame, colName: String, target: String = this.target) = {
  val w1 = Window.partitionBy(colName)
  val w2 = Window.partitionBy(colName, target)

  df.withColumn("cnt_group", count("*").over(w2))
    .withColumn("pre2_" + colName, mean(target).over(w1))
    .withColumn("pre_" + colName, coalesce(min(col("cnt_group") / col("cnt_foo_eq_1")).over(w1), lit(0D)))
    .drop("cnt_group")
}

To get rid of the slow shuffling, in pseudo code: df foreach column (handleBias(column)), so that only minimal data frames are loaded up (one literal rendering of this loop is sketched after the sample data below):

val input = Seq(
    (0, "A", "B", "C", "D"), 
    (1, "A", "B", "C", "D"), 
    (0, "d", "a", "jkl", "d"), 
    (0, "d", "g", "C", "D"), 
    (1, "A", "d", "t", "k"), 
    (1, "d", "c", "C", "D"), 
    (1, "c", "B", "C", "D") 
) 
    val inputDf = input.toDF("TARGET", "col1", "col2", "col3TooMany", "col4") 
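
One literal rendering of the pseudo code above against the existing DataFrame API (before any RDD conversion) might look like the following sketch; the column list and the use of foldLeft are illustrative, assuming handleBias as defined above:

val featureCols = Seq("col1", "col2", "col3TooMany", "col4")
// apply handleBias once per feature column, threading the DataFrame through each step
val withBias = featureCols.foldLeft(inputDf)((df, c) => handleBias(df, c, "TARGET"))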

The naive conversion to the RDD API, however, does not map correctly:

val rdd1_inputDf = inputDf.rdd.flatMap { x => (0 until x.size).map(idx => (idx, x(idx))) }
rdd1_inputDf.toDF.show

It fails with:

java.lang.ClassNotFoundException: scala.Any

A minimal example of the problem outlined in this question can be found at https://github.com/geoHeil/sparkContrastCoding, specifically in https://github.com/geoHeil/sparkContrastCoding/blob/master/src/main/scala/ColumnParallel.scala.

Answer


When you call .rdd on a DataFrame you get an RDD[Row], which is not strongly typed. If you want to be able to map over its elements, you will need to do quite a bit of pattern matching:

scala> val input = Seq(
    |  (0, "A", "B", "C", "D"), 
    |  (1, "A", "B", "C", "D"), 
    |  (0, "d", "a", "jkl", "d"), 
    |  (0, "d", "g", "C", "D"), 
    |  (1, "A", "d", "t", "k"), 
    |  (1, "d", "c", "C", "D"), 
    |  (1, "c", "B", "C", "D") 
    | ) 
input: Seq[(Int, String, String, String, String)] = List((0,A,B,C,D), (1,A,B,C,D), (0,d,a,jkl,d), (0,d,g,C,D), (1,A,d,t,k), (1,d,c,C,D), (1,c,B,C,D)) 

scala> val inputDf = input.toDF("TARGET", "col1", "col2", "col3TooMany", "col4") 
inputDf: org.apache.spark.sql.DataFrame = [TARGET: int, col1: string ... 3 more fields] 

scala> import org.apache.spark.sql.Row 
import org.apache.spark.sql.Row 

scala> val rowRDD = inputDf.rdd 
rowRDD: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[3] at rdd at <console>:27 

scala> val typedRDD = rowRDD.map{case Row(a: Int, b: String, c: String, d: String, e: String) => (a,b,c,d,e)} 
typedRDD: org.apache.spark.rdd.RDD[(Int, String, String, String, String)] = MapPartitionsRDD[20] at map at <console>:29 

scala> typedRDD.keyBy(_._1).groupByKey.foreach{println} 
(0,CompactBuffer((A,B,C,D), (d,a,jkl,d), (d,g,C,D))) 
(1,CompactBuffer((A,B,C,D), (A,d,t,k), (d,c,C,D), (c,B,C,D))) 

Otherwise, you can use a typed Dataset:

scala> val ds = input.toDS 
ds: org.apache.spark.sql.Dataset[(Int, String, String, String, String)] = [_1: int, _2: string ... 3 more fields] 

scala> ds.rdd 
res2: org.apache.spark.rdd.RDD[(Int, String, String, String, String)] = MapPartitionsRDD[8] at rdd at <console>:30 

scala> ds.rdd.keyBy(_._1).groupByKey.foreach{println} 
(0,CompactBuffer((0,A,B,C,D), (0,d,a,jkl,d), (0,d,g,C,D))) 
(1,CompactBuffer((1,A,B,C,D), (1,A,d,t,k), (1,d,c,C,D), (1,c,B,C,D))) 

As I want to use this in an ml.Pipeline, and the output of a pipeline step is a DataFrame where the schema is "lost", I will need to use pattern matching, is that correct? But with a lot of columns, is there a way to "infer" them (a partial schema)? –


Yes, the DF => RDD conversion unfortunately does not use the schema at all (and I don't think there is a good way to force it to). However, take a look at my new Dataset example: there is no need to go through an intermediate DataFrame, and the Dataset seems to infer the types just fine (in Spark 2.0, I believe anything you can do with a DF can also be done with a DS). –


@GeorgHeiler (not sure whether you were notified of the above ^^^^) –
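
Following up on the comment thread: in Spark 2.x a DataFrame coming out of a pipeline stage can usually be turned back into a typed Dataset with as[T]. A minimal sketch, assuming a hypothetical case class LabeledRow that matches the example columns and a SparkSession named spark:

case class LabeledRow(TARGET: Int, col1: String, col2: String, col3TooMany: String, col4: String)

import spark.implicits._

// recover a typed Dataset from the untyped DataFrame, then work with named fields again
val typedDs = inputDf.as[LabeledRow]
typedDs.rdd.keyBy(_.TARGET).groupByKey.foreach(println)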
