I have this input DataFrame, and after a groupByKey or groupBy I want to form a list from its columns.

input_df:

| C1 | C2 | C3         |
| -- | -- | ---------- |
| A  | 1  | 12/06/2012 |
| A  | 2  | 13/06/2012 |
| B  | 3  | 12/06/2012 |
| B  | 4  | 17/06/2012 |
| C  | 5  | 14/06/2012 |
| -- | -- | ---------- |

After the transformation, I would like a DataFrame like this: grouped by C1, with a new column C4 which is the list of (C2, C3) couples.

output_df:

| C1 | C4                               |
| -- | -------------------------------- |
| A  | (1, 12/06/2012), (2, 13/06/2012) |
| B  | (3, 12/06/2012), (4, 17/06/2012) |
| C  | (5, 14/06/2012)                  |
| -- | -------------------------------- |

When I try this approach:

val output_df = input_df.map(x => (x(0), (x(1), x(2)))).groupByKey() 

I get this result:

(A,CompactBuffer((1, 12/06/2012), (2, 13/06/2012)))  
(B,CompactBuffer((3, 12/06/2012), (4, 17/06/2012))) 
(C,CompactBuffer((5, 14/06/2012))) 

But I don't know how to convert this back into a DataFrame, or whether this is even a good way to do it.
Any suggestion is welcome, even a completely different approach.
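
For reference, a minimal sketch of the direct conversion, assuming Spark 1.x (where DataFrame.map returns an RDD; on 2.x use input_df.rdd) and that import sqlContext.implicits._ is in scope. Indexing a Row returns Any, which is why toDF does not apply to the result above until the fields are cast to concrete types:

// Sketch under the assumptions above, not the asker's original code:
// cast each Row field so that toDF can derive a schema for the grouped result.
val output_df = input_df
  .map(r => (r.getString(0), (r.getInt(1), r.getString(2)))) // concrete types, not Any
  .groupByKey()
  .mapValues(_.toSeq) // materialize each group as a Seq of tuples, which toDF can encode
  .toDF("C1", "C4")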

Answer

// Please try this
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.Row

val conf = new SparkConf().setAppName("groupBy").setMaster("local[*]")
val sc = new SparkContext(conf)
sc.setLogLevel("WARN")
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._

// Build the sample input as an RDD of (C1, C2, C3) triples
val rdd = sc.parallelize(
    Seq(("A",1,"12/06/2012"),("A",2,"13/06/2012"),("B",3,"12/06/2012"),("B",4,"17/06/2012"),("C",5,"14/06/2012")))

// Key each triple by its first field, group by that key, and
// materialize each group as an array so that toDF can derive a schema
val v1 = rdd.map(x => (x._1, x))
val v2 = v1.groupByKey()
val v3 = v2.mapValues(v => v.toArray)

val df2 = v3.toDF("aKey","theValues")
df2.printSchema()

// Each element of "theValues" comes back as a Row (a struct of the triple)
val first = df2.first
println (first)

println (first.getString(0))

val values = first.getSeq[Row](1)

val firstArray = values(0)

println (firstArray.getString(0)) // B
println (firstArray.getInt(1))    // 3
println (firstArray.getString(2)) // 12/06/2012
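
For comparison, the same grouping can stay entirely in the DataFrame API, with no RDD round trip. A minimal sketch, assuming Spark 2.x where collect_list and struct are available in org.apache.spark.sql.functions (the app name here is illustrative):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{collect_list, struct}

val spark = SparkSession.builder().appName("collectListSketch").master("local[*]").getOrCreate()
import spark.implicits._

val input_df = Seq(
  ("A", 1, "12/06/2012"), ("A", 2, "13/06/2012"),
  ("B", 3, "12/06/2012"), ("B", 4, "17/06/2012"),
  ("C", 5, "14/06/2012")).toDF("C1", "C2", "C3")

// collect_list(struct(...)) gathers the (C2, C3) couples of each C1 group
// into a single array column, matching the desired C4
val output_df = input_df
  .groupBy("C1")
  .agg(collect_list(struct($"C2", $"C3")).as("C4"))

output_df.show(truncate = false)

Note that collect_list, like groupByKey, keeps every value of a group in memory at once, so both approaches assume each group fits on a single executor.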