How to unwrap multiple keys in a Spark Dataset

I have a Dataset with the following structure:

case class Person(age: Int, gender: String, salary: Double)

I want to determine the average salary by gender and age, so I group the DS by those two keys. I ran into two main problems: first, both keys end up packed together in a single key column, but I want to keep them in two separate columns; second, the aggregated column gets a very long name and I can't figure out how to rename it (apparently as and alias won't work). All of this using the DS API.
import org.apache.spark.sql.expressions.scalalang.typed

val df = sc.parallelize(List(
  Person(27, "male", 100000.00),
  Person(27, "male", 120000.00),
  Person(26, "male", 95000.00),
  Person(31, "female", 89000.00),
  Person(51, "female", 250000.00),
  Person(51, "female", 120000.00)
)).toDF.as[Person]

df.groupByKey(p => (p.gender, p.age)).agg(typed.avg(_.salary)).show()
+-----------+------------------------------------------------------------------------------------------------+
| key| TypedAverage(line2503618a50834b67a4b132d1b8d2310b12.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$Person)|
+-----------+------------------------------------------------------------------------------------------------+
|[female,31]| 89000.0...
|[female,51]| 185000.0...
| [male,27]| 110000.0...
| [male,26]| 95000.0...
+-----------+------------------------------------------------------------------------------------------------+
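For reference, one possible approach (a sketch under my own assumptions, not a verified answer) is to rename the typed aggregation with TypedColumn.name and then flatten the key tuple with map; the column names gender, age, and avg_salary below are my own choices:

```scala
import org.apache.spark.sql.expressions.scalalang.typed

// Name the aggregated column via TypedColumn.name (as/alias return a plain
// Column, which agg on a typed grouping does not accept), then unpack the
// (gender, age) key tuple into separate columns with map + toDF.
val result = df
  .groupByKey(p => (p.gender, p.age))
  .agg(typed.avg[Person](_.salary).name("avg_salary"))
  .map { case ((gender, age), avg) => (gender, age, avg) }
  .toDF("gender", "age", "avg_salary")

result.show()
```

The final toDF leaves the typed API, so if you need to stay in a Dataset you could instead map into a dedicated case class such as a hypothetical AvgSalary(gender: String, age: Int, avgSalary: Double).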