2016-12-14

NullPointerException - Apache Spark Dataset left outer join

I am learning Spark Datasets (Spark 2.0.1). The left outer join below throws a NullPointerException.

case class Employee(name: String, age: Int, departmentId: Int, salary: Double) 
case class Department(id: Int, depname: String) 
case class Record(name: String, age: Int, salary: Double, departmentId: Int, departmentName: String) 
val employeeDataSet = sc.parallelize(Seq(Employee("Jax", 22, 5, 100000.0),Employee("Max", 22, 1, 100000.0))).toDS() 
val departmentDataSet = sc.parallelize(Seq(Department(1, "Engineering"), Department(2, "Marketing"))).toDS() 

val averageSalaryDataset = employeeDataSet.joinWith(departmentDataSet, $"departmentId" === $"id", "left_outer") 
           .map(record => Record(record._1.name, record._1.age, record._1.salary, record._1.departmentId, record._2.depname)) 

averageSalaryDataset.show() 

16/12/14 16:48:26 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 12) java.lang.NullPointerException

This happens because the left outer join produces null for record._2, so record._2.depname throws.

How should this be handled? Thanks.

Answers


Solved this using:

val averageSalaryDataset1 = employeeDataSet
  .joinWith(departmentDataSet, $"departmentId" === $"id", "left_outer")
  .selectExpr(
    "nvl(_1.name, ' ') as name",
    "nvl(_1.age, 0) as age",
    "nvl(_1.salary, 0.0D) as salary",
    "nvl(_1.departmentId, 0) as departmentId",
    "nvl(_2.depname, ' ') as departmentName")
  .as[Record]
averageSalaryDataset1.show() 

While this may work, it is a very poor solution :O! I don't understand why the join doesn't give back Options of the case class, which would be easy to check. – Sparky
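The Option handling Sparky asks for can be done by hand inside the map: wrap the possibly-null right side of the joined tuple in Option before dereferencing it. A minimal sketch using plain Scala collections to stand in for the joinWith output (the case classes are from the question; the sample rows are illustrative):

```scala
case class Employee(name: String, age: Int, departmentId: Int, salary: Double)
case class Department(id: Int, depname: String)
case class Record(name: String, age: Int, salary: Double, departmentId: Int, departmentName: String)

// Stand-in for the left-outer joinWith result: unmatched rows carry null on the right.
val joined: Seq[(Employee, Department)] = Seq(
  (Employee("Max", 22, 1, 100000.0), Department(1, "Engineering")),
  (Employee("Jax", 22, 5, 100000.0), null)
)

val records = joined.map { case (emp, dept) =>
  // Option(null) is None, so unmatched departments fall back to the default safely.
  Record(emp.name, emp.age, emp.salary, emp.departmentId,
         Option(dept).map(_.depname).getOrElse(""))
}
```

The same map function can be passed to the joined Dataset; Option(dept) avoids both the NullPointerException and the explicit if..else.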


The null can be handled with an if..else condition:

val averageSalaryDataset = employeeDataSet
  .joinWith(departmentDataSet, $"departmentId" === $"id", "left_outer")
  .map(record => Record(record._1.name, record._1.age, record._1.salary,
    record._1.departmentId,
    if (record._2 == null) null else record._2.depname))

After the join, the resulting Dataset stores tuples. In the map operation, the right side of the tuple (_2) is null for unmatched rows, so calling record._2.depname throws the exception:

val averageSalaryDataset = employeeDataSet.joinWith(departmentDataSet, $"departmentId" === $"id", "left_outer") 

(Dataset after left join)