
I have an RDD and its data looks like the following. How do I split it in Spark?

scala> c_data 
res31: org.apache.spark.rdd.RDD[String] = /home/t_csv MapPartitionsRDD[26] at textFile at <console>:25 

scala> c_data.count() 
res29: Long = 45212                

scala> c_data.take(2).foreach(println) 
age;job;marital;education;default;balance;housing;loan;contact;day;month;duration;campaign;pdays;previous;poutcome;y 
58;management;married;tertiary;no;2143;yes;no;unknown;5;may;261;1;-1;0;unknown;no 

I want to split the data into another RDD, so I am using:

scala> val csv_data = c_data.map{x=> 
| val w = x.split(";") 
| val age = w(0) 
| val job = w(1) 
| val marital_stat = w(2) 
| val education = w(3) 
| val default = w(4) 
| val balance = w(5) 
| val housing = w(6) 
| val loan = w(7) 
| val contact = w(8) 
| val day = w(9) 
| val month = w(10) 
| val duration = w(11) 
| val campaign = w(12) 
| val pdays = w(13) 
| val previous = w(14) 
| val poutcome = w(15) 
| val Y = w(16) 
| } 

which returns:

csv_data: org.apache.spark.rdd.RDD[Unit] = MapPartitionsRDD[28] at map at <console>:27 

When I query csv_data it returns Array((),()...). How can I get the first row as a header and the rest as data? Where am I going wrong?

Thanks in advance.


你的map函數沒有返回單元返回類型的任何東西。 –


Yes, I see it now. Thank you. – Arvind

Answer


Your map function returns Unit, so you are mapping to an RDD[Unit]. You can get a tuple of your values by changing your code to:

val csv_data = c_data.map { x =>
  val w = x.split(";")
  ...
  val Y = w(16)
  (w, age, job, marital_stat, education, default, balance, housing, loan, contact, day, month, duration, campaign, pdays, previous, poutcome, Y)
}
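
The question also asks how to treat the first row as a header, which the fix above does not cover. One common way to do that with plain RDDs, shown here only as a sketch, is to grab the first line with first() and filter it out before parsing the remaining rows:

// Sketch: separate the header line from the data rows before splitting.
val header = c_data.first()              // the "age;job;marital;...;y" line
val rows   = c_data.filter(_ != header)  // keep everything except the header
val csv_data = rows.map { x =>
  val w = x.split(";")
  (w(0), w(1), w(2), w(3), w(4), w(5), w(6), w(7), w(8),
   w(9), w(10), w(11), w(12), w(13), w(14), w(15), w(16))
}

csv_data is then an RDD of 17-element tuples, and header still holds the column names if you need them later.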

Thanks for the explanation. – Arvind