If you use DataFrames instead of RDDs, you can use filter together with boolean Column operations.

Let's assume that neither val4 nor val5 should be null.
If the CSV looks like this:
$ cat dat_1.csv
header1,header2,header3,header4,header5
val1,val2,val3,val4,val5
val1,val2,,val4,val5
val1,val2,val3,,val5
then your code would look like this:
scala> val dat_1 = spark.read.option("header", true).csv("dat_1.csv")
dat_1: org.apache.spark.sql.DataFrame = [header1: string, header2: string ... 3 more fields]
scala> dat_1.show
+-------+-------+-------+-------+-------+
|header1|header2|header3|header4|header5|
+-------+-------+-------+-------+-------+
| val1| val2| val3| val4| val5|
| val1| val2| null| val4| val5|
| val1| val2| val3| null| val5|
+-------+-------+-------+-------+-------+
scala> dat_1.filter($"header4".isNotNull && $"header5".isNotNull).show
+-------+-------+-------+-------+-------+
|header1|header2|header3|header4|header5|
+-------+-------+-------+-------+-------+
| val1| val2| val3| val4| val5|
| val1| val2| null| val4| val5|
+-------+-------+-------+-------+-------+
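As a side note (a minimal sketch, not part of the original snippet): the same filter can be expressed with the built-in DataFrameNaFunctions, which drop rows containing nulls in the given columns:

scala> // drop rows that have a null in any of the listed columns
scala> dat_1.na.drop("any", Seq("header4", "header5")).show

This produces the same result as the filter call above.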
Otherwise, if your data contains the literal string "null" instead of missing values, like this:
$ cat dat_2.csv
header1,header2,header3,header4,header5
val1,val2,val3,val4,val5
val1,val2,null,val4,val5
val1,val2,val3,null,val5
then your code should look like this:
scala> val dat_2 = spark.read.option("header", true).csv("dat_2.csv")
dat_2: org.apache.spark.sql.DataFrame = [header1: string, header2: string ... 3 more fields]
scala> dat_2.show
+-------+-------+-------+-------+-------+
|header1|header2|header3|header4|header5|
+-------+-------+-------+-------+-------+
| val1| val2| val3| val4| val5|
| val1| val2| null| val4| val5|
| val1| val2| val3| null| val5|
+-------+-------+-------+-------+-------+
scala> dat_2.filter($"header4" =!= "null" && $"header5" =!= "null").show
+-------+-------+-------+-------+-------+
|header1|header2|header3|header4|header5|
+-------+-------+-------+-------+-------+
| val1| val2| val3| val4| val5|
| val1| val2| null| val4| val5|
+-------+-------+-------+-------+-------+
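Alternatively (a sketch, assuming Spark's CSV reader), you can tell the reader to treat the literal string "null" as an actual null at load time via the nullValue option, after which the isNotNull approach from the first example works again:

scala> // parse the string "null" as a real null while reading
scala> val dat_2 = spark.read.option("header", true).option("nullValue", "null").csv("dat_2.csv")
scala> dat_2.filter($"header4".isNotNull && $"header5".isNotNull).show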
Is there any constraint that requires you to use RDDs? If not, I think you can use DataFrames; Spark's DataFrame API is best suited for working with CSV files. – Tawkir