I am doing some basic Spark work with Scala. Why doesn't the count function work with mapValues in Spark?
I want to know why the count function does not work with mapValues and map, while sum, min, and max do work when I apply them. Also, is there a reference I can consult for all the functions applicable to the Iterable[String] in groupbykeyRDD?
My code:
scala> val records = List("CHN|2", "CHN|3" , "BNG|2","BNG|65")
records: List[String] = List(CHN|2, CHN|3, BNG|2, BNG|65)
scala> val recordsRDD = sc.parallelize(records)
recordsRDD: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[119] at parallelize at <console>:23
scala> val mapRDD = recordsRDD.map(elem => elem.split("\\|"))
mapRDD: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[120] at map at <console>:25
scala> val keyvalueRDD = mapRDD.map(elem => (elem(0),elem(1)))
keyvalueRDD: org.apache.spark.rdd.RDD[(String, String)] = MapPartitionsRDD[121] at map at <console>:27
scala> val groupbykeyRDD = keyvalueRDD.groupByKey()
groupbykeyRDD: org.apache.spark.rdd.RDD[(String, Iterable[String])] = ShuffledRDD[122] at groupByKey at <console>:29
scala> groupbykeyRDD.mapValues(elem => elem.count).collect
<console>:32: error: missing arguments for method count in trait TraversableOnce;
follow this method with `_' if you want to treat it as a partially applied function
groupbykeyRDD.mapValues(elem => elem.count).collect
^
scala> groupbykeyRDD.map(elem => (elem._1 ,elem._2.count)).collect
<console>:32: error: missing arguments for method count in trait TraversableOnce;
follow this method with `_' if you want to treat it as a partially applied function
groupbykeyRDD.map(elem => (elem._1 ,elem._2.count)).collect
Expected output:
Array((CHN,2), (BNG,2))
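For reference, a minimal sketch of why the call fails and two ways to make it compile, assuming the same REPL session as above:

// In the Scala collections library, count is declared on TraversableOnce as
//   def count(p: A => Boolean): Int
// i.e. it expects a predicate, which is why the no-argument call above
// does not compile. size is the parameterless method that returns the
// number of elements, so either of these works:
groupbykeyRDD.mapValues(elem => elem.size).collect
groupbykeyRDD.mapValues(elem => elem.count(_ => true)).collect
// both yield Array((CHN,2), (BNG,2)) for the sample data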
I applied the size method based on your answer, and it worked. –
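As a side note, if only per-key counts are needed, a common alternative (a sketch, not from the original post) skips groupByKey entirely and so avoids materializing an Iterable[String] per key; it reuses keyvalueRDD from the session above:

// Map every value to 1, then sum the 1s per key with reduceByKey.
// reduceByKey combines values on the map side before the shuffle,
// so only partial counts cross the network.
keyvalueRDD.mapValues(_ => 1).reduceByKey(_ + _).collect
// yields Array((CHN,2), (BNG,2)) for the sample data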