
I am using spark-shell to experiment with Spark's HashPartitioner, and I get the following error: type HashPartitioner is not a member of org.apache.spark.sql.SparkSession.

scala> val data = sc.parallelize(List((1, 3), (2, 4), (3, 6), (3, 7))) 
data: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[0] at parallelize at <console>:24 

scala> val partitionedData = data.partitionBy(new spark.HashPartitioner(2)) 
<console>:26: error: type HashPartitioner is not a member of org.apache.spark.sql.SparkSession 
     val partitionedData = data.partitionBy(new spark.HashPartitioner(2)) 
                 ^

scala> val partitionedData = data.partitionBy(new org.apache.spark.HashPartitioner(2)) 
partitionedData: org.apache.spark.rdd.RDD[(Int, Int)] = ShuffledRDD[1] at partitionBy at <console>:26 

The second command fails, while the third one succeeds. Why does spark-shell resolve spark.HashPartitioner against org.apache.spark.sql.SparkSession instead of the org.apache.spark package?

Answer


In spark-shell, spark is the predefined SparkSession object, not the org.apache.spark package, so new spark.HashPartitioner(2) is resolved as a member of that object.
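
You can confirm what the name spark refers to by asking the REPL for its type (the :type command is part of the standard Scala shell, on which spark-shell is built):

scala> :type spark 
org.apache.spark.sql.SparkSession 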

You should either import org.apache.spark.HashPartitioner or use the fully qualified class name, for example:

import org.apache.spark.HashPartitioner 

val partitionedData = data.partitionBy(new HashPartitioner(2))
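
As a quick sanity check (res numbering and exact output may differ; glom collects each partition to the driver, so use it only on small test data), you can inspect the resulting partitioner and how the keys were hashed into the two partitions:

scala> partitionedData.partitioner 
res0: Option[org.apache.spark.Partitioner] = Some(org.apache.spark.HashPartitioner@2) 

scala> partitionedData.glom().collect() 
res1: Array[Array[(Int, Int)]] = Array(Array((2,4)), Array((1,3), (3,6), (3,7))) 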