
ClassNotFoundException when running KMeans on Spark

I am trying to submit a Spark job that uses Spark KMeans. I packaged the Scala file correctly, but whenever I submit the job I get a ClassNotFoundException. Here is my build.sbt:

name := "sparkKmeans"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.1"

and here is my Scala class:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.mllib.clustering.{KMeans, KMeansModel}
import org.apache.spark.mllib.linalg.Vectors

object sparkKmeans {
  def main(args: Array[String]) {
    // Create the Spark context with the Spark configuration.
    val sc = new SparkContext(new SparkConf().setAppName("SparkKmeans"))
    //val threshold = args(1).toInt

    // Load and parse the data. The source file is the first argument.
    val data = sc.textFile(args(0))
    val parsedData = data.map(s => Vectors.dense(s.split(' ').map(_.toDouble))).cache()

    // Cluster the data with KMeans. The number of iterations is fixed at 100
    // and the number of clusters comes from the second argument.
    val numClusters = args(1).toInt   // KMeans.train expects an Int, so convert the argument
    val numIterations = 100
    val clusters = KMeans.train(parsedData, numClusters, numIterations)

    // Evaluate the clustering by computing the Within Set Sum of Squared Errors.
    val WSSSE = clusters.computeCost(parsedData)
    println("Within Set Sum of Squared Errors = " + WSSSE)

    // Save and load the model based on the third argument.
    //clusters.save(sc, args(2))
    //val sameModel = KMeansModel.load(sc, args(2))
  }
}
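(For reference, a minimal local-mode sketch of the same pipeline is below. The local[*] master, the object name KMeansLocalCheck and the hard-coded points are only illustrative, not part of my actual job; the sketch is just a way to exercise the same MLlib calls independently of spark-submit.)

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

object KMeansLocalCheck {
  def main(args: Array[String]): Unit = {
    // Illustrative local master; the real job gets its master from spark-submit.
    val sc = new SparkContext(new SparkConf().setAppName("KMeansLocalCheck").setMaster("local[*]"))

    // Tiny in-memory dataset instead of a text file, just to exercise the API.
    val points = sc.parallelize(Seq(
      Vectors.dense(0.0, 0.0),
      Vectors.dense(0.1, 0.1),
      Vectors.dense(9.0, 9.0),
      Vectors.dense(9.1, 9.1)
    )).cache()

    // 2 clusters, 100 iterations, same call as in the real job.
    val model = KMeans.train(points, 2, 100)
    println("Within Set Sum of Squared Errors = " + model.computeCost(points))

    sc.stop()
  }
}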

I have commented out the two save/load lines at the end of sparkKmeans because I read somewhere that Spark has serialization problems with them, but the error is still the same.

and here is the error:

java.lang.ClassNotFoundException: sparkKmeans
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:278)
    at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

I submit the job with:

./bin/spark-shell --class sparkKmeans ...... 

I would appreciate any help.


The 'spark-mllib' dependency is missing from your build definition. Not to mention that using Spark 1.1 at this point is just crazy; it is currently at 1.6/2.0. – zero323


How do you package the application into a jar file? Can you write out the full command after --class? – Abhi

Answer


Thanks for your comments. I did what you said. My build.sbt file is now:

name := "sparkKmeans"

libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "1.6.1", 
"org.apache.spark" % "spark-mllib_2.10" % "1.6.1" 
) 

(I am using Scala 2.11.8 and Spark version 1.6.1, but I still get the same error.)

Regarding the other question: I package my application with

sbt compile package

and execute it with:

./bin/spark-submit --class sparkKmeans k/kmeans/target/scala-2.10/sparkkmeans_2.10-0.1-SNAPSHOT.jar '/home/meysam/spark-1.6.1/kmeans/pima.csv' 3
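(One side note, not part of the original post: the jar name sparkkmeans_2.10 shows the project is built for Scala 2.10, and spark-mllib_2.10 pins MLlib to 2.10 as well. If the build ever moves to Scala 2.11, using %% for every Spark module keeps all binary versions in step with scalaVersion. A sketch of such a build.sbt follows; the scalaVersion value is an assumption, pick whichever version your Spark distribution was built against.)

name := "sparkKmeans"

// Assumption: match this to the Scala version of your Spark distribution.
scalaVersion := "2.10.6"

libraryDependencies ++= Seq(
  // %% appends the Scala binary suffix (_2.10 / _2.11) automatically,
  // so spark-core and spark-mllib always agree with scalaVersion.
  "org.apache.spark" %% "spark-core"  % "1.6.1",
  "org.apache.spark" %% "spark-mllib" % "1.6.1"
)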