Spark submit fails with Hive

2014-12-02 · 170 views

I'm trying to get a Spark 1.1.0 program written in Scala to work, but I'm having a hard time with it. The Hive query I run is very simple:

select json, score from data 

Everything works when I run the following commands from the spark-shell (I need MYSQL_CONN on the driver classpath because I use Hive with a MySQL metastore):

bin/spark-shell --master $SPARK_URL --driver-class-path $MYSQL_CONN 

import org.apache.spark.sql.hive.HiveContext 
val sqlContext = new HiveContext(sc) 
sqlContext.sql("select json from data").map(t => t.getString(0)).take(10).foreach(println) 

I get ten rows of json just like I want. However, when I run the same thing via spark-submit as follows, I get a problem:

bin/spark-submit --master $SPARK_URL --class spark.Main --driver-class-path $MYSQL_CONN target/spark-testing-1.0-SNAPSHOT.jar 
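Since the ClassNotFoundException below is for a class inside my own jar, one thing worth trying (a sketch, not a confirmed fix) is to also pass the application jar through `--jars`, so it is explicitly added to the executor classpath in addition to being the primary jar:

```shell
# Same variables as above; the extra --jars entry duplicates the primary
# jar on purpose, to rule out a jar-distribution problem on the executors.
bin/spark-submit \
  --master $SPARK_URL \
  --class spark.Main \
  --driver-class-path $MYSQL_CONN \
  --jars target/spark-testing-1.0-SNAPSHOT.jar \
  target/spark-testing-1.0-SNAPSHOT.jar
```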

Here is my entire Spark program:

package spark 

import org.apache.spark.sql.hive.HiveContext 
import org.apache.spark.{SparkContext, SparkConf} 

object Main {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("Gathering Data"))
    val sqlContext = new HiveContext(sc)
    sqlContext.sql("select json from data").map(t => t.getString(0)).take(10).foreach(println)
  }
}
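For reference, here is a variant of the program that registers the application jar by hand via `SparkConf.setJars`. The jar path is an assumption taken from the spark-submit command above; spark-submit normally distributes the primary jar itself, so this is only a sketch to rule out a jar-distribution problem:

```scala
package spark

import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.{SparkContext, SparkConf}

object Main {
  def main(args: Array[String]) {
    // Explicitly ship the application jar to the executors so that
    // spark.Main$$anonfun$main$1 (the map closure) can be loaded there.
    val conf = new SparkConf()
      .setAppName("Gathering Data")
      .setJars(Seq("target/spark-testing-1.0-SNAPSHOT.jar")) // assumed path
    val sc = new SparkContext(conf)
    val sqlContext = new HiveContext(sc)
    sqlContext.sql("select json from data")
      .map(t => t.getString(0))
      .take(10)
      .foreach(println)
  }
}
```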

and here is the resulting stack trace:

14/12/01 21:30:04 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, match1hd17.dc1): java.lang.ClassNotFoundException: spark.Main$$anonfun$main$1 
     java.net.URLClassLoader$1.run(URLClassLoader.java:200) 
     java.security.AccessController.doPrivileged(Native Method) 
     java.net.URLClassLoader.findClass(URLClassLoader.java:188) 
     java.lang.ClassLoader.loadClass(ClassLoader.java:307) 
     java.lang.ClassLoader.loadClass(ClassLoader.java:252) 
     java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) 
     java.lang.Class.forName0(Native Method) 
     java.lang.Class.forName(Class.java:247) 
     org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:59) 
     java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1575) 
     java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1496) 
     java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1732) 
     java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329) 
     java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1947) 
     java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871) 
     java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753) 
     java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329) 
     java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1947) 
     java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871) 
     java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753) 
     java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329) 
     java.io.ObjectInputStream.readObject(ObjectInputStream.java:351) 
     org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62) 
     org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87) 
     org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57) 
     org.apache.spark.scheduler.Task.run(Task.scala:54) 
     org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177) 
     java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) 
     java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) 
     java.lang.Thread.run(Thread.java:619) 
14/12/01 21:30:10 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job 
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, match1hd12.dc1m): java.lang.ClassNotFoundException: spark.Main$$anonfun$main$1 
     java.net.URLClassLoader$1.run(URLClassLoader.java:200) 
     java.security.AccessController.doPrivileged(Native Method) 
     java.net.URLClassLoader.findClass(URLClassLoader.java:188) 
     java.lang.ClassLoader.loadClass(ClassLoader.java:307) 
     java.lang.ClassLoader.loadClass(ClassLoader.java:252) 
     java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) 
     java.lang.Class.forName0(Native Method) 
     java.lang.Class.forName(Class.java:247) 
     org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:59) 
     java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1575) 
     java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1496) 
     java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1732) 
     java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329) 
     java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1947) 
     java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871) 
     java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753) 
     java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329) 
     java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1947) 
     java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871) 
     java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753) 
     java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329) 
     java.io.ObjectInputStream.readObject(ObjectInputStream.java:351) 
     org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62) 
     org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87) 
     org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57) 
     org.apache.spark.scheduler.Task.run(Task.scala:54) 
     org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177) 
     java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) 
     java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) 
     java.lang.Thread.run(Thread.java:619) 
Driver stacktrace: 
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688) 
    at scala.Option.foreach(Option.scala:236) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391) 
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498) 
    at akka.actor.ActorCell.invoke(ActorCell.scala:456) 
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237) 
    at akka.dispatch.Mailbox.run(Mailbox.scala:219) 
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386) 
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) 
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) 

I've already spent hours on this, and I have no idea why it only works in spark-shell. I looked at the stderr output on the individual nodes, and they show the same cryptic error message. If anyone can shed light on why this works from spark-shell but not from spark-submit, that would be great.

Thanks

UPDATE:

I've been playing around with this, and the following program works fine:

package spark 

import org.apache.spark.sql.hive.HiveContext 
import org.apache.spark.{SparkContext, SparkConf} 

object Main {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("Gathering Data"))
    val sqlContext = new HiveContext(sc)
    sqlContext.sql("select json from data").take(10).map(t => t.getString(0)).foreach(println)
  }
}

Obviously this won't work for a large amount of data, but it shows that the problem seems to be in the SchemaRDD.map() function.
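A side-by-side sketch of why reordering the calls avoids the error: `take(10)` collects an `Array[Row]` to the driver, so a `map` placed after it is a plain Scala collection operation and no closure ever has to be deserialized on an executor:

```scala
// Variant 1 (fails under spark-submit here): the closure below compiles to
// spark.Main$$anonfun$main$1, which every executor must be able to load.
sqlContext.sql("select json from data")
  .map(t => t.getString(0))   // RDD transformation, runs on executors
  .take(10)
  .foreach(println)

// Variant 2 (works): take(10) first pulls an Array[Row] to the driver,
// so the map afterwards runs locally and nothing is serialized — which
// is consistent with the application jar being missing on the executors.
sqlContext.sql("select json from data")
  .take(10)                   // Array[Row] on the driver
  .map(t => t.getString(0))   // local collection map, driver only
  .foreach(println)
```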

Answer


It seems there is a problem with the Spark context initialization.

Please try the code below:

val sparkConf = new SparkConf().setAppName("Gathering Data") 
val sc = new SparkContext(sparkConf) 

That didn't change the error message at all. – Jon 2014-12-02 06:24:18


I ran into a similar error: it executed fine in spark-shell but not from spark-submit. Later I found that the Spark context was not configured correctly, whereas in the shell it is initialized automatically with defaults. – 2014-12-03 05:24:03


In the error message I see a ClassNotFoundException, so my guess is there might be a compilation problem that leads to it. In any case, I'll try the code on my cluster and let you know. – 2014-12-03 05:38:03