Getting SparkUncaughtExceptionHandler when running spark-perf

We have set up a distributed Spark cluster (version 1.5.0) and are trying to run spark-perf. However, we get the following error and have no idea how to fix it:

15/10/05 20:14:37 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[appclient-registration-retry-thread,5,main] 
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@… rejected from java.util.concurrent.ThreadPoolExecutor@…[Running, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0] 
     at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048) 
     at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) 
     at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372) 
     at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110) 
     at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:96) 
     at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:95) 
     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) 
     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) 
     at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
     at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108) 
     at scala.collection.TraversableLike$class.map(TraversableLike.scala:244) 
     at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108) 
     at org.apache.spark.deploy.client.AppClient$ClientEndpoint.tryRegisterAllMasters(AppClient.scala:95) 
     at org.apache.spark.deploy.client.AppClient$ClientEndpoint.org$apache$spark$deploy$client$AppClient$ClientEndpoint$$registerWithMaster(AppClient.scala:121) 
     at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:132) 
     at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1119) 
     at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:124) 
     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) 
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) 
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
     at java.lang.Thread.run(Thread.java:745) 
15/10/05 20:14:37 INFO DiskBlockManager: Shutdown hook called 
15/10/05 20:14:37 INFO ShutdownHookManager: Shutdown hook called 
15/10/05 20:14:37 INFO ShutdownHookManager: Deleting directory /tmp/spark-c5a4a63b-3dc5-4c52-bd2b-e6df22a0c19f 
I have a similar problem. Were you able to solve it? – vyakhir

Answers

Please check the variable SPARK_CLUSTER_URL in config/config.py:

SPARK_CLUSTER_URL = "spark://Master_Ip:7077" 

PS: Master_Ip is the master's IP address, not its hostname.
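
As a quick way to confirm the master address works, here is a minimal sketch of a standalone connectivity check (the IP 10.0.0.1 and the MasterCheck name are hypothetical placeholders; substitute your master's actual IP):

import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: connect to the standalone master by IP, print what we
// actually connected to, then shut down. 10.0.0.1 is a placeholder.
object MasterCheck {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("master-check")
      .setMaster("spark://10.0.0.1:7077") // IP address, not hostname
    val sc = new SparkContext(conf)
    println(s"Connected to ${sc.master} (Spark ${sc.version})")
    sc.stop()
  }
}

If this fails with the same RejectedExecutionException, the address or port is wrong or the master is unreachable.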


You have not entered the Spark master URL correctly; this could be as simple as a capitalization error. Make sure the hibench.spark.master property in the file conf/99-user_defined_properties.conf points at the right master URL. You should be able to connect to spark-shell by running the following command:

MASTER=<YOUR-SPARK-MASTER-URL-HERE> bin/spark-shell 

In Spark's standalone mode, this URL should look something like:

spark://<master-machine-IP>:7077 

In general, prefer the master node's IP address over the alphabetic hostname that the Spark master reports, e.g. spark://Macs-MacBook-Pro.local:7077.
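
Once the shell comes up, a quick sanity check (assuming the default REPL binding of the SparkContext as sc) is:

// Inside spark-shell: verify the shell actually attached to the intended master
sc.master    // e.g. res0: String = spark://192.168.1.10:7077
sc.version   // should report 1.5.0 for this cluster

Here 192.168.1.10 is just an illustrative address.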


I replaced the master IP in the spark-submit command with the hostname, and that resolved this error:

--master "spark://hostname:7077"
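
For example, a full invocation might look like this (the example class and jar are hypothetical placeholders; the --master flag is the relevant part):

spark-submit --class org.apache.spark.examples.SparkPi \
  --master "spark://hostname:7077" \
  lib/spark-examples-1.5.0-hadoop2.6.0.jar 100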