
Spark runtime error running an MLlib API with sbt

I want to run the code below using sbt package and sbt run, but I hit a runtime error that makes no sense to me. The same code works fine in the spark-shell. The error occurs when the computeSVD line executes; if that line is commented out, the program runs fine. I have seen similar problems with other MLlib APIs. It would be great if someone could offer some insight into the problem.

Code:

package com.sracr.test

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.{Matrix, SingularValueDecomposition, Vectors}
import org.apache.spark.mllib.linalg.distributed.RowMatrix

object Test {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf().setAppName("MySparkApp").setMaster("spark://127.0.0.1:7077")
    val ctx = new SparkContext(conf)

    // Ship the application jar to the executors of the standalone cluster.
    ctx.addJar("target/scala-2.11/spark-test_2.11-1.0.jar")

    println("Hello, This is a start!")

    // Three 5-dimensional rows, mixing sparse and dense vectors.
    val data = List(
      Vectors.sparse(5, Seq((1, 1.0), (3, 7.0))),
      Vectors.dense(2.0, 0.0, 3.0, 4.0, 5.0),
      Vectors.dense(4.0, 0.0, 0.0, 6.0, 7.0))

    val dataRDD = ctx.parallelize(data)
    val mat: RowMatrix = new RowMatrix(dataRDD)

    println(mat.numCols())

    // Compute the top-2 singular values/vectors; this is the line that fails.
    val svd: SingularValueDecomposition[RowMatrix, Matrix] = mat.computeSVD(2, computeU = true)

    println(mat.numRows())

    println("It Works!!!!!")

    ctx.stop()
  }
}
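
To isolate the failure from the standalone cluster and the jar shipping, the same computation can be run with a local master. Here is a minimal, self-contained sketch (SvdLocalSketch is a hypothetical name) that also shows what computeSVD returns: U as a distributed RowMatrix, s as a local Vector of singular values, and V as a local Matrix:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.{Matrix, SingularValueDecomposition, Vectors}
import org.apache.spark.mllib.linalg.distributed.RowMatrix

object SvdLocalSketch {
  def main(args: Array[String]): Unit = {
    // local[*] runs driver and executors in one JVM, so no jar distribution
    // (and no driver/cluster version skew) is involved.
    val sc = new SparkContext(new SparkConf().setAppName("SvdLocalSketch").setMaster("local[*]"))
    val mat = new RowMatrix(sc.parallelize(List(
      Vectors.sparse(5, Seq((1, 1.0), (3, 7.0))),
      Vectors.dense(2.0, 0.0, 3.0, 4.0, 5.0),
      Vectors.dense(4.0, 0.0, 0.0, 6.0, 7.0))))

    val svd: SingularValueDecomposition[RowMatrix, Matrix] = mat.computeSVD(2, computeU = true)
    println(svd.s)                        // singular values, a local Vector
    println(svd.V)                        // right singular vectors, a local Matrix
    svd.U.rows.collect().foreach(println) // left singular vectors, a distributed RowMatrix
    sc.stop()
  }
}

If this local version succeeds while the clustered run fails with the same code, the problem lies in the deployment (classpath, jar, or version mismatch) rather than in the program itself.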

Error:

$ sbt run 
[warn] Executing in batch mode. 
[warn] For better performance, hit [ENTER] to switch to interactive mode, or 
[warn] consider launching sbt without any commands, or explicitly passing 'shell' 
[info] Loading project definition from /Users/ani.das/Projects/spark/MySpark/spark-test/project 
[info] Set current project to spark test (in build file:/Users/ani.das/Projects/spark/MySpark/spark-test/) 
[info] Running com.sracr.test.Test 
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
17/07/06 10:51:18 INFO SparkContext: Running Spark version 2.1.0 
17/07/06 10:51:18 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
17/07/06 10:51:18 WARN Utils: Your hostname, 127.0.0.1 resolves to a loopback address: 127.0.0.1; using 105.145.28.172 instead (on interface en0) 
17/07/06 10:51:18 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address 
17/07/06 10:51:18 INFO SecurityManager: Changing view acls to: ani.das 
17/07/06 10:51:18 INFO SecurityManager: Changing modify acls to: ani.das 
17/07/06 10:51:18 INFO SecurityManager: Changing view acls groups to: 
17/07/06 10:51:18 INFO SecurityManager: Changing modify acls groups to: 
17/07/06 10:51:18 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ani.das); groups with view permissions: Set(); users with modify permissions: Set(ani.das); groups with modify permissions: Set() 
17/07/06 10:51:19 INFO Utils: Successfully started service 'sparkDriver' on port 65196. 
17/07/06 10:51:19 INFO SparkEnv: Registering MapOutputTracker 
17/07/06 10:51:19 INFO SparkEnv: Registering BlockManagerMaster 
17/07/06 10:51:19 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 
17/07/06 10:51:19 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up 
17/07/06 10:51:19 INFO DiskBlockManager: Created local directory at /private/var/folders/4c/s3nt_0s96z57zfxc3dlyq3swjl4fq9/T/blockmgr-5317c7b1-ff8d-463a-8405-dd7c3f12074a 
17/07/06 10:51:19 INFO MemoryStore: MemoryStore started with capacity 408.9 MB 
17/07/06 10:51:19 INFO SparkEnv: Registering OutputCommitCoordinator 
17/07/06 10:51:19 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041. 
17/07/06 10:51:19 INFO Utils: Successfully started service 'SparkUI' on port 4041. 
17/07/06 10:51:19 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://105.145.28.172:4041 
17/07/06 10:51:19 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://127.0.0.1:7077... 
17/07/06 10:51:19 INFO TransportClientFactory: Successfully created connection to /127.0.0.1:7077 after 31 ms (0 ms spent in bootstraps) 
17/07/06 10:51:19 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20170706105119-0007 
17/07/06 10:51:19 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20170706105119-0007/0 on worker-20170706103441-105.145.28.172-64630 (105.145.28.172:64630) with 8 cores 
17/07/06 10:51:19 INFO StandaloneSchedulerBackend: Granted executor ID app-20170706105119-0007/0 on hostPort 105.145.28.172:64630 with 8 cores, 1024.0 MB RAM 
17/07/06 10:51:19 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20170706105119-0007/0 is now RUNNING 
17/07/06 10:51:19 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 65198. 
17/07/06 10:51:19 INFO NettyBlockTransferService: Server created on 105.145.28.172:65198 
17/07/06 10:51:19 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 
17/07/06 10:51:19 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 105.145.28.172, 65198, None) 
17/07/06 10:51:19 INFO BlockManagerMasterEndpoint: Registering block manager 105.145.28.172:65198 with 408.9 MB RAM, BlockManagerId(driver, 105.145.28.172, 65198, None) 
17/07/06 10:51:19 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 105.145.28.172, 65198, None) 
17/07/06 10:51:19 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 105.145.28.172, 65198, None) 
17/07/06 10:51:19 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0 
17/07/06 10:51:19 INFO SparkContext: Added JAR target/scala-2.11/spark-test_2.11-1.0.jar at spark://105.145.28.172:65196/jars/spark-test_2.11-1.0.jar with timestamp 1499363479936 
Hello, This is a start! 
17/07/06 10:51:20 INFO SparkContext: Starting job: first at RowMatrix.scala:61 
17/07/06 10:51:20 INFO DAGScheduler: Got job 0 (first at RowMatrix.scala:61) with 1 output partitions 
17/07/06 10:51:20 INFO DAGScheduler: Final stage: ResultStage 0 (first at RowMatrix.scala:61) 
17/07/06 10:51:20 INFO DAGScheduler: Parents of final stage: List() 
17/07/06 10:51:20 INFO DAGScheduler: Missing parents: List() 
17/07/06 10:51:20 INFO DAGScheduler: Submitting ResultStage 0 (ParallelCollectionRDD[0] at parallelize at test.scala:27), which has no missing parents 
17/07/06 10:51:20 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1408.0 B, free 408.9 MB) 
17/07/06 10:51:20 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 958.0 B, free 408.9 MB) 
17/07/06 10:51:20 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 105.145.28.172:65198 (size: 958.0 B, free: 408.9 MB) 
17/07/06 10:51:20 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996 
17/07/06 10:51:20 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (ParallelCollectionRDD[0] at parallelize at test.scala:27) 
17/07/06 10:51:20 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks 
17/07/06 10:51:21 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (105.145.28.172:65200) with ID 0 
17/07/06 10:51:21 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 105.145.28.172, executor 0, partition 0, PROCESS_LOCAL, 6215 bytes) 
17/07/06 10:51:21 INFO BlockManagerMasterEndpoint: Registering block manager 105.145.28.172:65202 with 366.3 MB RAM, BlockManagerId(0, 105.145.28.172, 65202, None) 
17/07/06 10:51:21 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 105.145.28.172:65202 (size: 958.0 B, free: 366.3 MB) 
17/07/06 10:51:22 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 503 ms on 105.145.28.172 (executor 0) (1/1) 
17/07/06 10:51:22 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
17/07/06 10:51:22 INFO DAGScheduler: ResultStage 0 (first at RowMatrix.scala:61) finished in 1.712 s 
17/07/06 10:51:22 INFO DAGScheduler: Job 0 finished: first at RowMatrix.scala:61, took 1.885817 s 
5 
17/07/06 10:51:22 INFO SparkContext: Starting job: treeAggregate at RowMatrix.scala:122 
17/07/06 10:51:22 INFO DAGScheduler: Got job 1 (treeAggregate at RowMatrix.scala:122) with 2 output partitions 
17/07/06 10:51:22 INFO DAGScheduler: Final stage: ResultStage 1 (treeAggregate at RowMatrix.scala:122) 
17/07/06 10:51:22 INFO DAGScheduler: Parents of final stage: List() 
17/07/06 10:51:22 INFO DAGScheduler: Missing parents: List() 
17/07/06 10:51:22 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[1] at treeAggregate at RowMatrix.scala:122), which has no missing parents 
17/07/06 10:51:22 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.6 KB, free 408.9 MB) 
17/07/06 10:51:22 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1560.0 B, free 408.9 MB) 
17/07/06 10:51:22 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 105.145.28.172:65198 (size: 1560.0 B, free: 408.9 MB) 
17/07/06 10:51:22 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:996 
17/07/06 10:51:22 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 1 (MapPartitionsRDD[1] at treeAggregate at RowMatrix.scala:122) 
17/07/06 10:51:22 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks 
17/07/06 10:51:22 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, 105.145.28.172, executor 0, partition 0, PROCESS_LOCAL, 6223 bytes) 
17/07/06 10:51:22 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 2, 105.145.28.172, executor 0, partition 1, PROCESS_LOCAL, 6258 bytes) 
17/07/06 10:51:22 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 105.145.28.172:65202 (size: 1560.0 B, free: 366.3 MB) 
17/07/06 10:51:22 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1, 105.145.28.172, executor 0): java.lang.NullPointerException 
    at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2028) 
    at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2028) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 
    at org.apache.spark.scheduler.Task.run(Task.scala:99) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

17/07/06 10:51:22 INFO TaskSetManager: Lost task 1.0 in stage 1.0 (TID 2) on 105.145.28.172, executor 0: java.lang.NullPointerException (null) [duplicate 1] 
17/07/06 10:51:22 INFO TaskSetManager: Starting task 1.1 in stage 1.0 (TID 3, 105.145.28.172, executor 0, partition 1, PROCESS_LOCAL, 6258 bytes) 
17/07/06 10:51:22 INFO TaskSetManager: Starting task 0.1 in stage 1.0 (TID 4, 105.145.28.172, executor 0, partition 0, PROCESS_LOCAL, 6223 bytes) 
17/07/06 10:51:22 INFO TaskSetManager: Lost task 1.1 in stage 1.0 (TID 3) on 105.145.28.172, executor 0: java.lang.NullPointerException (null) [duplicate 2] 
17/07/06 10:51:22 INFO TaskSetManager: Starting task 1.2 in stage 1.0 (TID 5, 105.145.28.172, executor 0, partition 1, PROCESS_LOCAL, 6258 bytes) 
17/07/06 10:51:22 INFO TaskSetManager: Lost task 0.1 in stage 1.0 (TID 4) on 105.145.28.172, executor 0: java.lang.NullPointerException (null) [duplicate 3] 
17/07/06 10:51:22 INFO TaskSetManager: Starting task 0.2 in stage 1.0 (TID 6, 105.145.28.172, executor 0, partition 0, PROCESS_LOCAL, 6223 bytes) 
17/07/06 10:51:22 INFO TaskSetManager: Lost task 0.2 in stage 1.0 (TID 6) on 105.145.28.172, executor 0: java.lang.NullPointerException (null) [duplicate 4] 
17/07/06 10:51:22 INFO TaskSetManager: Starting task 0.3 in stage 1.0 (TID 7, 105.145.28.172, executor 0, partition 0, PROCESS_LOCAL, 6223 bytes) 
17/07/06 10:51:22 INFO TaskSetManager: Lost task 1.2 in stage 1.0 (TID 5) on 105.145.28.172, executor 0: java.lang.NullPointerException (null) [duplicate 5] 
17/07/06 10:51:22 INFO TaskSetManager: Starting task 1.3 in stage 1.0 (TID 8, 105.145.28.172, executor 0, partition 1, PROCESS_LOCAL, 6258 bytes) 
17/07/06 10:51:22 INFO TaskSetManager: Lost task 0.3 in stage 1.0 (TID 7) on 105.145.28.172, executor 0: java.lang.NullPointerException (null) [duplicate 6] 
17/07/06 10:51:22 ERROR TaskSetManager: Task 0 in stage 1.0 failed 4 times; aborting job 
17/07/06 10:51:22 INFO TaskSetManager: Lost task 1.3 in stage 1.0 (TID 8) on 105.145.28.172, executor 0: java.lang.NullPointerException (null) [duplicate 7] 
17/07/06 10:51:22 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
17/07/06 10:51:22 INFO TaskSchedulerImpl: Cancelling stage 1 
17/07/06 10:51:22 INFO DAGScheduler: ResultStage 1 (treeAggregate at RowMatrix.scala:122) failed in 0.391 s due to Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 7, 105.145.28.172, executor 0): java.lang.NullPointerException 
    at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2028) 
    at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2028) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 
    at org.apache.spark.scheduler.Task.run(Task.scala:99) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

Driver stacktrace: 
17/07/06 10:51:22 INFO DAGScheduler: Job 1 failed: treeAggregate at RowMatrix.scala:122, took 0.403468 s 
[error] (run-main-0) org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 7, 105.145.28.172, executor 0): java.lang.NullPointerException 
[error]  at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2028) 
[error]  at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2028) 
[error]  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 
[error]  at org.apache.spark.scheduler.Task.run(Task.scala:99) 
[error]  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322) 
[error]  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[error]  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[error]  at java.lang.Thread.run(Thread.java:745) 
[error] 
[error] Driver stacktrace: 
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 7, 105.145.28.172, executor 0): java.lang.NullPointerException 
    at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2028) 
    at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2028) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 
    at org.apache.spark.scheduler.Task.run(Task.scala:99) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

Driver stacktrace: 
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802) 
    at scala.Option.foreach(Option.scala:257) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594) 
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1981) 
    at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1025) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362) 
    at org.apache.spark.rdd.RDD.reduce(RDD.scala:1007) 
    at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1.apply(RDD.scala:1150) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362) 
    at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1127) 
    at org.apache.spark.mllib.linalg.distributed.RowMatrix.computeGramianMatrix(RowMatrix.scala:122) 
    at org.apache.spark.mllib.linalg.distributed.RowMatrix.computeSVD(RowMatrix.scala:259) 
    at org.apache.spark.mllib.linalg.distributed.RowMatrix.computeSVD(RowMatrix.scala:194) 
    at com.sracr.test.Test$.main(test.scala:33) 
    at com.sracr.test.Test.main(test.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
Caused by: java.lang.NullPointerException 
    at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2028) 
    at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2028) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 
    at org.apache.spark.scheduler.Task.run(Task.scala:99) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
[trace] Stack trace suppressed: run last compile:run for the full output. 
17/07/06 10:51:22 ERROR ContextCleaner: Error in cleaning thread 
java.lang.InterruptedException 
    at java.lang.Object.wait(Native Method) 
    at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143) 
    at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:175) 
    at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1245) 
    at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:172) 
    at org.apache.spark.ContextCleaner$$anon$1.run(ContextCleaner.scala:67) 
17/07/06 10:51:22 ERROR Utils: uncaught error in thread SparkListenerBus, stopping SparkContext 
java.lang.InterruptedException 
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998) 
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
    at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:80) 
    at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79) 
    at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79) 
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58) 
    at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78) 
    at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1245) 
    at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77) 
17/07/06 10:51:22 ERROR Utils: throw uncaught fatal error in thread SparkListenerBus 
java.lang.InterruptedException 
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998) 
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
    at java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
    at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:80) 
    at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79) 
    at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79) 
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58) 
    at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78) 
    at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1245) 
    at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77) 
17/07/06 10:51:22 INFO SparkUI: Stopped Spark web UI at http://105.145.28.172:4041 
17/07/06 10:51:22 INFO StandaloneSchedulerBackend: Shutting down all executors 
java.lang.RuntimeException: Nonzero exit code: 1 
    at scala.sys.package$.error(package.scala:27) 
[trace] Stack trace suppressed: run last compile:run for the full output. 
[error] (compile:run) Nonzero exit code: 1 
[error] Total time: 5 s, completed Jul 6, 2017 10:51:22 AM 
17/07/06 10:51:22 INFO DiskBlockManager: Shutdown hook called 
17/07/06 10:51:22 INFO ShutdownHookManager: Shutdown hook called 
17/07/06 10:51:22 INFO ShutdownHookManager: Deleting directory /private/var/folders/4c/s3nt_0s96z57zfxc3dlyq3swjl4fq9/T/spark-4bc93888-b930-4621-9cc8-b44fc3b6bd9e 
17/07/06 10:51:22 INFO ShutdownHookManager: Deleting directory /private/var/folders/4c/s3nt_0s96z57zfxc3dlyq3swjl4fq9/T/spark-4bc93888-b930-4621-9cc8-b44fc3b6bd9e/userFiles-c5140f8e-6edb-4c89-b113-02d406019feb 

This problem was due to a version mismatch and is now resolved. I was using MLlib 2.1.0 against Spark 2.1.1. – user2975761

Answer


This problem was due to a version mismatch and is now resolved: I was using MLlib 2.1.0 against a Spark 2.1.1 cluster. Pinning the MLlib dependency to the same version as the running cluster fixed it.
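
For reference, a minimal build.sbt sketch for that fix; the name and scalaVersion are inferred from the jar name spark-test_2.11-1.0.jar above, and the exact versions should be adjusted to match your cluster:

name := "spark-test"

version := "1.0"

scalaVersion := "2.11.8"

// Keep every Spark module on the exact version of the running cluster
// (2.1.1 here, per the comment above); mixing a 2.1.0 spark-mllib with a
// 2.1.1 cluster is what triggered the NullPointerException on the executors.
val sparkVersion = "2.1.1"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % sparkVersion,
  "org.apache.spark" %% "spark-mllib" % sparkVersion
)

Note that the dependencies are left in the default compile scope so that sbt run still works; when submitting with spark-submit instead, they would typically be marked "provided".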