I am new to Scala and Spark. Why does lines.map not work in Spark, but lines.take(...).map does?

I was working through the SparkHdfsLR.scala example code, but I ran into a problem with this part of it:

60 val lines = sc.textFile(inputPath) 
61 val points = lines.map(parsePoint _).cache() 
62 val ITERATIONS = args(2).toInt 

Line 61 does not work. I then changed it to this:

60 val lines = sc.textFile(inputPath) 
61 val points = lines.take(149800).map(parsePoint _) //149800 is the total number of lines 
62 val ITERATIONS = args(2).toInt 
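
As far as I can tell, the two versions are not equivalent: lines.map builds a lazy, distributed RDD transformation that executes on the executors, while lines.take(149800) is an action that returns a plain local array to the driver, so the .map after it runs as ordinary Scala in the driver JVM. A minimal sketch of the distinction (DataPoint is the type parsePoint returns in SparkHdfsLR.scala):

// Distributed: builds an RDD[DataPoint]; nothing executes until an action runs. 
val points: org.apache.spark.rdd.RDD[DataPoint] = lines.map(parsePoint _).cache() 

// Driver-local: take() materializes an Array[String] on the driver, so this 
// .map is plain Scala collection code, not a Spark job on the executors. 
val localPoints: Array[DataPoint] = lines.take(149800).map(parsePoint _) 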

With the original line 61, the error message when running from SBT is:

[error] (run-main) org.apache.spark.SparkException: Job failed: Task 0.0:1 failed more than 4 times 
org.apache.spark.SparkException: Job failed: Task 0.0:1 failed more than 4 times 
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:760) 
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:758) 
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60) 
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:758) 
at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:379) 
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:441) 
at org.apache.spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:149) 
java.lang.RuntimeException: Nonzero exit code: 1 
at scala.sys.package$.error(package.scala:27) 
[error] {file:/var/sdb/home/tim.tan/workspace/spark/}default-d3d73f/compile:run: Nonzero exit code: 1 
[error] Total time: 52 s, completed Dec 20, 2013 5:42:18 PM 

The stderr output from the task node is:

13/12/20 17:42:16 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started 
13/12/20 17:42:16 INFO executor.StandaloneExecutorBackend: Connecting to driver: akka://[email protected]:38975/user/StandaloneScheduler 
13/12/20 17:42:17 INFO executor.StandaloneExecutorBackend: Successfully registered with driver 
13/12/20 17:42:17 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started 
13/12/20 17:42:17 INFO spark.SparkEnv: Connecting to BlockManagerMaster: akka://[email protected]:38975/user/BlockManagerMaster 
13/12/20 17:42:17 INFO storage.MemoryStore: MemoryStore started with capacity 323.9 MB. 
13/12/20 17:42:17 INFO storage.DiskStore: Created local directory at /tmp/spark-local-20131220174217-be8e 
13/12/20 17:42:17 INFO network.ConnectionManager: Bound socket to port 52043 with id = ConnectionManagerId(TS-BH90,52043) 
13/12/20 17:42:17 INFO storage.BlockManagerMaster: Trying to register BlockManager 
13/12/20 17:42:17 INFO storage.BlockManagerMaster: Registered BlockManager 
13/12/20 17:42:17 INFO spark.SparkEnv: Connecting to MapOutputTracker: akka://[email protected]:38975/user/MapOutputTracker 
13/12/20 17:42:17 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-1b1a6c0b-965e-4834-a3d3-554c95442041 
13/12/20 17:42:17 INFO server.Server: jetty-7.x.y-SNAPSHOT 
13/12/20 17:42:17 INFO server.AbstractConnector: Started [email protected]:41811 
13/12/20 17:42:18 ERROR executor.StandaloneExecutorBackend: Driver terminated or disconnected! Shutting down. 

The worker log is as follows:

13/12/19 17:49:26 INFO worker.Worker: Asked to launch executor app-20131219174926-0001/2 for SparkHdfsLR 
13/12/19 17:49:26 INFO worker.ExecutorRunner: Launch command: "java" "-cp" ":/var/bh/spark/conf:/var/bh/spark/assembly/target/scala-2.9.3/spark-assembly-0.8.0-incubating-hadoop1.0.3.jar:/var/bh/spark/core/target/scala-2.9.3/test-classes:/var/bh/spark/repl/target/scala-2.9.3/test-classes:/var/bh/spark/mllib/target/scala-2.9.3/test-classes:/var/bh/spark/bagel/target/scala-2.9.3/test-classes:/var/bh/spark/streaming/target/scala-2.9.3/test-classes" "-Djava.library.path=/var/bh/hadoop/lib/native/Linux-amd64-64/" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.StandaloneExecutorBackend" "akka://[email protected]:56158/user/StandaloneScheduler" "2" "TS-BH87" "8" 
13/12/19 17:49:30 INFO worker.Worker: Asked to kill executor app-20131219174926-0001/2 
13/12/19 17:49:30 INFO worker.ExecutorRunner: Runner thread for executor app-20131219174926-0001/2 interrupted 
13/12/19 17:49:30 INFO worker.ExecutorRunner: Killing process! 

It looks like the executor failed to launch successfully.

I do not know why. Can anyone give me a suggestion?

Please specify the type of 'lines'. – senia

@senia It is an [RDD](https://spark.incubator.apache.org/docs/0.6.0/api/core/spark/RDD.html) –

What do you mean by "does not work"? –

Answer

I found out why it does not work.

Because of a configuration mistake, Spark could only run in standalone mode. Correct the configuration: if you want the code to run in distributed mode, the last two parameters of the SparkContext constructor must be specified:

new SparkContext(master, jobName, [sparkHome], [jars]) 

If the last two parameters are not specified, the Scala script only works in standalone mode.
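
For example, the stock SparkHdfsLR example in Spark 0.8 reads these values from environment variables. A minimal sketch, assuming SPARK_HOME is exported and the job has been assembled into a jar (the master URL and jar path below are placeholders, not values from this setup):

import org.apache.spark.SparkContext 

// Master URL, job name, the Spark home on the workers, and the jar(s) shipped 
// to the executors. Without the sparkHome and jars arguments, the executors 
// cannot load the job's classes, so the script only works in standalone mode. 
val sc = new SparkContext( 
  "spark://master-host:7077",            // placeholder: your cluster's master URL 
  "SparkHdfsLR", 
  System.getenv("SPARK_HOME"),           // Spark installation directory on the nodes 
  Seq("target/scala-2.9.3/my-job.jar"))  // placeholder: your assembled job jar 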
