Apache Spark stderr and stdout

I am running spark-1.0.0 by connecting to a Spark standalone cluster that has one master and two slaves. I run wordcount.py through spark-submit; it reads data from HDFS and writes the results back to HDFS. So far everything is fine, and the results are written to HDFS correctly. But what concerns me is that when I check the stdout of each worker, it is empty. I don't know whether it is supposed to be empty? This is what I get in stderr:
stderr log page for app-20140704174955-0002

Spark Executor Command: "java" "-cp" "::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar:/usr/local/hadoop/conf" "-XX:MaxPermSize=128m" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "akka.tcp://[email protected]:54477/user/CoarseGrainedScheduler" "0" "slave2" "1" "akka.tcp://[email protected]:41483/user/Worker" "app-20140704174955-0002"
========================================

14/07/04 17:50:14 ERROR CoarseGrainedExecutorBackend: Driver Disassociated [akka.tcp://[email protected]:33758] -> [akka.tcp://[email protected]:54477] disassociated! Shutting down.
That is fine. Your driver has finished its job (the word count) and disconnected. – cloud
What about stdout? It is empty; does that make sense? – user3789843
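For reference, the question does not include the script itself; a minimal PySpark word count along the lines described (reading from HDFS and writing the result back to HDFS), using the Spark 1.0-era API, might look like the sketch below. The HDFS paths, host names, and master URL are assumptions, not taken from the question.

# wordcount.py -- a minimal sketch; paths and URLs below are assumptions,
# not taken from the question.
from pyspark import SparkContext

if __name__ == "__main__":
    sc = SparkContext(appName="PythonWordCount")

    # Read the input text from HDFS (hypothetical path).
    lines = sc.textFile("hdfs://master:9000/user/hadoop/input.txt")

    # Classic word count: split into words, pair each word with 1,
    # then sum the counts per word.
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))

    # Write the result back to HDFS (hypothetical path).
    counts.saveAsTextFile("hdfs://master:9000/user/hadoop/wordcount-output")

    sc.stop()

# Submitted roughly like this (cluster URL is an assumption):
#   spark-submit --master spark://master:7077 wordcount.py

Note that a script like this never prints anything from inside its tasks, which is consistent with the worker stdout pages being empty; Spark's own executor logging goes through log4j to stderr by default, which matches the stderr output shown above.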