Spark high availability

I'm using Spark 1.2.1 with a master/slave configuration and three workers, and I run a daily job on the three nodes via crontab:
./spark-1.2.1/sbin/start-all.sh
//crontab configuration:
./spark-1.2.1/bin/spark-submit --master spark://11.11.11.11:7077 --driver-class-path /home/ubuntu/spark-cassandra-connector-java-assembly-1.2.1-FAT.jar --class "$class" "$jar"
I want the Spark master and workers to be available at all times: if one of them fails, I need it to be restarted automatically, like a service (the way Cassandra behaves).
Is there any way to do that?
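Since the goal is "run like a service, the way Cassandra does", one option on distributions that use systemd is a unit file per daemon on each node. This is only a sketch under my own assumptions: the install path /home/ubuntu/spark-1.2.1, the `ubuntu` user, and the PID-file name are guesses you would need to adapt; Spark's start scripts daemonize, hence Type=forking.

```ini
# /etc/systemd/system/spark-master.service  (hypothetical unit, adjust paths)
[Unit]
Description=Spark standalone master
After=network.target

[Service]
Type=forking
User=ubuntu
ExecStart=/home/ubuntu/spark-1.2.1/sbin/start-master.sh
ExecStop=/home/ubuntu/spark-1.2.1/sbin/stop-master.sh
# Spark writes its pid under /tmp as spark-<user>-<class>-<instance>.pid by default;
# verify the exact name on your machine before relying on it.
PIDFile=/tmp/spark-ubuntu-org.apache.spark.deploy.master.Master-1.pid
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

An analogous unit running start-slave.sh/stop-slave.sh would cover each worker; after `systemctl daemon-reload` and `systemctl enable spark-master`, systemd restarts the daemon when it exits abnormally.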
EDIT:
I looked into the start-all.sh script; it just invokes the start-master.sh script and then the start-slaves.sh script. I tried to create a supervisor configuration file for it, but only got the following errors:
11.11.11.11: ssh: connect to host 11.11.11.12 port 22: No route to host
11.11.11.13: org.apache.spark.deploy.worker.Worker running as process 14627. Stop it first.
11.11.11.11: ssh: connect to host 11.11.11.12 port 22: No route to host
11.11.11.12: ssh: connect to host 11.11.11.13 port 22: No route to host
11.11.11.11: org.apache.spark.deploy.worker.Worker running as process 14627. Stop it first.
11.11.11.12: ssh: connect to host 11.11.11.12 port 22: No route to host
11.11.11.13: ssh: connect to host 11.11.11.13 port 22: No route to host
11.11.11.11: org.apache.spark.deploy.worker.Worker running as process 14627. Stop it first.
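Part of the problem is that supervisord only manages foreground processes, while start-all.sh SSHes into the other nodes and launches daemonized JVMs, so supervising it directly cannot work (the "Stop it first" lines suggest the old daemons were still running; the "No route to host" lines point to a separate SSH/network issue between the nodes). A common workaround is a per-node supervisord program entry that runs the worker class in the foreground via bin/spark-class. This is a sketch under my own assumptions: the install path, log locations, and user are placeholders to adapt.

```ini
; /etc/supervisor/conf.d/spark-worker.conf  (hypothetical, one per worker node)
; spark-class keeps the JVM in the foreground, which is what supervisord needs.
[program:spark-worker]
command=/home/ubuntu/spark-1.2.1/bin/spark-class org.apache.spark.deploy.worker.Worker spark://11.11.11.11:7077
user=ubuntu
autostart=true
autorestart=true
stdout_logfile=/var/log/spark-worker.out.log
stderr_logfile=/var/log/spark-worker.err.log
```

The master would get an analogous entry running org.apache.spark.deploy.master.Master; with one config per node there is no SSH fan-out at all, which also sidesteps the errors above.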
You need a Mesos or Chronos cluster, or something similar – eliasah