
Cannot start Apache Spark standalone cluster

I'm having trouble starting a Spark cluster with one master and one worker. I downloaded and installed Hadoop 2.7.3 and Spark 2.0.0 on Ubuntu 16.04 LTS. I created a conf/slaves file with my slave's IP, and this is my spark-env.sh:

#!/usr/bin/env bash 

export SPARK_DIST_CLASSPATH=$(hadoop classpath) 


export SPARK_WORKER_CORES=2 

export SPARK_MASTER_IP=192.168.1.6 
export SPARK_LOCAL_IP=192.168.1.6 

export SPARK_YARN_USER_ENV="JAVA_HOME=/usr/lib/jvm/java-8-oracle/jre" 
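For reference, the conf/slaves file mentioned above only needs the worker addresses, one per line. A minimal sketch, assuming the worker's address is 192.168.1.7 (that IP is a placeholder, not from the question):

# conf/slaves -- one worker hostname or IP per line
192.168.1.7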

I start the master with start-master.sh and everything is fine. The problems begin when I try to launch the worker.

I tried:

(1) - start-slave.sh spark://192.168.1.6:7077 (from worker) 
(2) - start-slaves.sh (from master) 
(3) - ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://192.168.1.6:7077 (from worker) 

隨着(1)E(2)從設備顯然開始,但在主站:8080它被未示出。使用(3)它拋出這個異常:

16/08/31 14:17:03 INFO worker.Worker: Connecting to master master:7077... 
16/08/31 14:17:03 WARN worker.Worker: Failed to connect to master master:7077 
org.apache.spark.SparkException: Exception thrown in awaitResult 
    at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77) 
    at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75) 
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) 
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59) 
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59) 
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167) 
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83) 
    at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:88) 
    at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:96) 
    at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1$$anon$1.run(Worker.scala:216) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: java.io.IOException: Failed to connect to master/192.168.1.6:7077 
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:228) 
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:179) 
    at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197) 
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:191) 
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187) 
    ... 4 more 
Caused by: java.net.ConnectException: Connection refused: master/192.168.1.6:7077 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224) 
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289) 
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528) 
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) 
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) 
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) 
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) 
    ... 1 more 
16/08/31 14:17:40 ERROR worker.Worker: All masters are unresponsive! Giving up. 

The master and the worker are hosted on VMware virtual machines running on the same Windows 10 host, using bridged networking.

I have also disabled the firewall.

What should I do?

Thanks in advance.


Check whether your master can reach your worker and vice versa. – Ravikumar
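A quick way to act on that suggestion (standard Linux tools, not part of the original thread) is to check from the worker VM that the master's port is reachable, and on the master that something is actually listening on it:

# on the worker VM: test the master URL used above
nc -zv 192.168.1.6 7077

# on the master VM: show which address port 7077 is bound to
ss -tlnp | grep 7077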

Answer


In the log:

16/08/31 14:17:03 INFO worker.Worker: Connecting to master master:7077... 

you can see that it is trying to connect to master:7077.

Make sure the hostname master resolves to the intended IP (192.168.1.6).

You can check the hostname mapping in the /etc/hosts file.
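For illustration, an /etc/hosts entry mapping the hostname master to the IP from the question would look like the line below, and getent shows how the name actually resolves on each machine:

# /etc/hosts on both master and worker
192.168.1.6    master

# verify the resolution
getent hosts master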


The hostname resolves to the correct IP. I have already tried both the hostname and the IP. Thanks for your answer. –
