
I have configured an Apache Hadoop cluster with one NameNode and two DataNodes in VMware Workstation. The NameNode works fine and passwordless SSH login is set up, but when I try to start the DataNodes I get the following error: INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 0 time(s)

On both DataNodes, the DataNode log shows these retry errors for the connection to the NameNode, even though I can ping and connect to the NameNode without any error.
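To clarify what "connect" means here: ping only proves ICMP reachability, while the DataNode needs a TCP connection to the NameNode's RPC port 9000. Assuming nc is installed on the DataNodes (telnet nn1.hcluster.com 9000 works the same way), that specific port can be probed like this:

ping -c 3 nn1.hcluster.com
nc -vz nn1.hcluster.com 9000

If the nc probe also fails with "No route to host", the TCP connection is being blocked before it ever reaches the Hadoop process.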

Below is the log from the DataNode:

2015-11-14 19:54:22,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
/************************************************************ 
STARTUP_MSG: Starting DataNode 
STARTUP_MSG: host = dn2.hcluster.com/192.168.155.133 
STARTUP_MSG: args = [] 
STARTUP_MSG: version = 1.2.1 
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013 
STARTUP_MSG: java = 1.8.0_65 
************************************************************/ 
2015-11-14 19:54:23,447 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 
2015-11-14 19:54:23,485 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered. 
2015-11-14 19:54:23,486 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 
2015-11-14 19:54:23,486 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started 
2015-11-14 19:54:23,876 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered. 
2015-11-14 19:54:25,720 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:27,723 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:28,726 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:29,729 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:30,733 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:31,753 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:32,755 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:33,758 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:34,762 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:35,764 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:35,922 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to nn1.hcluster.com/192.168.155.131:9000 failed on local exception: java.net.NoRouteToHostException: No route to host 
     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1150) 
     at org.apache.hadoop.ipc.Client.call(Client.java:1118) 
     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229) 
     at com.sun.proxy.$Proxy4.getProtocolVersion(Unknown Source) 
     at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422) 
     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:414) 
     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:392) 
     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:374) 
     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:453) 
     at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:335) 
     at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:300) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812) 
Caused by: java.net.NoRouteToHostException: No route to host 
     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 
     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511) 
     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481) 
     at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:457) 
     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:583) 
     at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205) 
     at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249) 
     at org.apache.hadoop.ipc.Client.call(Client.java:1093) 
     ... 16 more 

2015-11-14 19:54:35,952 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down DataNode at dn2.hcluster.com/192.168.155.133 
************************************************************/ 

From DataNodes 1 and 2, the NameNode and its web GUI are reachable, and all three machines can communicate with each other via ping and passwordless SSH. Please help with this NameNode connection issue.

core-site.xml

<configuration> 
  <property> 
    <name>fs.default.name</name> 
    <value>hdfs://nn01.hcluster.com:9000</value> 
  </property> 
</configuration> 

Answer


Make sure your NameNode is running properly. If that is not the problem, check the machine IPs and hostnames in the /etc/hosts file on each node.
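A quick sanity check, assuming a standard Hadoop 1.x installation: run jps on the NameNode machine and confirm a NameNode process is listed, then confirm that port 9000 is actually listening:

jps | grep NameNode
netstat -tlnp | grep 9000

If port 9000 turns out to be bound to 127.0.0.1 rather than the machine's LAN address, remote DataNodes will not be able to reach it.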

Make sure you have added the hostname "nn01.hcluster.com" there.
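For illustration, a minimal /etc/hosts could look like the sketch below, reusing the two IPs that appear in the log; the dn1 address is not shown in the question, so it is left as a placeholder. The same entries should exist on all three machines. Also note that the log refers to nn1.hcluster.com while core-site.xml uses nn01.hcluster.com; whichever name is configured must resolve to the NameNode's IP on every node.

192.168.155.131 nn1.hcluster.com nn1
192.168.155.133 dn2.hcluster.com dn2
# dn1's IP is not shown in the question; add it here with its real address:
# x.x.x.x dn1.hcluster.com dn1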
