I can start Hadoop successfully, but the DataNode (slave) cannot connect to the NameNode (master). The slave process is running, yet it keeps failing to connect to the NameNode on the master:

2016-11-09 16:00:15,953 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: master/192.168.1.101:9000 
2016-11-09 16:00:21,957 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.101:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-11-09 16:00:22,965 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.101:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 

Here is the full /etc/hosts:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 
::1   localhost localhost.localdomain localhost6 localhost6.localdomain6 

192.168.1.101 master 
192.168.1.102 slave1 

core-site.xml:

<configuration> 
<property> 
<name>fs.defaultFS</name> 
<value>hdfs://master:9000</value> 
</property> 
</configuration> 

and hdfs-site.xml:

<configuration>
<property>
<name>dfs.replication</name> 
<value>1</value> 
</property> 

<property> 
<name>dfs.namenode.name.dir</name> 
<value>file:///opt/volume/namenode</value> 
</property> 

<property> 
<name>dfs.datanode.data.dir</name> 
<value>file:///opt/volume/datanode</value> 
</property>
</configuration>
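One quick way to narrow this down (a sketch, not from the original post; netcat and netstat may need to be installed) is to test from the slave whether master:9000 is reachable at all, and to check on the master which address the NameNode RPC port is actually bound to:

# On the slave: test whether master:9000 answers at the TCP level
nc -zv master 9000

# On the master: see which address port 9000 is bound to.
# 192.168.1.101:9000 (or 0.0.0.0:9000) is reachable from the slave; 127.0.0.1:9000 is not.
sudo netstat -tlnp | grep 9000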

Could you share your /etc/hosts? –


That's fine, I appreciate the help – ThLez


I have edited the question, can you help me now? – ThLez

Answer


1) Check whether a firewall is blocking the port:

sudo iptables -L 

If it is, flush the rules (e.g. sudo iptables -F) or explicitly allow the port.

To open port 9000:

$ sudo iptables -A INPUT -p tcp -m tcp --dport 9000 -j ACCEPT 
$ sudo /etc/init.d/iptables save 
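After saving the rules, you can confirm from the slave that the port is now reachable (assuming netcat is installed; telnet master 9000 works too):

# From the slave: success here means the connection is no longer being blocked
nc -zv master 9000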

2) Check the NameNode logs under /var/log/hadoop for any problems.
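For example (the exact file name depends on the user and host Hadoop runs as, so treat the path below as a sketch):

# Look for bind/startup errors in the NameNode log
grep -iE "error|exception|bind" /var/log/hadoop/hadoop-*-namenode-*.log | tail -n 20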


'at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802) at java.lang.Thread.run(Thread.java:745)' – ThLez


Should I disable the firewall? – ThLez


Maybe the firewall is blocking connections to port 9000 on the master –
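To rule the firewall out quickly, you can stop it temporarily on the master and retry the DataNode (a sketch; the service name depends on the distribution):

# CentOS 6 style
sudo service iptables stop
# CentOS 7 style
sudo systemctl stop firewalld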
