Hadoop datanode slave not connecting to my master

With so many errors, I can't figure out why my datanode slave VM is not connecting to my master VM. Any suggestion is welcome, so I can try it. To start with, one of them is this error in my slave VM log:
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:9000
Because of this, I can't run the job I want on my master VM:
hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5
which gives me this error:
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/ubuntu/QuasiMonteCarlo_1386793331690_1605707775/in/part0 could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
Even so, hdfs dfsadmin -report (on the master VM) gives me all zeros:
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Datanodes available: 0 (0 total, 0 dead)
For this, I built 3 Ubuntu VMs on OpenStack, one for the master and the others as slaves. On the master, /etc/hosts contains:
127.0.0.1 localhost
50.50.1.9 ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8
50.50.1.8 slave1
50.50.1.4 slave2
core-site.xml is set up with:
<property>
  <name>fs.default.name</name>
  <value>hdfs://ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/ubuntu/hadoop-2.2.0/tmp</value>
</property>
hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/ubuntu/hadoop-2.2.0/etc/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/ubuntu/hadoop-2.2.0/etc/hdfs/datanode</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
mapred-site.xml:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
And my slaves file contains one line for each slave VM: slave1 and slave2.
None of the master VM logs show errors, but when I use the slave VMs they give the connection error above, and the NodeManager gives me errors in its log too:
Error starting NodeManager org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.ConnectException: Call From ubuntu-e6df65dc-bf95-45ca-bad5-f8ddcc272b76/50.50.1.8 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused;
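The 0.0.0.0:8031 in that trace makes me suspect the NodeManager is falling back to the default resource-tracker address because nothing in my yarn-site.xml points it at the master. I'm not sure this is the fix, but a sketch of the yarn-site.xml entry that would override that default (property name taken from the YARN defaults; the hostname is my master's) might look like:

<property>
  <!-- assumption: without this, the NodeManager tries the default 0.0.0.0:8031 -->
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:8031</value>
</property>

This would go in yarn-site.xml on every slave VM, so the NodeManagers register with the ResourceManager on the master instead of on themselves.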
On my slave VMs: core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/ubuntu/hadoop-2.2.0/tmp</value>
</property>
hdfs-site.xml:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/ubuntu/hadoop-2.2.0/etc/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/ubuntu/hadoop-2.2.0/etc/hdfs/datanode</value>
</property>
And my /etc/hosts:
127.0.0.1 localhost
50.50.1.8 ubuntu-e6df65dc-bf95-45ca-bad5-f8ddcc272b76
50.50.1.9 ubuntu-378e53c1-3e1f-4f6e-904d-00ef078fe3f8
jps on the master:
15863 ResourceManager
15205 SecondaryNameNode
14967 NameNode
16194 Jps
jps on the slave:
1988 Jps
1365 DataNode
1894 NodeManager
"Call From ubuntu-e6df65dc-bf95-45ca-bad5-f8ddcc272b76/50.50.1.8 to 0.0.0.0:8031 failed" – why is it trying to connect to 0.0.0.0? – Suman
Should it be connecting to 50.50.1.9? – fsi