
I am new to Hadoop and HDFS. I get an error when running hdfs namenode -format. These are the steps I have followed:

I have started ZooKeeper on the three nodes:

*[email protected]:~$ zkServer.sh start 

I can check its status:

*[email protected]:~$ zkServer.sh status 

The status output is:

    JMX enabled by default 
    Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg 
    Mode: follower 

With the jps command, often only Jps shows up; sometimes QuorumPeerMain appears too:

*[email protected]:~$ jps 
    2237 Jps 
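
For comparison, when the ZooKeeper daemon is actually running on a node, jps should also list a QuorumPeerMain process (the PID below is illustrative):

    2237 Jps 
    2190 QuorumPeerMain 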

Then I ran the following command:

* [email protected]:~$ hdfs zkfc -formatZK 

When I execute the following, it fails:

[email protected]:~$ hdfs namenode -format 


15/12/02 10:54:34 WARN client.QuorumJournalManager: Waited 59880 ms (timeout=60000 ms) for a response for hasSomeData. No responses yet. 
15/12/02 10:54:34 WARN namenode.NameNode: Encountered exception during format: 
java.io.IOException: Timed out waiting for response from loggers 
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:228) 
    at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:899) 
    at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:940) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1382) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1507) 
15/12/02 10:54:37 FATAL namenode.NameNode: Failed to start namenode. 
java.io.IOException: Timed out waiting for response from loggers 
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:228) 
    at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:899) 
    at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:940) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1382) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1507) 
15/12/02 10:54:38 INFO util.ExitUtil: Exiting with status 1 
15/12/02 10:54:39 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1 
************************************************************/ 

Then, if I run:

*[email protected]:~$ hdfs namenode -bootstrapStandby 

I get this error:

15/12/02 10:58:45 FATAL namenode.NameNode: Failed to start namenode. 
java.io.IOException: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: :0 
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:425) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1507) 
Caused by: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: :0 
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212) 
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164) 
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153) 
    at org.apache.hadoop.hdfs.DFSUtil.substituteForWildcardAddress(DFSUtil.java:1171) 
    at org.apache.hadoop.hdfs.DFSUtil.getInfoServerWithDefaultHost(DFSUtil.java:1138) 
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.parseConfAndFindOtherNN(BootstrapStandby.java:397) 
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:103) 
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) 
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) 
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:420) 
    ... 2 more 
15/12/02 10:58:46 INFO util.ExitUtil: Exiting with status 1 
15/12/02 10:58:46 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1 
************************************************************/ 

Any help with this error would be greatly appreciated.

My configuration is the following:

.bashrc

###JAVA CONFIGURATION### 
JAVA_HOME=/usr/lib/jvm/java-8-oracle 
export PATH=$PATH:$JAVA_HOME/bin 

###HADOOP CONFIGURATION### 
HADOOP_PREFIX=/opt/hadoop-2.6.2/ 
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin 

###ZOOKEPER### 
export PATH=$PATH:/opt/zookeeper-3.4.6/bin 

hdfs-site.xml

<configuration> 
<property> 
    <name>dfs.replication</name> 
    <value>2</value> 
</property> 
<property> 
    <name>dfs.name.dir</name> 
    <value>file:///hdfs/name</value> 
</property> 
<property> 
    <name>dfs.data.dir</name> 
    <value>file:///hdfs/data</value> 
</property> 
<property> 
    <name>dfs.permissions</name> 
    <value>false</value> 
</property> 
<property> 
    <name>dfs.nameservices</name> 
    <value>auto-ha</value> 
</property> 
<property> 
    <name>dfs.ha.namenodes.auto-ha</name> 
    <value>nn01,nn02</value> 
</property> 
<property> 
    <name>dfs.namenode.rpc-address.auto-ha.nn01</name> 
    <value>172.16.8.191:8020</value> 
</property> 
<property> 
    <name>dfs.namenode.http-address.auto-ha.nn01</name> 
    <value>172.16.8.191:50070</value> 
</property> 
<property> 
    <name>dfs.namenode.rpc-address.auto-ha.nn02</name> 
    <value>172.16.8.192:8020</value> 
</property> 
<property> 
    <name>dfs.namenode.http-address.auto-ha.nn02</name> 
    <value>172.16.8.192:50070</value> 
</property> 
<property> 
    <name>dfs.namenode.shared.edits.dir</name> 
    <value>qjournal://172.16.8.191:8485;172.16.8.192:8485;172.16.8.193:8485/auto-ha</value> 
</property> 
<property> 
    <name>dfs.journalnode.edits.dir</name> 
    <value>/hdfs/journalnode</value> 
</property> 
<property> 
    <name>dfs.ha.fencing.methods</name> 
    <value>sshfence</value> 
</property> 
<property> 
    <name>dfs.ha.fencing.ssh.private-key-files</name> 
    <value>/home/vagrant/.ssh/id_rsa</value> 
</property> 
<property> 
    <name>dfs.ha.automatic-failover.enabled.auto-ha</name> 
    <value>true</value> 
</property> 
<property> 
    <name>ha.zookeeper.quorum</name> 
    <value>172.16.8.191:2181,172.16.8.192:2181,172.16.8.193:2181</value> 
</property> 
</configuration> 

core-site.xml

<configuration> 
<property> 
    <name>fs.default.name</name> 
    <value>hdfs://auto-ha</value> 
</property> 
</configuration> 

zoo.cfg

tickTime=2000 
dataDir=/opt/ZooData 
clientPort=2181 
initLimit=5 
syncLimit=2 
server.1=172.16.8.191:2888:3888 
server.2=172.16.8.192:2888:3888 
server.3=172.16.8.193:2888:3888 
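
Note that each server.N line in zoo.cfg must be matched by a myid file inside dataDir on that machine, containing only the number N. For example, on 172.16.8.191 (server.1) the file /opt/ZooData/myid would contain just:

    1

If a myid file is missing or wrong, that node cannot join the quorum, which would explain QuorumPeerMain only sometimes showing up in jps.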

Answer


In the file hdfs-site.xml:

*I changed all the IP addresses to the machines' hostnames. For example: 172.16.8.191 -> machine_Name1
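
For example, the NameNode address properties in hdfs-site.xml would then read as follows (machine_Name1 being the placeholder hostname used above):

    <property> 
        <name>dfs.namenode.rpc-address.auto-ha.nn01</name> 
        <value>machine_Name1:8020</value> 
    </property> 
    <property> 
        <name>dfs.namenode.http-address.auto-ha.nn01</name> 
        <value>machine_Name1:50070</value> 
    </property> 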

Then, in the file /etc/hosts:

*I added all the IP addresses with their respective hostnames.
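
A sketch of the resulting /etc/hosts entries, assuming the placeholder hostnames machine_Name1 through machine_Name3 for the three machines from the configuration above:

    172.16.8.191    machine_Name1 
    172.16.8.192    machine_Name2 
    172.16.8.193    machine_Name3 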

And now it works correctly.
