
I want to run HDFS with 3 NameNode machines, 2 DataNode machines, and 1 client machine. I get an error when executing the hdfs zkfc command.

When I execute `hdfs zkfc –formatZK` I get the fatal error below. I don't know why: I have set up this cluster before and it worked, but now it does not.

16/01/21 15:05:14 INFO zookeeper.ZooKeeper: Session: 0x25264b6c3d90000 closed 
16/01/21 15:05:14 WARN ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x25264b6c3d90000 
16/01/21 15:05:14 INFO zookeeper.ClientCnxn: EventThread shut down 
16/01/21 15:05:14 FATAL tools.DFSZKFailoverController: Got a fatal error, exiting now 
org.apache.hadoop.HadoopIllegalArgumentException: Bad argument: –formatZK 
    at org.apache.hadoop.ha.ZKFailoverController.badArg(ZKFailoverController.java:251) 
    at org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:214) 
    at org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:61) 
    at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:172) 
    at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:168) 
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415) 
    at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:168) 
    at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:181) 

I can run the following commands without any errors:

namenode1$ `hadoop-daemon.sh start journalnode` 
namenode2$ `hadoop-daemon.sh start journalnode` 
namenode3$ `hadoop-daemon.sh start journalnode` 

namenode1$ `hadoop namenode -format` 
namenode1$ `hadoop-daemon.sh start namenode` 

namenode2$ `hadoop namenode -bootstrapStandby` 
namenode2$ `hadoop-daemon.sh start namenode` 

namenode1$ `hadoop-daemon.sh start zkfc` 
namenode2$ `hadoop-daemon.sh start zkfc` 
namenode3$ `hadoop-daemon.sh start zkfc` 

But when I open the web pages, namenode1:50070 looks like standby and namenode2:50070 does as well. I tried `hdfs haadmin -getServiceState` with nn01 and nn02, but both report standby.

My configuration is as follows:

/etc/hosts

127.0.0.1 localhost 
172.16.8.191 name1 
172.16.8.192 name2 
172.16.8.193 name3 
172.16.8.202 data1 
172.16.8.203 data2 
172.16.8.204 client1 

zoo.cfg

tickTime=2000
dataDir=/opt/ZooData
clientPort=2181
initLimit=5
syncLimit=2
server.1=172.16.8.191:2888:3888
server.2=172.16.8.192:2888:3888
server.3=172.16.8.193:2888:3888
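One thing worth double-checking with a multi-server zoo.cfg like this: each ZooKeeper host also needs a `myid` file inside dataDir whose number matches its own `server.N` line, or the ensemble will not form. A minimal sketch, assuming the dataDir above (the `MYID` value must differ per host):

```shell
# Create the myid file ZooKeeper requires in dataDir (one per host).
# server.1=172.16.8.191 -> myid contains 1 on name1, and so on.
DATADIR="${DATADIR:-/opt/ZooData}"   # dataDir from zoo.cfg
MYID="${MYID:-1}"                    # 1 on name1, 2 on name2, 3 on name3
mkdir -p "$DATADIR"
printf '%s\n' "$MYID" > "$DATADIR/myid"
echo "wrote myid=$(cat "$DATADIR/myid") to $DATADIR/myid"
```

Run it once on each of name1, name2, and name3 with the matching id.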

core-site.xml

<?xml version="1.0" encoding="UTF-8"?> 
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 

<configuration> 
<property> 
    <name>fs.default.name</name> 
    <value>hdfs://auto-ha</value> 
</property> 
</configuration> 

hdfs-site.xml

<configuration> 
    <property> 
     <name>dfs.replication</name> 
     <value>2</value> 
    </property> 
    <property> 
     <name>dfs.name.dir</name> 
     <value>file:///hdfs/name</value> 
    </property> 
    <property> 
     <name>dfs.data.dir</name> 
     <value>file:///hdfs/data</value> 
    </property> 
    <property> 
     <name>dfs.permissions</name> 
     <value>false</value> 
    </property> 
    <property> 
     <name>dfs.nameservices</name> 
     <value>auto-ha</value> 
    </property> 
    <property> 
     <name>dfs.ha.namenodes.auto-ha</name> 
     <value>nn01,nn02</value> 
    </property> 
    <property> 
     <name>dfs.namenode.rpc-address.auto-ha.nn01</name> 
     <value>name1:8020</value> 
    </property> 
    <property> 
     <name>dfs.namenode.http-address.auto-ha.nn01</name> 
     <value>name1:50070</value> 
    </property> 
    <property> 
     <name>dfs.namenode.rpc-address.auto-ha.nn02</name> 
     <value>name2:8020</value> 
    </property> 
    <property> 
     <name>dfs.namenode.http-address.auto-ha.nn02</name> 
     <value>name2:50070</value> 
    </property> 
    <property> 
     <name>dfs.namenode.shared.edits.dir</name> 
     <value>qjournal://name1:8485;name2:8485;name3:8485/auto-ha</value> 
    </property> 
    <property> 
     <name>dfs.journalnode.edits.dir</name> 
     <value>/hdfs/journalnode</value> 
    </property> 
    <property> 
     <name>dfs.ha.fencing.methods</name> 
     <value>sshfence</value> 
    </property> 
    <property> 
     <name>dfs.ha.fencing.ssh.private-key-files</name> 
     <value>/home/vagrant/.ssh/id_rsa</value> 
    </property> 
    <property> 
     <name>dfs.ha.automatic-failover.enabled.auto-ha</name> 
     <value>true</value> 
    </property> 
    <property> 
     <name>ha.zookeeper.quorum</name> 
     <value>name1:2181,name2:2181,name3:2181</value> 
    </property> 
</configuration> 
Is the dfs.ha.automatic-failover.enabled property actually set to true? –

Hello, I do have this property in hdfs-site.xml. Do you mean to ask whether it is enabled? Thanks @ravindra –

It should be enabled on the NameNode. See the step-by-step process at https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html –

Answers

When I copied the command "hdfs zkfc -formatZK" from Microsoft Word, the dash was not the one the terminal expects: Word autocorrects the ASCII hyphen (-) into an en dash (–), which hdfs rejects as a bad argument.

Word command: hdfs zkfc –formatZK 

Real command: hdfs zkfc -formatZK 
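A quick way to catch this kind of paste problem is to scan the command for bytes outside printable ASCII before running it. A small sketch (the `pasted` string below deliberately contains the en dash U+2013, as in the question):

```shell
# Detect non-ASCII characters (e.g. a Word en dash) in a pasted command.
pasted='hdfs zkfc –formatZK'   # contains U+2013, not the ASCII hyphen
if printf '%s' "$pasted" | LC_ALL=C grep -q '[^ -~]'; then
  echo "warning: non-ASCII character in command"
fi
printf '%s' "$pasted" | od -An -c | head -n 2   # inspect the raw bytes
```

With the en dash present this prints the warning; retyping the hyphen by hand makes it pass silently.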

In zoo.cfg your ZooKeeper clientPort is 2181,

but in hdfs-site.xml you gave port 3000 (try changing it to 2181):

<property> 
    <name>ha.zookeeper.quorum</name> 
    <value>172.16.8.191:3000,172.16.8.192:3000</value> 
</property> 
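As a sanity check, the ports listed in ha.zookeeper.quorum can be compared against zoo.cfg's clientPort mechanically. A minimal sketch using the mismatched values from this answer (the `quorum` string is copied from the snippet above):

```shell
# Verify every host:port in ha.zookeeper.quorum uses the zoo.cfg clientPort.
client_port=2181                               # clientPort from zoo.cfg
quorum='172.16.8.191:3000,172.16.8.192:3000'   # ha.zookeeper.quorum value
ok=1
for hp in $(printf '%s' "$quorum" | tr ',' ' '); do
  port=${hp##*:}                               # strip everything up to the last colon
  if [ "$port" != "$client_port" ]; then
    echo "mismatch: $hp (expected port $client_port)"
    ok=0
  fi
done
if [ "$ok" -eq 1 ]; then
  echo "all quorum ports match clientPort"
fi
```

With the values above this flags both quorum entries; after changing them to 2181 it prints the success message instead.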
Sorry @Rajendra, I had not updated the configuration in the question; it is updated now. –