2015-04-27

Unable to form an HA Hadoop cluster over a multihomed network, with machines on the following IPs:

Active NN (multi-node HA cluster) - 172.16.105.---

Standby NN - 172.16.105.---

Datanode DN - 192.168.---

With the above setup I can't create the cluster; while formatting the namenode it throws this exception:

15/04/27 16:15:18 INFO namenode.NNConf: Maximum size of an xattr: 16384 
15/04/27 16:15:18 FATAL namenode.NameNode: Exception in namenode join 
java.lang.IllegalArgumentException: Unable to construct journal, qjournal://ActiveNamnode:8485;StandbyNamenod:8485;Datanode:8485/mycluster 
     at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1555) 
     at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:267) 
     at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:233) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:920) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354) 
     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473) 
Caused by: java.lang.reflect.InvocationTargetException 
     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
     at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
     at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1553) 
     ... 5 more 
Caused by: java.lang.NullPointerException 
     at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.getName(IPCLoggerChannelMetrics.java:107) 
     at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.create(IPCLoggerChannelMetrics.java:91) 
     at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.<init>(IPCLoggerChannel.java:166) 
     at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$1.createLogger(IPCLoggerChannel.java:146) 
     at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:367) 
     at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:149) 
     at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:116) 
     at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:105) 
     ... 10 more 
15/04/27 16:15:18 INFO util.ExitUtil: Exiting with status 1 
15/04/27 16:15:18 INFO namenode.NameNode: SHUTDOWN_MSG: 

core-site.xml

<configuration> 
<property> 
<name>ha.zookeeper.quorum</name> 
<value>activenamenode:2181,standbynamenode:2181,slave:2181</value> 
</property> 
<property> 
<name>fs.defaultFS</name> 
<value>hdfs://myccluster</value> 
</property> 
</configuration> 

hdfs-site.xml

<configuration> 
<property> 
<name>dfs.datanode.data.dir</name> 
<value>file:/C:/sample/myccluster/meta/Metadata/data/dfs/datanode</value> 
</property> 
<property> 
<name>dfs.namenode.name.dir</name> 
<value>file:/sample/myccluster/meta/Metadata/data/dfs/namenode</value> 
</property> 
<property> 
<name>dfs.nameservices</name> 
<value>myccluster</value> 
<final>true</final> 
</property> 
<property> 
<name>dfs.ha.namenodes.myccluster</name> 
<value>nn1,nn2</value> 
</property> 
<property> 
<name>dfs.ha.namenode.id</name> 
<value>nn1</value> 
</property> 
<property> 
<name>dfs.namenode.rpc-address.myccluster.nn1</name> 
<value>0.0.0.0:9000</value> 
</property> 
<property> 
<name>dfs.namenode.rpc-address.myccluster.nn2</name> 
<value>standbynamenode:9000</value> 
</property> 
<property> 
<name>dfs.namenode.http-address.myccluster.nn1</name> 
<value>0.0.0.0:50070</value> 
</property> 
<property> 
<name>dfs.namenode.http-address.myccluster.nn2</name> 
<value>standbynamenode:50070</value> 
</property> 
<property> 
<name>dfs.namenode.shared.edits.dir</name> 
<value>qjournal://activenamenode:8485;standbynamenode:8485;slave:8485/myccluster</value> 
</property> 
<property> 
<name>dfs.journalnode.edits.dir</name> 
<value>C:\sample\myccluster\meta\Metadata\data\dfs\journal\NamenodeLogs3</value> 
</property> 
<property> 
<name>dfs.ha.automatic-failover.enabled.myccluster</name> 
<value>true</value> 
</property> 
<property> 
<name>dfs.client.failover.proxy.provider.myccluster</name> 
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value> 
</property> 
<property> 
<name>dfs.replication</name> 
<value>3</value> 
</property> 
<property> 
<name>dfs.permissions</name> 
<value>false</value> 
</property> 
<property> 
<name>dfs.webhdfs.enabled</name> 
<value>true</value> 
</property> 
<property> 
<name>dfs.ha.fencing.methods</name> 
<value>shell(C:\sample\myccluster\meta\SDK\hadoop\bin\fencing.bat)</value> 
</property> 
<property> 
<name>dfs.hosts.exclude</name> 
<value>/sample/myccluster/meta/Metadata/exclude</value> 
</property> 
</configuration> 
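Since the NameNodes and the DataNode sit on different subnets, one change commonly suggested for multihomed Hadoop deployments (a sketch only, not verified against this cluster; requires Hadoop 2.4+) is to keep the advertised rpc/http addresses as resolvable hostnames and instead use the bind-host properties to make the daemons listen on all interfaces:

```xml
<!-- Sketch for hdfs-site.xml: bind the NameNode's RPC and HTTP servers to
     all interfaces without changing the hostnames advertised to clients.
     These are standard HDFS keys for multihomed networks (Hadoop 2.4+). -->
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.servicerpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>
```

With this approach, `dfs.namenode.rpc-address.myccluster.nn1` would point at a hostname the other machines can resolve rather than `0.0.0.0`, since clients and the standby use the advertised address to reach the active NameNode.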

mapred-site.xml

<configuration> 
    <property> 
    <name>mapreduce.framework.name</name> 
    <value>yarn</value> 
    </property> 
    <property> 
    <name>mapreduce.jobhistory.address</name> 
    <value>0.0.0.0:10020</value> 
    </property> 
    <property> 
    <name>mapreduce.jobhistory.webapp.address</name> 
    <value>0.0.0.0:19888</value> 
    </property> 
    </configuration> 

Can you post your 'core-site.xml', 'hdfs-site.xml' and 'mapred-site.xml'? –


Hi, I've updated the config files. – karthik


It seems your machine has more than one IP address, so the Datanode host can't be reached from the remote machines. Check [this](http://stackoverflow.com/a/8083542/3496666) – Kumar
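If the DataNode is only reachable by name and not by the first IP it reports, one hedged tweak for multihomed hosts (an assumption that this applies here; both keys are standard HDFS properties) is to force hostname-based connections in hdfs-site.xml:

```xml
<!-- Sketch: have clients and DataNodes connect via hostnames instead of
     the first IP a multihomed host advertises. Standard HDFS keys. -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
</property>
```

This only helps if every host can resolve the others' hostnames (e.g. via `/etc/hosts` or DNS), which is also a prerequisite for the `qjournal://` URI in `dfs.namenode.shared.edits.dir`.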

Answer


Have you tried this? Providing the port number works fine. – Kumar


The error is an 'IllegalArgumentException'; I think it is caused by your core-site.xml. Try changing it and setting your namenode's name; ZooKeeper's port defaults to 2181 –


Thanks. I believe that is not the problem. The issue occurs because of multiple network interfaces. – Kumar