2014-06-17 60 views

I am trying to set up a two-node HBase cluster on a pair of Linux servers. All the files have been transferred over, and there is a working Hadoop cluster running on the servers, but HBase still refuses to work fully. ZooKeeper and the region servers start up normally, and I can even use the shell, but the master refuses to start. The reason given in the master log is:

2014-06-17 14:56:43,678 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
     at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2785)
     at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:184)
     at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
     at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
     at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2799)
Caused by: java.net.UnknownHostException: hadoop-namenode
     at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:418)
     at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:231)
     at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:139)
     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
     at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:883)
     at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:459)
     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
     at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
     at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2780)
     ... 5 more

My configuration in hbase-site.xml is:

<configuration>
  <property>
    <name>hbase.master</name>
    <value>master:60000</value>
  </property>

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop-namenode:9000/hbase</value>
  </property>

  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2222</value>
  </property>

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master</value>
  </property>
</configuration>
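(For comparison: the URI in `hbase.rootdir` has to name the same NameNode that HDFS itself is configured with. On a Hadoop 2.x cluster, the corresponding `fs.defaultFS` entry in core-site.xml would presumably look like the sketch below, with the same hostname and port; this fragment is an assumption for illustration, not taken from the question.)

```xml
<!-- core-site.xml on every node; "hadoop-namenode:9000" mirrors hbase.rootdir above -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-namenode:9000</value>
  </property>
</configuration>
```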

Is this a problem with my configuration, or is it some kind of network issue?

Answer


The HBase master cannot resolve the IP address of "hadoop-namenode". Have you added "hadoop-namenode" to the /etc/hosts file on the master host? You can easily check by running `ping hadoop-namenode` from the HBase master host.
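A quick way to run the check described above from the master host is sketched below. The hostname is the one from the question, and the IP address in the comment is purely hypothetical; substitute your own values.

```shell
# Check whether the hostname used in hbase.rootdir resolves on this machine.
HOST=hadoop-namenode   # hostname from the question; substitute your own

if getent hosts "$HOST" > /dev/null; then
    echo "$HOST resolves"
else
    echo "$HOST does not resolve"
    # If it does not resolve, add a line like the following to /etc/hosts
    # on every node (192.168.1.10 is a hypothetical address):
    #   192.168.1.10   hadoop-namenode
fi
```

`getent hosts` consults the same resolver path (nsswitch: /etc/hosts, then DNS) that Java's `InetAddress` lookup ultimately depends on, so it is a closer match to what the master does than `ping` alone.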


I'm annoyed with myself now for not spotting this on my own. Thanks a lot. I just changed that value, :9000, and everything seems to work now. – chenab
