2015-03-19

I have two Linux machines: a master machine (192.168.8.174) and a slave machine (192.168.8.173). I have successfully installed and configured fully distributed Hadoop 2.6.0, and Hadoop itself works perfectly. I then installed and configured HBase 1.0. When I start HBase I get the following error: Can't get master address from ZooKeeper; znode data == null

master machine     slave machine 
HMaster            HQuorumPeer 
HQuorumPeer        HRegionServer 
HRegionServer 

That is the jps output. But when I create a table (for example: create 'test','cf') it fails, and the HBase log file shows the errors below:

2015-03-19 16:46:04,930 INFO [master/master/192.168.8.174:16020-SendThread(192.168.8.173:2181)] zookeeper.ClientCnxn: Opening socket connection to server 192.168.8.173/192.168.8.173:2181. Will not attempt to authenticate using SASL (unknown error) 
2015-03-19 16:46:04,952 INFO [master/master/192.168.8.174:16020-SendThread(192.168.8.173:2181)] zookeeper.ClientCnxn: Socket connection established to 192.168.8.173/192.168.8.173:2181, initiating session 
2015-03-19 16:46:04,963 INFO [master/master/192.168.8.174:16020-SendThread(192.168.8.173:2181)] zookeeper.ClientCnxn: Session establishment complete on server 192.168.8.173/192.168.8.173:2181, sessionid = 0x14c3135d05c0001, negotiated timeout = 90000 
2015-03-19 16:46:04,964 INFO [master/master/192.168.8.174:16020] client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null 
2015-03-19 16:46:04,992 FATAL [master:16020.activeMasterManager] master.HMaster: Failed to become active master 
java.net.ConnectException: Call From master/192.168.8.174 to master:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783) 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1415) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1364) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) 
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279) 
    at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source) 
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970) 
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:447) 
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:894) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:416) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:145) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:125) 
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:591) 
    at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:165) 
    at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1425) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: java.net.ConnectException: Connection refused 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493) 
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606) 
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700) 
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367) 
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1382) 
    ... 29 more 
2015-03-19 16:46:05,002 FATAL [master:16020.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown. 
java.net.ConnectException: Call From master/192.168.8.174 to master:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783) 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1415) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1364) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) 
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279) 
    at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source) 
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970) 
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:447) 
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:894) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:416) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:145) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:125) 
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:591) 
    at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:165) 
    at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1425) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: java.net.ConnectException: Connection refused 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493) 
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606) 
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700) 
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367) 
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1382) 
    ... 29 more 
2015-03-19 16:46:05,002 INFO [master:16020.activeMasterManager] regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown. 
2015-03-19 16:46:08,046 INFO [master/master/192.168.8.174:16020] ipc.RpcServer: Stopping server on 16020 
2015-03-19 16:46:08,046 INFO [RpcServer.listener,port=16020] ipc.RpcServer: RpcServer.listener,port=16020: stopping 
2015-03-19 16:46:08,047 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped 
2015-03-19 16:46:08,047 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping 
2015-03-19 16:46:08,049 INFO [master/master/192.168.8.174:16020] regionserver.HRegionServer: Stopping infoServer 
2015-03-19 16:46:08,089 INFO [master/master/192.168.8.174:16020] mortbay.log: Stopped [email protected]:16030 
2015-03-19 16:46:08,191 INFO [master/master/192.168.8.174:16020] regionserver.HRegionServer: stopping server master,16020,1426754759593 
2015-03-19 16:46:08,191 INFO [master/master/192.168.8.174:16020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x14c3135d05c0001 
2015-03-19 16:46:08,241 INFO [master/master/192.168.8.174:16020-EventThread] zookeeper.ClientCnxn: EventThread shut down 
2015-03-19 16:46:08,242 INFO [master/master/192.168.8.174:16020] zookeeper.ZooKeeper: Session: 0x14c3135d05c0001 closed 
2015-03-19 16:46:08,244 INFO [master/master/192.168.8.174:16020] regionserver.HRegionServer: stopping server master,16020,1426754759593; all regions closed. 

So I do not understand what the problem is.

My configuration files are as follows:

Master - hbase-site.xml

<configuration> 
    <property> 
     <name>hbase.rootdir</name> 
      <value>hdfs://192.168.8.174:54310/hbase</value> 
    </property> 
    <property> 
      <name>hbase.cluster.distributed</name> 
      <value>true</value> 
    </property> 
    <property> 
     <name>hbase.zookeeper.property.dataDir</name> 
     <value>hdfs://192.168.8.174:9002/zookeeper</value> 
    </property> 
    <property> 
     <name>hbase.zookeeper.quorum</name> 
     <value>192.168.8.174,192.168.8.173</value> 
    </property> 
    <property> 
     <name>hbase.zookeeper.property.clientPort</name> 
     <value>2181</value> 
    </property> 
    </configuration> 

Slave - hbase-site.xml

<configuration> 
     <property> 
      <name>hbase.rootdir</name> 
      <value>hdfs://192.168.8.174:54310/hbase</value> 
     </property> 
     <property> 
       <name>hbase.cluster.distributed</name> 
       <value>true</value> 
     </property> 

     </configuration> 

And I have set HBASE_MANAGES_ZK to true in hbase-env.sh.

Answers

0

From the log messages, it looks like you may have a name-resolution problem.

I would make sure your IP addresses resolve to the same hostnames in both the forward and reverse directions; this is a common problem with HBase. In particular, I would check your /etc/hosts file and make sure the name master is not associated with the IP address 192.168.8.174. If it is, then you need to use the correct name instead of the IP address in your configuration. Also make sure the name mappings are identical on all machines in the cluster. There are tools that will check this for you, for example:

https://github.com/sujee/hadoop-dns-checker
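The kind of consistency check that tool performs can be sketched in a few lines of Python (a hypothetical helper, not the tool's actual code; the hosts entries below are made-up examples):

```python
# Hypothetical sketch of a hosts-file sanity check: flag names mapped to a
# loopback address and names mapped to more than one IP, both of which can
# confuse HBase.

def check_hosts(hosts_text):
    """Return a list of warnings for an /etc/hosts-style text."""
    mapping = {}  # hostname -> set of IPs it is mapped to
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            mapping.setdefault(name, set()).add(ip)

    warnings = []
    for name, ips in mapping.items():
        loopbacks = {ip for ip in ips if ip.startswith("127.")}
        if loopbacks and name != "localhost":
            warnings.append("%s maps to loopback %s" % (name, sorted(loopbacks)))
        if len(ips) > 1:
            warnings.append("%s maps to multiple IPs %s" % (name, sorted(ips)))
    return warnings

# A problematic hosts file: 'master' appears on both a loopback and a real IP.
bad = "127.0.1.1 master\n192.168.8.174 master\n192.168.8.173 slave1"
print(check_hosts(bad))
```

Run the same check against the hosts file on every node; the mappings should be clean and identical everywhere.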

Update: it also looks like you may have hbase.zookeeper.property.dataDir set incorrectly. You currently have it pointing at an HDFS URL, but I believe it should be a local directory path. See here for an example.
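For reference, a local-path version of that property would look something like this (the directory shown is just an example; pick any local directory the HBase user can write to):

```xml
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/usr/local/hbase/zookeeper</value>
</property>
```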

I would also verify that you can even talk to ZooKeeper from the command line using hbase zkcli.

+0

Hi, thanks for your reply. The '/etc/hosts' files on my master and slave are configured as '192.168.8.174 master' and '192.168.8.173 slave1'. Please help me solve this problem. Thanks in advance. – 2015-03-20 01:01:19

+0

Do you also have an entry mapping 'master' to '127.0.0.1' or '127.0.1.1'? Those can confuse HBase. For a large cluster, '/etc/hosts' is not a good solution; I would recommend setting up proper DNS entries. – b4hand 2015-03-20 01:50:31

+0

Sorry, I don't understand what you mean by the command above, but the hosts files on both of my machines have no 'localhost' or '127.0.0.1' entries; they contain only the '192.168.8.174 master' and '192.168.8.173 slave1' lines I mentioned. Please give a detailed solution. – 2015-03-20 06:05:08

4

I got ERROR: Can't get master address from ZooKeeper; znode data == null once. In my case it was the configured value of zookeeper.znode.parent. The value on the server was /hbase, but my client could only connect when it was set to /hbase-unsecure. The value had to be edited on the server so that clients could connect to it.
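If you hit a similar mismatch, the client-side override lives in hbase-site.xml and looks like this (the /hbase-unsecure value is specific to my setup; use whatever parent znode your server actually publishes):

```xml
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase-unsecure</value>
</property>
```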
