2014-12-24

I am running a multi-node cluster with Hadoop 1.0.3 (on both nodes), HBase 0.94.2 (on both nodes), and ZooKeeper 3.4.6 (master only). I have an HBase connection problem and cannot create tables.

Master: 192.168.0.1
Slave: 192.168.0.2

HBase is not running properly: I run into errors when I try to create a table, and of course I cannot reach the HBase status UI at http://master:60010 either. Please help!

Here are all my configuration files:

(Hadoop conf) core-site.xml (same configuration on master and slave):

<configuration> 
<property> 
    <name>fs.default.name</name> 
    <value>hdfs://localhost:54310</value> 
</property> 
</configuration> 

(HBase conf) hbase-site.xml:

<configuration> 

<property> 
     <name>hbase.rootdir</name> 
     <value>hdfs://master:54310/hbase</value> 
</property> 

<property> 
     <name>hbase.cluster.distributed</name> 
     <value>true</value> 
</property> 

<property> 
     <name>hbase.zookeeper.quorum</name> 
     <value>master,slave</value> 
</property> 

<property> 
     <name>hbase.zookeeper.property.clientPort</name> 
     <value>2222</value> 
</property> 

<property> 
     <name>hbase.zookeeper.property.dataDir</name> 
     <value>/usr/local/hadoop/zookeeper</value> 
</property> 

</configuration> 

/etc/hosts:

192.168.0.1 master 
192.168.0.2 slave 

regionservers:

master 
slave 

Here is the log file hbase-hduser-regionserver-master.log:

2014-12-24 02:12:13,190 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect 
java.net.NoRouteToHostException: No route to host 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) 
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286) 
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035) 
2014-12-24 02:12:14,002 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server master/192.168.0.1:2181 
2014-12-24 02:12:14,003 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration. 
2014-12-24 02:12:14,004 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to master/192.168.0.1:2181, initiating session 
2014-12-24 02:12:14,005 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect 
2014-12-24 02:12:14,675 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020 
2014-12-24 02:12:14,676 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server master,60020,1419415915643: Initialization of RS failed. Hence aborting RS. 
java.io.IOException: Received the shutdown message while waiting. 
    at org.apache.hadoop.hbase.regionserver.HRegionServer.blockAndCheckIfStopped(HRegionServer.java:623) 
    at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:598) 
    at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:560) 
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:669) 
    at java.lang.Thread.run(Thread.java:745) 
2014-12-24 02:12:14,676 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [] 
2014-12-24 02:12:14,676 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Initialization of RS failed. Hence aborting RS. 
2014-12-24 02:12:14,683 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Registered RegionServer MXBean 
2014-12-24 02:12:14,689 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=Thread[Thread-5,5,main] 
2014-12-24 02:12:14,689 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown hook 
2014-12-24 02:12:14,690 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs shutdown hook thread. 
2014-12-24 02:12:14,691 INFO org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook finished. 

Answer


I think you should use master, not localhost, in core-site.xml.

Also add the slave node's hostname to the slaves file in the Hadoop conf directory.
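As a sketch, the slaves file (in the same Hadoop conf directory as core-site.xml) would then contain one hostname per line; listing master as well is only needed if the master also runs a DataNode, which the regionservers file above suggests is the intent here:

```
master
slave
```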

On both the master and slave nodes, core-site.xml should then look like this:

<configuration> 
<property> 
    <name>fs.default.name</name> 
    <value>hdfs://master:54310</value> 
</property> 
</configuration> 
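After restarting HDFS with this setting, reachability of the NameNode can be checked from either host; this is a sketch assuming the Hadoop 1.x bin directory is on the PATH:

```shell
# Should list the HDFS root from both master and slave without errors
hadoop fs -ls hdfs://master:54310/
```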

And if ZooKeeper runs on both the master and the slave, both hostnames should be present in the regionservers file on each node.


Should I run namenode -format after making these changes?


You can format the namenode if you want, but you will lose all the data stored in HDFS.
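If you do decide to reformat, the shutdown/format/restart order matters, since HBase stores its data in HDFS. A rough sketch with the standard Hadoop 1.x and HBase scripts (assuming their bin directories are on the PATH):

```shell
# Stop HBase first, then Hadoop (HBase depends on HDFS being up)
stop-hbase.sh
stop-all.sh

# WARNING: this erases all HDFS data, including the /hbase root directory
hadoop namenode -format

# Bring HDFS/MapReduce back up, then HBase
start-all.sh
start-hbase.sh
```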