7

My local environment: OS X 10.9.2, HBase 0.98.0, Java 1.6. I get an error when I run the HBase shell.

conf/hbase-site.xml

<property> 
    <name>hbase.rootdir</name> 
    <!--<value>hdfs://127.0.0.1:9000/hbase</value> need to run dfs --> 
    <value>file:///Users/apple/Documents/tools/hbase-rootdir/hbase</value> 
</property> 

<property> 
     <name>hbase.zookeeper.property.dataDir</name> 
     <value>/Users/apple/Documents/tools/hbase-zookeeper/zookeeper</value> 
</property> 

conf/hbase-env.sh

export JAVA_HOME=$(/usr/libexec/java_home -d 64 -v 1.6) 
export HBASE_OPTS="-XX:+UseConcMarkSweepGC" 

And when I run

> list 

in the HBase shell, I get the following error:

2014-03-29 10:25:53.412 java[2434:1003] Unable to load realm info from SCDynamicStore 
2014-03-29 10:25:53,416 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
2014-03-29 10:26:14,470 ERROR [main] zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 4 attempts 
2014-03-29 10:26:14,471 WARN [main] zookeeper.ZKUtil: hconnection-0x5e15e68d, quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid) 
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid 
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) 
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) 
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041) 
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:199) 
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:479) 
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65) 
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:83) 
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:857) 
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:662) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513) 
    at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:414) 
    at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:393) 
    at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:274) 
    at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:183) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513) 
    at org.jruby.javasupport.JavaConstructor.newInstanceDirect(JavaConstructor.java:275) 
    at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:91) 
    at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:178) 
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322) 
    at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178) 
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:182) 
    at org.jruby.java.proxies.ConcreteJavaProxy$2.call(ConcreteJavaProxy.java:48) 
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322) 
    at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178) 
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:182) 
    at org.jruby.RubyClass.newInstance(RubyClass.java:829) 
     ... 
at Users.apple.Documents.tools.hbase_minus_0_dot_98_dot_0_minus_hadoop2.bin.hirb.block_2$RUBY$start(/Users/apple/Documents/tools/hbase-0.98.0-hadoop2/bin/hirb.rb:185) 
    at Users$apple$Documents$tools$hbase_minus_0_dot_98_dot_0_minus_hadoop2$bin$hirb$block_2$RUBY$start.call(Users$apple$Documents$tools$hbase_minus_0_dot_98_dot_0_minus_hadoop2$bin$hirb$block_2$RUBY$start:65535) 
    at org.jruby.runtime.CompiledBlock.yield(CompiledBlock.java:112) 
    at org.jruby.runtime.CompiledBlock.yield(CompiledBlock.java:95) 
    at org.jruby.runtime.Block.yield(Block.java:130) 
    at org.jruby.RubyContinuation.enter(RubyContinuation.java:106) 
    at org.jruby.RubyKernel.rbCatch(RubyKernel.java:1212) 
    at org.jruby.RubyKernel$s$1$0$rbCatch.call(RubyKernel$s$1$0$rbCatch.gen:65535) 
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322) 
    at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178) 
    at org.jruby.runtime.callsite.CachingCallSite.callIter(CachingCallSite.java:187) 
    at Users.apple.Documents.tools.hbase_minus_0_dot_98_dot_0_minus_hadoop2.bin.hirb.method__5$RUBY$start(/Users/apple/Documents/tools/hbase-0.98.0-hadoop2/bin/hirb.rb:184) 
    at Users$apple$Documents$tools$hbase_minus_0_dot_98_dot_0_minus_hadoop2$bin$hirb$method__5$RUBY$start.call(Users$apple$Documents$tools$hbase_minus_0_dot_98_dot_0_minus_hadoop2$bin$hirb$method__5$RUBY$start:65535) 
    at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:203) 
    at org.jruby.internal.runtime.methods.CompiledMethod.call(CompiledMethod.java:255) 
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292) 
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135) 
    at Users.apple.Documents.tools.hbase_minus_0_dot_98_dot_0_minus_hadoop2.bin.hirb.__file__(/Users/apple/Documents/tools/hbase-0.98.0-hadoop2/bin/hirb.rb:190) 
    at Users.apple.Documents.tools.hbase_minus_0_dot_98_dot_0_minus_hadoop2.bin.hirb.load(/Users/apple/Documents/tools/hbase-0.98.0-hadoop2/bin/hirb.rb) 
    at org.jruby.Ruby.runScript(Ruby.java:697) 
    at org.jruby.Ruby.runScript(Ruby.java:690) 
    at org.jruby.Ruby.runNormally(Ruby.java:597) 
    at org.jruby.Ruby.runFromMain(Ruby.java:446) 
    at org.jruby.Main.doRunFromMain(Main.java:369) 
    at org.jruby.Main.internalRun(Main.java:258) 
    at org.jruby.Main.run(Main.java:224) 
    at org.jruby.Main.run(Main.java:208) 
    at org.jruby.Main.main(Main.java:188) 
2014-03-29 10:28:21,137 ERROR [main] client.HConnectionManager$HConnectionImplementation: Can't get connection to ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase 

ERROR: KeeperErrorCode = ConnectionLoss for /hbase 
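For context, the ConnectionLoss error means the shell never managed to reach ZooKeeper at localhost:2181 (the quorum shown in the log); in standalone mode that ZooKeeper is only running once HBase itself has been started. A quick way to check whether anything is listening there (a minimal diagnostic, assuming the nc utility is available) is ZooKeeper's ruok command:

echo ruok | nc localhost 2181   # prints "imok" if ZooKeeper is up; no reply means nothing is listening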

And my /etc/hosts looks correct, as follows:

127.0.0.1 localhost 
255.255.255.255 broadcasthost 
::1    localhost 
fe80::1%lo0 localhost 
127.0.0.1 activate.adobe.com 
127.0.0.1 practivate.adobe.com 
127.0.0.1 ereg.adobe.com 
127.0.0.1 activate.wip3.adobe.com 
127.0.0.1 wip3.adobe.com 
127.0.0.1 3dns-3.adobe.com 
127.0.0.1 3dns-2.adobe.com 
127.0.0.1 adobe-dns.adobe.com 
127.0.0.1 adobe-dns-2.adobe.com 
127.0.0.1 adobe-dns-3.adobe.com 
127.0.0.1 ereg.wip3.adobe.com 
127.0.0.1 activate-sea.adobe.com 
127.0.0.1 wwis-dubc1-vip60.adobe.com 
127.0.0.1 activate-sjc0.adobe.com 
127.0.0.1 adobe.activate.com 
127.0.0.1 209.34.83.73:443 
127.0.0.1 209.34.83.73:43 
127.0.0.1 209.34.83.73 
127.0.0.1 209.34.83.67:443 
127.0.0.1 209.34.83.67:43 
127.0.0.1 209.34.83.67 
127.0.0.1 ood.opsource.net 
127.0.0.1 CRL.VERISIGN.NET 
127.0.0.1 199.7.52.190:80 
127.0.0.1 199.7.52.190 
127.0.0.1 adobeereg.com 
127.0.0.1 OCSP.SPO1.VERISIGN.COM 
127.0.0.1 199.7.54.72:80 
127.0.0.1 199.7.54.72 
+1

Are you sure the directories you provided are writable? – Pavan

+0

Your HMaster daemon is not running. – Pavan

+0

@Pavan HMaster? What is that.. –

Answers

0

I ran into this problem too.

If you are trying to run it standalone, use only the HBase libraries: remove your separate Hadoop jars and use the Hadoop jars that ship inside HBase.
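As a sketch of what that can look like (the jar name and main class below are hypothetical placeholders): the bin/hbase classpath command prints the classpath HBase itself uses, assembled from the jars bundled under its lib/ directory, so a standalone client can reuse those instead of a separate Hadoop install:

cd /path/to/hbase-0.98.0-hadoop2   # your HBase install directory
./bin/hbase classpath              # prints the classpath built from HBase's bundled lib/ jars
java -cp "$(./bin/hbase classpath):myapp.jar" com.example.MyClient   # placeholders for your own jar and class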

+2

How do I do that? Just delete the whole hadoop folder? –

+0

Need help, SOS... –

+0

I am using the libraries from this path: hbase-0.94.16/lib/* - and then running hbase in the terminal (I use CentOS). – Mark

3

As your hbase-site.xml shows, you previously tried to run HBase on HDFS and are now trying to run it on the local filesystem.
Solution: first run hadoop.x.x.x/bin/start-dfs.sh, then run hbase.x.x.x/bin/start-hbase.sh. It will then run on the local filesystem as expected.
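Concretely, that sequence looks something like this (a sketch with illustrative install paths; note that in Hadoop 2.x start-dfs.sh lives under sbin/ rather than bin/):

~/hadoop-2.2.0/sbin/start-dfs.sh            # bring up HDFS first
~/hbase-0.98.0-hadoop2/bin/start-hbase.sh   # then start HBase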

+0

Still getting the same error... Actually, I want to run HBase in standalone mode and connect to it with the Java API, without Hadoop. Do you know how I can do that? Thank you very much. –

+0

Can you post the output of hbase-...-local.log from the logs directory? It will be inside your hbase.x.x.x directory. –

+0

Here is my log file: https://docs.google.com/file/d/0BxtBre5A8J61SWRsclE2dnQzdVk/edit –

11

I ran into the same problem and struggled with it for a long time. Following the instructions here, you should run ./bin/start-hbase.sh before running the ./bin/hbase shell command. That solved my problem.
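Put together, the sequence is (a minimal sketch, assuming HBase is unpacked at the path from the question; jps is the JDK's process lister):

cd /Users/apple/Documents/tools/hbase-0.98.0-hadoop2
./bin/start-hbase.sh    # in standalone mode this also starts the embedded ZooKeeper
jps                     # an HMaster process should now be listed
./bin/hbase shell       # the shell can now connect, and list should work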

-1

I also faced this problem, and eventually came to this conclusion:

When I typed start-hbase.sh directly into the HDFS shell, it reported the error "no command".

Then I navigated to the hbase bin folder with cd /usr/local/hbase/bin and issued the command ./start-hbase.sh. It started working (the ZooKeeper and master services were found running).

Likewise, for the HBase shell, you first need to enter the HBase bin folder and then type ./hbase shell.

Hope this works :)
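An alternative to cd-ing into the bin folder every time is putting it on your PATH (a sketch, assuming the same /usr/local/hbase install location):

export PATH="$PATH:/usr/local/hbase/bin"   # add this line to ~/.bashrc to make it permanent
start-hbase.sh                             # now resolves from any directory
hbase shell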

0

I faced this problem when I had not added my hostname to the /etc/hosts file.

For example, my hostname is node1, so I added

127.0.0.1 node1

to /etc/hosts.
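A quick way to verify the hostname and append the mapping (a sketch; editing /etc/hosts needs sudo):

hostname                                               # prints the machine's hostname, e.g. node1
echo "127.0.0.1 $(hostname)" | sudo tee -a /etc/hosts  # append the loopback mapping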