I am using storm-0.10 to put data into hbase-1.0.1. Storm uses guava-12.0 (while hbase uses guava-18.0), and both jars end up on the classpath, which makes my job fail: guava.jar is duplicated on the classpath.
How can I make sure Storm and HBase each use the correct version of the jar?
Here is my pom.xml:
<dependencies>
    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>1.0.0-cdh5.4.5</version>
        <exclusions>
            <exclusion>
                <groupId>com.google.guava</groupId>
                <artifactId>guava</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.3.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.3.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.3.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-core</artifactId>
        <version>0.10.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-kafka</artifactId>
        <version>0.10.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.8.2.1</version>
        <exclusions>
            <exclusion>
                <groupId>org.apache.zookeeper</groupId>
                <artifactId>zookeeper</artifactId>
            </exclusion>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.json</groupId>
        <artifactId>org.json</artifactId>
        <version>2.0</version>
    </dependency>
</dependencies>
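A commonly suggested fix for this class of conflict (not from the original post; the configuration below is only a sketch) is to build a shaded topology jar with the maven-shade-plugin and relocate Guava. The hbase-client's Guava calls then resolve to a copy bundled under a renamed package, so they cannot collide with whatever guava jar Storm puts on the worker classpath. Note that for this to help, the guava exclusion on hbase-client above would have to be dropped so the version HBase needs actually gets bundled:

```xml
<!-- Sketch only: goes in <build><plugins> of the topology pom. -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.1</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <relocations>
                    <!-- Rewrite com.google.common.* references in the
                         topology jar to the shaded package, so the bundled
                         Guava is used regardless of what storm/lib holds. -->
                    <relocation>
                        <pattern>com.google.common</pattern>
                        <shadedPattern>shaded.com.google.common</shadedPattern>
                    </relocation>
                </relocations>
            </configuration>
        </execution>
    </executions>
</plugin>
```

You can check which Guava version Maven actually resolves with `mvn dependency:tree -Dincludes=com.google.guava`.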
And the exception:
java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:434) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:60) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1122) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1109) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1261) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1125) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:369) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:320) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:206) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1513) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1107) ~[hbase-client-1.0.0-cdh5.6.0.jar:?]
at com.lujinhong.demo.storm.kinit.stormkinitdemo.HBaseHelper.put(HBaseHelper.java:182) ~[stormjar.jar:?]
at com.lujinhong.demo.storm.kinit.stormkinitdemo.HBaseHelper.put(HBaseHelper.java:175) ~[stormjar.jar:?]
at com.lujinhong.demo.storm.kinit.stormkinitdemo.PrepaidFunction.execute(PrepaidFunction.java:79) ~[stormjar.jar:?]
at storm.trident.planner.processor.EachProcessor.execute(EachProcessor.java:65) ~[storm-core-0.10.0.jar:0.10.0]
at storm.trident.planner.SubtopologyBolt$InitialReceiver.receive(SubtopologyBolt.java:206) ~[storm-core-0.10.0.jar:0.10.0]
at storm.trident.planner.SubtopologyBolt.execute(SubtopologyBolt.java:146) ~[storm-core-0.10.0.jar:0.10.0]
at storm.trident.topology.TridentBoltExecutor.execute(TridentBoltExecutor.java:370) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.daemon.executor$fn__5694$tuple_action_fn__5696.invoke(executor.clj:690) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.daemon.executor$mk_task_receiver$fn__5615.invoke(executor.clj:436) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.disruptor$clojure_handler$reify__5189.onEvent(disruptor.clj:58) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:132) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:106) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.daemon.executor$fn__5694$fn__5707$fn__5758.invoke(executor.clj:819) ~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.util$async_loop$fn__545.invoke(util.clj:479) [storm-core-0.10.0.jar:0.10.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_67]
I use both Storm and HBase in my topology, so guava-12.0 should be on the cp for HBase and guava-18.0 should be on the cp for Storm. Otherwise either Storm or HBase will not work. –
I think the more correct approach is to simply update HBase to the latest version (1.2.1). –
I worked around this by putting both guava-12.0.jar and guava-18.0.jar into storm/lib, but I don't think that is a good solution. Also, why don't these two jars conflict on the classpath? –
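On the question of why two guava jars can sit on the same classpath without an immediate clash: the JVM application classloader loads whichever copy of a class it finds first in classpath order, so both jars coexist but only one `Stopwatch` class ever wins, and the error only surfaces when the "wrong" one wins. A small diagnostic (a sketch, not from the original post) prints which code source a class was actually loaded from; in a real Storm worker you would swap in `com.google.common.base.Stopwatch`:

```java
// WhichJar.java - print the code source (jar) a class was loaded from.
public class WhichJar {
    public static void main(String[] args) throws Exception {
        // Hypothetical target: pass "com.google.common.base.Stopwatch" on a
        // classpath that has Guava. java.lang.String is used here so the
        // snippet runs standalone; bootstrap classes report a null source.
        String name = args.length > 0 ? args[0] : "java.lang.String";
        Class<?> c = Class.forName(name);
        Object src = c.getProtectionDomain().getCodeSource();
        System.out.println(c.getName() + " loaded from: " + src);
    }
}
```

Running it without arguments prints a null code source for `java.lang.String`; running it inside the worker with the Guava class name shows exactly which guava jar "won" the classpath race.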