I want to deploy a Storm topology (version 1.0.0) that has a Spark dependency (version 1.6.1). The topology works fine on a local cluster, but fails when I submit it to a remote cluster. I know that both Spark and Storm need log4j-related libraries, so if the POM file is modified like this:
<!-- Apache Spark -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>${spark.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
this error occurs:
java.lang.NoSuchMethodError: org.apache.log4j.Logger.setLevel(Lorg/apache/log4j/Level;)V
at org.apache.spark.util.AkkaUtils$$anonfun$org$apache$spark$util$AkkaUtils$$doCreateActorSystem$1.apply(AkkaUtils.scala:75) ~[stormjar.jar:?]
at org.apache.spark.util.AkkaUtils$$anonfun$org$apache$spark$util$AkkaUtils$$doCreateActorSystem$1.apply(AkkaUtils.scala:75) ~[stormjar.jar:?]
at scala.Option.map(Option.scala:145) ~[stormjar.jar:?]
at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:75) ~[stormjar.jar:?]
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53) ~[stormjar.jar:?]
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:52) ~[stormjar.jar:?]
at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1988) ~[stormjar.jar:?]
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141) ~[stormjar.jar:?]
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1979) ~[stormjar.jar:?]
at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:55) ~[stormjar.jar:?]
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:266) ~[stormjar.jar:?]
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193) ~[stormjar.jar:?]
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288) ~[stormjar.jar:?]
at org.apache.spark.SparkContext.<init>(SparkContext.scala:457) ~[stormjar.jar:?]
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59) ~[stormjar.jar:?]
at ufrn.imd.engsoft.storm.SentimentAnalyserBolt.prepare(SentimentAnalyserBolt.java:106) ~[stormjar.jar:?]
at org.apache.storm.daemon.executor$fn__8226$fn__8239.invoke(executor.clj:795) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:482) [storm-core-1.0.0.jar:1.0.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_65]
2016-04-22 00:29:06.285 o.a.s.util [ERROR] Halting process: ("Worker died")
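With log4j excluded from the topology jar, org.apache.log4j.Logger apparently resolves against the log4j-over-slf4j-1.6.6 bridge from the Storm folder, and that bridge evidently does not provide setLevel, hence the NoSuchMethodError. For completeness, a broader variant of the exclusion, assuming Maven 3.2.1 or newer (which supports wildcard exclusions), would strip all org.slf4j artifacts from spark-core as well, so that only the logging stack shipped with the Storm workers remains:

<!-- Hypothetical broader variant: also removes any slf4j modules that
     spark-core pulls in transitively; needs Maven 3.2.1+ for the wildcard. -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>${spark.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>*</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>

This alone would not restore the missing setLevel, but it would rule out a second slf4j-api version leaking into stormjar.jar.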
And without any exclusions in the Spark dependency, this error occurs instead:
java.lang.ExceptionInInitializerError
at org.apache.log4j.Logger.getLogger(Logger.java:39) ~[log4j-over-slf4j-1.6.6.jar:1.6.6]
at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:75) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:52) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1988) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1979) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:55) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:266) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:288) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.SparkContext.<init>(SparkContext.scala:457) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59) ~[spark-assembly-1.6.1-hadoop2.6.0.jar:1.6.1]
at ufrn.imd.engsoft.storm.SentimentAnalyserBolt.prepare(SentimentAnalyserBolt.java:106) ~[stormjar.jar:?]
at org.apache.storm.daemon.executor$fn__8226$fn__8239.invoke(executor.clj:795) ~[storm-core-1.0.0.jar:1.0.0]
at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:482) [storm-core-1.0.0.jar:1.0.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_65]
Caused by: java.lang.IllegalStateException: Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path, preempting StackOverflowError. See also http://www.slf4j.org/codes.html#log4jDelegationLoop for more details.
at org.apache.log4j.Log4jLoggerFactory.<clinit>(Log4jLoggerFactory.java:49) ~[log4j-over-slf4j-1.6.6.jar:1.6.6]
... 18 more
In both cases, the Storm dependency is as follows:
<!-- Apache Storm -->
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>${storm.version}</version>
    <scope>provided</scope>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
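Note that storm-core has scope provided, so neither it nor its exclusions affect what ends up in stormjar.jar; the exclusions only clean up the compile classpath. What may matter more is that the topology's own slf4j-api matches the 1.7.7 that the Storm workers ship (see the jar list below); a minimal sketch pinning it:

<dependencyManagement>
    <dependencies>
        <!-- Pin slf4j-api to the version found in the Storm folder so the
             shaded jar and the worker classpath agree. -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.7</version>
        </dependency>
    </dependencies>
</dependencyManagement>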
The Storm folder contains these jars:
- log4j-api-2.1
- log4j-core-2.1
- log4j-over-slf4j-1.6.6
- log4j-slf4j-impl-2.1
- slf4j-api-1.7.7
- slf4j-log4j12
Maybe getting Spark to use the jars above would solve the problem, but I haven't found any clue about how to do that. Could someone help me with any ideas on how to solve this problem?
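To make "getting Spark to use the jars above" concrete, one direction would be the reverse: keep the conflicting logging artifacts out of stormjar.jar entirely, so that org.apache.log4j.* can only resolve against the Storm folder. A minimal sketch, assuming the topology jar is built with the maven-shade-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <configuration>
        <artifactSet>
            <!-- Keep log4j 1.x and the slf4j-to-log4j binding out of the
                 shaded jar; the worker classpath already provides bridges. -->
            <excludes>
                <exclude>log4j:log4j</exclude>
                <exclude>org.slf4j:slf4j-log4j12</exclude>
            </excludes>
        </artifactSet>
    </configuration>
</plugin>

This should at least avoid the log4jDelegationLoop, although Spark's call to Logger.setLevel would still have to be satisfied by log4j-over-slf4j.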
Thanks! (:
I tried excluding only the implementation, but the same error continues: '2016-04-25 08:24:05.216 STDIO [ERROR] SLF4J: Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path, preempting StackOverflowError. 2016-04-25 08:24:05.218 STDIO [ERROR] SLF4J: See also http://www.slf4j.org/codes.html#log4jDelegationLoop for more details. 2016-04-25 08:24:05.221 o.a.s.util [ERROR] Async loop died! java.lang.ExceptionInInitializerError at org.apache.log4j.Logger.getLogger(Logger.java:39) ~[log4j-over-slf4j-1.6.6.jar:1.6.6]'. I believe the libraries in the Storm folder should be used. –
Maybe the POM is correct: since I am submitting the topology to a remote Storm cluster, the libraries from the Storm folder are being used. But for some reason Spark (in local mode) is not using those libraries. I also tried using a Spark cluster. –