
Apache Spark Streaming Kafka integration error (Java)

I am using Spark Streaming with Kafka and I am getting this error:

    Exception in thread "streaming-start" java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)Ljava/lang/Object;
     at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream$$anonfun$start$1.apply(DirectKafkaInputDStream.scala:246)
     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
     at scala.collection.Iterator$class.foreach(Iterator.scala:727)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
     at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
     at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
     at scala.collection.mutable.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:45)
     at scala.collection.SetLike$class.map(SetLike.scala:93)
     at scala.collection.mutable.AbstractSet.map(Set.scala:45)
     at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.start(DirectKafkaInputDStream.scala:245)
     at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:49)
     at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:49)
     at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach_quick(ParArray.scala:145)
     at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach(ParArray.scala:138)
     at scala.collection.parallel.ParIterableLike$Foreach.leaf(ParIterableLike.scala:975)
     at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:54)
     at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:53)
     at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:53)
     at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:56)
     at scala.collection.parallel.ParIterableLike$Foreach.tryLeaf(ParIterableLike.scala:972)
     at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:165)
     at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:514)
     at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    17/08/02 16:24:58 INFO StreamingContext: StreamingContext started

<dependency> 
      <groupId>org.apache.spark</groupId> 
      <artifactId>spark-sql_2.10</artifactId> 
      <version>2.1.0</version> 
     </dependency> 
     <dependency> 
      <groupId>com.google.code.gson</groupId> 
      <artifactId>gson</artifactId> 
      <version>2.8.0</version> 
     </dependency> 
     <dependency> 
      <groupId>com.googlecode.json-simple</groupId> 
      <artifactId>json-simple</artifactId> 
      <version>1.1.1</version> 
     </dependency> 
     <dependency> 
      <groupId>org.mongodb</groupId> 
      <artifactId>mongo-java-driver</artifactId> 
      <version>3.4.0</version> 
     </dependency> 
     <dependency> 
      <groupId>org.apache.spark</groupId> 
      <artifactId>spark-streaming-kafka-0-10_2.11</artifactId> 
      <version>2.2.0</version> 
     </dependency> 
     <dependency> 
      <groupId>org.apache.spark</groupId> 
      <artifactId>spark-core_2.10</artifactId> 
      <version>2.1.0</version> 
     </dependency> 
     <dependency> 
      <groupId>org.apache.spark</groupId> 
      <artifactId>spark-streaming_2.10</artifactId> 
      <version>2.1.1</version> 
     </dependency> 
    <dependency> 
     <groupId>org.apache.kafka</groupId> 
     <artifactId>kafka-clients</artifactId> 
     <version>0.10.0.0</version> 
    </dependency> 

The dependencies above are from my pom.xml, and I am using Kafka_2.11-0.11.0.0.

I have tried searching for this problem, but I could not find the related jar. Please help me solve it.

Answer

You are mixing Scala 2.10 and Scala 2.11 artifacts. Either use the Kafka integration built for Scala 2.10, or move all of the Spark dependencies to Scala 2.11.
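For instance, a minimal sketch of the fix, assuming you keep the existing Scala 2.10 Spark artifacts and swap only the Kafka integration (the 2.1.1 version here is an assumption chosen to line up with the spark-streaming_2.10 entry above), is to replace the spark-streaming-kafka-0-10_2.11 dependency with:

     <dependency> 
      <groupId>org.apache.spark</groupId> 
      <!-- Scala 2.10 build, matching the other spark-*_2.10 artifacts -->
      <artifactId>spark-streaming-kafka-0-10_2.10</artifactId> 
      <version>2.1.1</version> 
     </dependency> 

The alternative is the mirror image: keep this artifact at _2.11 and move every other spark-*_2.10 dependency to its _2.11 build, all pinned to one Spark version.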

I don't understand what I need to change. Is it in the code or in Maven? –

@TalhaK. This Maven dep: 'spark-streaming-kafka-0-10_2.11' must be 'spark-streaming-kafka-0-10_2.10', just like the other Spark dependencies – maasg

Yes! It works! Thanks a lot, sir @maasg. But I cannot print the values coming from Kafka. How can I print the values with stream.foreachRDD? –
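A minimal sketch of printing record values with foreachRDD, using the standard spark-streaming-kafka-0-10 Java API; the class name, broker address, group id, and topic name below are placeholders, not taken from the question:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka010.ConsumerStrategies;
    import org.apache.spark.streaming.kafka010.KafkaUtils;
    import org.apache.spark.streaming.kafka010.LocationStrategies;

    public class KafkaPrintExample {
     public static void main(String[] args) throws InterruptedException {
      SparkConf conf = new SparkConf().setAppName("kafka-print").setMaster("local[2]");
      JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

      Map<String, Object> kafkaParams = new HashMap<>();
      kafkaParams.put("bootstrap.servers", "localhost:9092"); // placeholder broker
      kafkaParams.put("key.deserializer", StringDeserializer.class);
      kafkaParams.put("value.deserializer", StringDeserializer.class);
      kafkaParams.put("group.id", "example-group");           // placeholder group id

      JavaInputDStream<ConsumerRecord<String, String>> stream =
       KafkaUtils.createDirectStream(
        jssc,
        LocationStrategies.PreferConsistent(),
        ConsumerStrategies.<String, String>Subscribe(
         Arrays.asList("my-topic"), kafkaParams)); // placeholder topic

      // foreachRDD hands you one RDD per batch; foreach on that RDD runs on
      // the executors, so the println appears on the console in local mode
      // but in the executor logs on a cluster.
      stream.foreachRDD(rdd ->
       rdd.foreach(record ->
        System.out.println(record.key() + " -> " + record.value())));

      jssc.start();
      jssc.awaitTermination();
     }
    }

If you only want a quick look at a few values per batch on the driver, collecting a sample with rdd.take(10) inside foreachRDD and printing that is the usual alternative to rdd.foreach.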
