
I cannot compile my Scala code; a dependency error comes up and the dependency is not resolved.

I have another sample application that works fine with the existing setup, but this code does not work in the same environment.

Error log:

sbt compile 
[info] Set current project to firstScalaScript (in build file:/home/abu/Current%20Workspace/) 
[info] Updating {file:/home/abu/Current%20Workspace/}current-workspace... 
[info] Resolving org.apache.spark#spark-core;2.0.1 ... 
[warn] module not found: org.apache.spark#spark-core;2.0.1 
[warn] ==== local: tried 
[warn] /home/abu/.ivy2/local/org.apache.spark/spark-core/2.0.1/ivys/ivy.xml 
[warn] ==== public: tried 
[warn] https://repo1.maven.org/maven2/org/apache/spark/spark-core/2.0.1/spark-core-2.0.1.pom 
[warn] ==== Akka Repository: tried 
[warn] http://repo.akka.io/releases/org/apache/spark/spark-core/2.0.1/spark-core-2.0.1.pom 
[info] Resolving org.fusesource.jansi#jansi;1.4 ... 
[warn] :::::::::::::::::::::::::::::::::::::::::::::: 
[warn] ::   UNRESOLVED DEPENDENCIES   :: 
[warn] :::::::::::::::::::::::::::::::::::::::::::::: 
[warn] :: org.apache.spark#spark-core;2.0.1: not found 
[warn] :::::::::::::::::::::::::::::::::::::::::::::: 
[warn] 
[warn] Note: Unresolved dependencies path: 

My code:

import scala.io.Source._ 
import org.apache.spark.SparkContext 
import org.apache.spark.SparkContext._ 
import org.apache.spark.SparkConf 
import org.apache.log4j.Logger 
import org.apache.log4j.Level 
import org.apache.spark.rdd.RDD 
import org.apache.hadoop.io.compress.GzipCodec 

object firstScalaScript {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf())
    val rdd = sc.textFile("e.txt,r.txt").collect()
    //rdd.saveAsTextFile("confirmshot.txt"); sc.stop()
  }
}
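
My build.sbt is not pasted above; reconstructed from the error log, it looks roughly like this (a sketch, the exact file may differ):

// build.sbt (sketch reconstructed from the error log above; exact contents may differ)
name := "firstScalaScript"

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"

// no Scala suffix and no %%, so sbt looks for a plain "spark-core" artifact
libraryDependencies += "org.apache.spark" % "spark-core" % "2.0.1"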

My code: import scala.io.Source._ import org.apache.spark.SparkContext ... (the same code as shown in the question above, pasted unformatted into this comment) –


Please edit your question to include your code block (with proper formatting) instead of using comments for that purpose. –

Answers


Spark artifacts (and the artifacts of many other libraries) are packaged and distributed for different Scala versions. To tell them apart, the Scala version is appended to the end of the artifact name, e.g. spark-core_2.10 or spark-core_2.11.

Your spark-core dependency is incomplete because it is missing the Scala version.

sbt can help you here by appending the Scala version you are building with to the artifact name. You can add the dependency as

"org.apache.spark" %% "spark-core" % "2.0.1" 

and this will be translated to

"org.apache.spark" % "spark-core_YOUR_SCALA_VERSION" % "2.0.1" 

All the details of this jar can be found on Maven Central. Note that on that page you can also find suggestions on how to import the library with other tools such as Maven or Gradle.
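
Putting it together, a minimal build.sbt could look like the following (a sketch; the scalaVersion shown is an assumption and should match whatever Scala you are actually building with):

// build.sbt (minimal sketch; scalaVersion 2.11.8 is an assumption)
name := "firstScalaScript"
scalaVersion := "2.11.8"

// %% appends the Scala binary version, so this resolves to spark-core_2.11
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.1"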


Spark dependencies have an extra number in the artifactId: the Scala version.

For Scala 2.11, for example, it should be spark-core_2.11.

In sbt it should be:

// Scala version will be added by SBT while building 
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.2" 

or:

libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.0.2" 

Note: the second form should not be used with libraries that themselves have a Scala dependency, because the first form automatically picks the right artifact. Use it only for non-Scala dependencies.
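
For contrast, a plain Java artifact has no Scala suffix at all, so the single % form is the right one there (the library and version below are just an illustrative example):

// Java-only artifact: no Scala suffix exists, so a single % is correct
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.7.3"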


I wouldn't even mention the second one, it's a bad habit. – Reactormonk


@Reactormonk Yes, it's a bad habit, but it shows how it works. I'll edit the answer later to emphasize that :) –


@Reactormonk Changed :) Thanks for the suggestion :) –