
Why does running an MLlib project in IntelliJ IDEA fail with "AssertionError: assertion failed: unsafe symbol CompatContext"? I am trying to run the logistic regression example from https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/ml/JavaLogisticRegressionWithElasticNetExample.java.

This is the code:

import org.apache.spark.ml.classification.LogisticRegression;
import org.apache.spark.ml.classification.LogisticRegressionModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public final class GettingStarted {

public static void main(final String[] args) throws InterruptedException { 
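    // Point Spark at the directory containing winutils.exe (needed for Hadoop on Windows)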
    System.setProperty("hadoop.home.dir", "C:\\winutils"); 

    SparkSession spark = SparkSession 
      .builder() 
      .appName("JavaLogisticRegressionWithElasticNetExample") 
      .config("spark.master", "local") 
      .getOrCreate(); 

    // $example on$ 
    // Load training data 
    Dataset<Row> training = spark.read().format("libsvm").load("data/mllib/sample_libsvm_data.txt"); 

    LogisticRegression lr = new LogisticRegression() 
      .setMaxIter(10) 
      .setRegParam(0.3) 
      .setElasticNetParam(0.8); 

    // Fit the model 
    LogisticRegressionModel lrModel = lr.fit(training); 

    // Print the coefficients and intercept for logistic regression 
    System.out.println("Coefficients: " 
      + lrModel.coefficients() + " Intercept: " + lrModel.intercept()); 

    // We can also use the multinomial family for binary classification 
    LogisticRegression mlr = new LogisticRegression() 
      .setMaxIter(10) 
      .setRegParam(0.3) 
      .setElasticNetParam(0.8) 
      .setFamily("multinomial"); 

    // Fit the model 
    LogisticRegressionModel mlrModel = mlr.fit(training); 

    // Print the coefficients and intercepts for logistic regression with multinomial family 
    System.out.println("Multinomial coefficients: " + lrModel.coefficientMatrix() 
      + "\nMultinomial intercepts: " + mlrModel.interceptVector()); 
    // $example off$ 

    spark.stop();
  }
}

I am also using the same data file as the example (https://github.com/apache/spark/blob/master/data/mllib/sample_libsvm_data.txt), but I get these errors:

Exception in thread "main" java.lang.AssertionError: assertion failed: unsafe symbol CompatContext (child of package macrocompat) in runtime reflection universe 
at scala.reflect.internal.Symbols$Symbol.<init>(Symbols.scala:184) 
at scala.reflect.internal.Symbols$TypeSymbol.<init>(Symbols.scala:2984) 
at scala.reflect.internal.Symbols$ClassSymbol.<init>(Symbols.scala:3176) 
at scala.reflect.internal.Symbols$StubClassSymbol.<init>(Symbols.scala:3471) 
at scala.reflect.internal.Symbols$Symbol.newStubSymbol(Symbols.scala:498) 
at scala.reflect.internal.pickling.UnPickler$Scan.readExtSymbol$1(UnPickler.scala:258) 
at scala.reflect.internal.pickling.UnPickler$Scan.readSymbol(UnPickler.scala:284) 
at scala.reflect.internal.pickling.UnPickler$Scan.readSymbolRef(UnPickler.scala:649) 
at scala.reflect.internal.pickling.UnPickler$Scan.readType(UnPickler.scala:417) 
at scala.reflect.internal.pickling.UnPickler$Scan$LazyTypeRef$$anonfun$6.apply(UnPickler.scala:725) 
at scala.reflect.internal.pickling.UnPickler$Scan$LazyTypeRef$$anonfun$6.apply(UnPickler.scala:725) 
at scala.reflect.internal.pickling.UnPickler$Scan.at(UnPickler.scala:179) 
at scala.reflect.internal.pickling.UnPickler$Scan$LazyTypeRef.completeInternal(UnPickler.scala:725) 
at scala.reflect.internal.pickling.UnPickler$Scan$LazyTypeRef.complete(UnPickler.scala:749) 
at scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1489) 
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$12.scala$reflect$runtime$SynchronizedSymbols$SynchronizedSymbol$$super$info(SynchronizedSymbols.scala:162) 
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anonfun$info$1.apply(SynchronizedSymbols.scala:127) 
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anonfun$info$1.apply(SynchronizedSymbols.scala:127) 
at scala.reflect.runtime.Gil$class.gilSynchronized(Gil.scala:19) 
at scala.reflect.runtime.JavaUniverse.gilSynchronized(JavaUniverse.scala:16) 
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$class.gilSynchronizedIfNotThreadsafe(SynchronizedSymbols.scala:123) 
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$12.gilSynchronizedIfNotThreadsafe(SynchronizedSymbols.scala:162) 
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$class.info(SynchronizedSymbols.scala:127) 
at scala.reflect.runtime.SynchronizedSymbols$SynchronizedSymbol$$anon$12.info(SynchronizedSymbols.scala:162) 
at scala.reflect.internal.Mirrors$RootsBase.ensureClassSymbol(Mirrors.scala:94) 
at scala.reflect.internal.Mirrors$RootsBase.getClassByName(Mirrors.scala:102) 
at scala.reflect.internal.Mirrors$RootsBase.getClassIfDefined(Mirrors.scala:114) 
at scala.reflect.internal.Mirrors$RootsBase.getClassIfDefined(Mirrors.scala:111) 
at scala.reflect.internal.Definitions$DefinitionsClass.BlackboxContextClass$lzycompute(Definitions.scala:496) 
at scala.reflect.internal.Definitions$DefinitionsClass.BlackboxContextClass(Definitions.scala:496) 
at scala.reflect.runtime.JavaUniverseForce$class.force(JavaUniverseForce.scala:305) 
at scala.reflect.runtime.JavaUniverse.force(JavaUniverse.scala:16) 
at scala.reflect.runtime.JavaUniverse.init(JavaUniverse.scala:147) 
at scala.reflect.runtime.JavaUniverse.<init>(JavaUniverse.scala:78) 
at scala.reflect.runtime.package$.universe$lzycompute(package.scala:17) 
at scala.reflect.runtime.package$.universe(package.scala:17) 
at org.apache.spark.sql.catalyst.ScalaReflection$.<init>(ScalaReflection.scala:40) 
at org.apache.spark.sql.catalyst.ScalaReflection$.<clinit>(ScalaReflection.scala) 
at org.apache.spark.sql.catalyst.encoders.RowEncoder$.org$apache$spark$sql$catalyst$encoders$RowEncoder$$serializerFor(RowEncoder.scala:74) 
at org.apache.spark.sql.catalyst.encoders.RowEncoder$.apply(RowEncoder.scala:61) 
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67) 
at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:415) 
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:172) 
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:156) 
at GettingStarted.main(GettingStarted.java:95) 

Do you know what I am doing wrong?

EDIT: I am running it from IntelliJ; it is a Maven project, and I added these dependencies:

<dependency> 
     <groupId>org.apache.spark</groupId> 
     <artifactId>spark-core_2.11</artifactId> 
     <version>2.2.0</version> 
    </dependency> 
    <dependency> 
     <groupId>org.mongodb.spark</groupId> 
     <artifactId>mongo-spark-connector_2.11</artifactId> 
     <version>2.2.0</version> 
    </dependency> 
    <dependency> 
     <groupId>org.apache.spark</groupId> 
     <artifactId>spark-sql_2.11</artifactId> 
     <version>2.2.0</version> 
    </dependency> 
    <dependency> 
     <groupId>org.apache.spark</groupId> 
     <artifactId>spark-mllib_2.10</artifactId> 
     <version>2.2.0</version> 
    </dependency> 

Answer


TL;DR: as soon as you start seeing internal Scala errors mentioning the reflection universe, suspect incompatible Scala versions.

The Scala versions of your libraries do not match: you are mixing 2.10 and 2.11.

You should align all of them with your actual Scala version. These are the two conflicting artifacts in your POM:

<dependency> 
    <groupId>org.apache.spark</groupId> 
    <artifactId>spark-sql_2.11</artifactId> <!-- This is scala v2.11 --> 
    <version>2.2.0</version> 
</dependency> 
<dependency> 
    <groupId>org.apache.spark</groupId> 
    <artifactId>spark-mllib_2.10</artifactId> <!-- This is scala v2.10 --> 
    <version>2.2.0</version> 
</dependency> 