
I installed Spark on Windows, but it fails to run and shows the following error:

<console>:16: error: not found: value sqlContext 
     import sqlContext.implicits._ 
       ^
<console>:16: error: not found: value sqlContext 
     import sqlContext.sql 
       ^

I tried the following links, but none of them solved the problem: How to start Spark applications on Windows (aka Why Spark fails with NullPointerException)?

Apache Spark error while start

error when starting the spark shell

error: not found: value sqlContext

The full log of the Spark run is below:

D:\Spark\spark-1.6.1-bin-hadoop2.6\bin>spark-shell 
    log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory). 
    log4j:WARN Please initialize the log4j system properly. 
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. 
    Using Spark's repl log4j profile: org/apache/spark/log4j-defaults-repl.properties 
    To adjust logging level use sc.setLogLevel("INFO") 
    Welcome to 
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/ '_/
       /___/ .__/\_,_/_/ /_/\_\   version 1.6.1
          /_/

    Using Scala version 2.10.5 (Java HotSpot(TM) Client VM, Java 1.8.0_77) 
    Type in expressions to have them evaluated. 
    Type :help for more information. 
    Spark context available as sc. 
    16/04/19 16:28:10 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/D:/Spark/spark-1.6.1-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/D:/Spark/spark-1.6.1-bin-hadoop2.6/bin/../lib/datanucleus-api-jdo-3.2.6.jar."
    16/04/19 16:28:10 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/D:/Spark/spark-1.6.1-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/D:/Spark/spark-1.6.1-bin-hadoop2.6/bin/../lib/datanucleus-core-3.2.10.jar."
    16/04/19 16:28:10 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/D:/Spark/spark-1.6.1-bin-hadoop2.6/bin/../lib/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/D:/Spark/spark-1.6.1-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar."
    16/04/19 16:28:11 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies) 
    16/04/19 16:28:11 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies) 
    16/04/19 16:28:24 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
    16/04/19 16:28:24 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException 
    java.lang.RuntimeException: java.lang.NullPointerException 
      at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522) 
      at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:204) 
      at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:238) 
      at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:218) 
      at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:208) 
      at org.apache.spark.sql.hive.HiveContext.functionRegistry$lzycompute(HiveContext.scala:462) 
      at org.apache.spark.sql.hive.HiveContext.functionRegistry(HiveContext.scala:461) 
      at org.apache.spark.sql.UDFRegistration.<init>(UDFRegistration.scala:40) 
      at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:330) 
      at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90) 
      at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101) 
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source) 
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source) 
      at java.lang.reflect.Constructor.newInstance(Unknown Source) 
      at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028) 
      at $iwC$$iwC.<init>(<console>:15) 
      at $iwC.<init>(<console>:24) 
      at <init>(<console>:26) 
      at .<init>(<console>:30) 
      at .<clinit>(<console>) 
      at .<init>(<console>:7) 
      at .<clinit>(<console>) 
      at $print(<console>) 
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
      at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) 
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) 
      at java.lang.reflect.Method.invoke(Unknown Source) 
      at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065) 
      at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346) 
      at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840) 
      at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871) 
      at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819) 
      at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857) 
      at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902) 
      at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814) 
      at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132) 
      at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124) 
      at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324) 
      at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124) 
      at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64) 
      at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
      at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159) 
      at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64) 
      at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108) 
      at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64) 
      at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
      at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
      at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
      at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135) 
      at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945) 
      at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059) 
      at org.apache.spark.repl.Main$.main(Main.scala:31) 
      at org.apache.spark.repl.Main.main(Main.scala) 
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
      at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) 
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) 
      at java.lang.reflect.Method.invoke(Unknown Source) 
      at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) 
      at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) 
      at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) 
      at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) 
      at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
    Caused by: java.lang.NullPointerException 
      at java.lang.ProcessBuilder.start(Unknown Source) 
      at org.apache.hadoop.util.Shell.runCommand(Shell.java:482) 
      at org.apache.hadoop.util.Shell.run(Shell.java:455) 
      at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715) 
      at org.apache.hadoop.util.Shell.execCommand(Shell.java:808) 
      at org.apache.hadoop.util.Shell.execCommand(Shell.java:791) 
      at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097) 
      at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:582)
      at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:557)
      at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:599) 
      at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554) 
      at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508) 
      ... 62 more 

    <console>:16: error: not found: value sqlContext 
      import sqlContext.implicits._ 
        ^
    <console>:16: error: not found: value sqlContext 
      import sqlContext.sql 
        ^

    scala> 

Answers


I had exactly the same problem and went through the possible solutions explained in the links you posted, but none of them worked at the time. Running the spark-shell command creates a tmp\hive directory on C:, and I eventually found that it had a permission problem. I made sure my HADOOP_HOME was set correctly and contained \bin\winutils.exe, then simply moved tmp\hive under %HADOOP_HOME%\bin and restarted the command prompt. That finally fixed the problem, but remember to run cmd as administrator. Hope this helps.
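A minimal sketch of this kind of fix from an elevated command prompt. The HADOOP_HOME path below is only an example (use wherever your winutils.exe actually lives), and the chmod step is the usual winutils permission fix also described in the next answer:

    REM Start cmd with "Run as administrator" first
    set HADOOP_HOME=D:\Hadoop\hadoop-2.6.0
    REM winutils.exe must exist at %HADOOP_HOME%\bin\winutils.exe
    dir "%HADOOP_HOME%\bin\winutils.exe"
    REM Grant full permissions on the \tmp\hive directory that spark-shell creates
    "%HADOOP_HOME%\bin\winutils.exe" chmod -R 777 \tmp\hive
    REM Reopen the prompt (still as administrator) and launch the shell again
    spark-shell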


I faced the same issue, and after some investigation I found there is a compatibility issue between the Spark version, hadoop-2.x.x, and winutils.exe.

After experimenting, I suggest you use the hadoop-2.7.1 winutils.exe with spark-2.2.0-bin-hadoop2.7 and the hadoop-2.6.0 winutils.exe with spark-1.6.0-bin-hadoop2.6, and set the following environment variables:

SCALA_HOME : C:\Program Files (x86)\scala2.11.7; 
JAVA_HOME : C:\Program Files\Java\jdk1.8.0_51 
HADOOP_HOME : C:\Hadoop\winutils-master\hadoop-2.7.1 
SPARK_HOME : C:\Hadoop\spark-2.2.0-bin-hadoop2.7 
PATH : %JAVA_HOME%\bin;%SCALA_HOME%\bin;%HADOOP_HOME%\bin;%SPARK_HOME%\bin; 
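If you prefer setting these from a command prompt rather than through the System Properties dialog, a sketch using setx might look like this (the paths are the ones listed above; substitute your own install locations):

    setx SCALA_HOME "C:\Program Files (x86)\scala2.11.7"
    setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_51"
    setx HADOOP_HOME "C:\Hadoop\winutils-master\hadoop-2.7.1"
    setx SPARK_HOME "C:\Hadoop\spark-2.2.0-bin-hadoop2.7"
    REM Add %JAVA_HOME%\bin;%SCALA_HOME%\bin;%HADOOP_HOME%\bin;%SPARK_HOME%\bin to PATH
    REM (simplest through Control Panel > System > Environment Variables),
    REM then open a NEW command prompt so the changes take effect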

Create the C:\tmp\hive directory and grant access permissions on it with the following command:

C:\Hadoop\winutils-master\hadoop-2.7.1\bin>winutils.exe chmod -R 777 C:\tmp\hive 

Delete the metastore_db directory from the following path, if it exists:

C:\Users\<User_Name>\metastore_db 
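For example, from a command prompt (with %USERNAME% standing in for your actual Windows user name):

    REM Remove the local Hive metastore left over from earlier failed runs, if present
    if exist "C:\Users\%USERNAME%\metastore_db" rmdir /s /q "C:\Users\%USERNAME%\metastore_db"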

Use the following command to start the spark shell:

C:>spark-shell 



Thank you very much, it worked perfectly... – Naveen


I also had a similar problem (import spark.implicits._ not found) on Windows 10.
As suggested, I set:
1. HADOOP_HOME (%HADOOP_HOME%/bin/winutils.exe should exist)
2. %HADOOP_HOME%/bin/winutils.exe chmod -R 777 F:\tmp\hive (spark-2.1.1 and winutils (hadoop 2.7.1) are on the same F: drive in my case)