2017-06-22

Installed Apache Spark 2.1.1 not running on Windows 10

I have installed Apache Spark 2.1.1 on Windows 10, with Java 1.8 and Python 3.6 (Anaconda 4.3.1). I have also downloaded winutils.exe and set the environment variables JAVA_HOME, HADOOP_HOME, and SPARK_HOME, and updated the Path variable. I have also run winutils.exe chmod -R 777 \tmp\hive. However, when running pyspark from the cmd prompt I get the error below.

Please can someone help and let me know if I have missed any important details.

Thanks in advance!
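
For reference, a quick check along these lines (just a sketch, nothing Spark-specific) prints the variables as Python sees them before launching pyspark:

import os

# Print the variables described above; values are whatever is set on this machine.
for var in ("JAVA_HOME", "HADOOP_HOME", "SPARK_HOME"):
    print(var, "=", os.environ.get(var))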

c:\Spark>bin\pyspark 
Python 3.6.0 |Anaconda 4.3.1 (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32 
Type "help", "copyright", "credits" or "license" for more information. 
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
Setting default log level to "WARN". 
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 
Traceback (most recent call last): 
    File "c:\Spark\python\pyspark\sql\utils.py", line 63, in deco 
    return f(*a, **kw) 
    File "c:\Spark\python\lib\py4j-0.10.4-src.zip\py4j\protocol.py", line 319, in get_return_value 
py4j.protocol.Py4JJavaError: An error occurred while calling o22.sessionState. 
: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState': 
     at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:981) 
     at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:110) 
     at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:109) 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
     at java.lang.reflect.Method.invoke(Method.java:498) 
     at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) 
     at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) 

I still get an error when launching spark-shell, but it looks like Spark does start after that, since I get the 'Welcome to Spark' part. The error I get is:

C:\Spark>bin\spark-shell 
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
Setting default log level to "WARN". 
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 
17/06/23 12:20:15 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/Spark/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/Spark/bin/../jars/datanucleus-api-jdo-3.2.6.jar." 
17/06/23 12:20:15 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/Spark/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/Spark/bin/../jars/datanucleus-rdbms-3.2.9.jar." 
17/06/23 12:20:15 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/Spark/bin/../jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/Spark/jars/datanucleus-core-3.2.10.jar." 
java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState': 
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:981) 
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:110) 
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:109) 
at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:878) 
at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:878) 
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) 
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) 
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) 
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) 
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) 
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:878) 
at org.apache.spark.repl.Main$.createSparkSession(Main.scala:96) 
... 47 elided 
Caused by: java.lang.reflect.InvocationTargetException: 
java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog': 
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:978) 
... 58 more 
Caused by: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog': 
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:169) 
at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:86) 
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101) 
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101) 
at scala.Option.getOrElse(Option.scala:121) 
at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:101) 
at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:100) 
at org.apache.spark.sql.internal.SessionState.<init>(SessionState.scala:157) 
at org.apache.spark.sql.hive.HiveSessionState.<init>(HiveSessionState.scala:32) 
... 63 more 
Caused by: java.lang.reflect.InvocationTargetException: java.lang.reflect.InvocationTargetException: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V 
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:166) 
... 71 more 
Caused by: java.lang.reflect.InvocationTargetException: 
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V 
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264) 
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:358) 
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:262) 
at org.apache.spark.sql.hive.HiveExternalCatalog.<init>(HiveExternalCatalog.scala:66) 
... 76 more 
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V 
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Native Method) 
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode(NativeIO.java:524) 
at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:478) 
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:532) 
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:509) 
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:305) 
at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:639) 
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:561) 
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508) 
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:188) 
... 84 more 
14: error: not found: value spark 
    import spark.implicits._ 
     ^
14: error: not found: value spark 
    import spark.sql 
     ^
Welcome to 

Thanks @Alfrabravo! – AmyJ


Do you also get an error when launching 'spark-shell', or is it specific to 'pyspark'? –


@SamsonScharfrichter I have updated my question; spark-shell certainly does seem to launch, but pyspark does not – AmyJ

Answer


The following setup worked for me (I am not using winutils.exe): install pyspark and findspark from the Anaconda command prompt with

pip3 install pyspark 

pip3 install findspark 

Since you have already downloaded the Spark distribution, unzip it and keep it on the C: drive, i.e. "C:\spark-2.2.0-bin-hadoop2.7". Create a new environment variable SPARK_HOME and set it to "C:\spark-2.2.0-bin-hadoop2.7\bin", then open the "Path" variable under system variables and add the same there. Now open a command prompt; from "C:\Users\*", do cd .. twice to get to "C:\" and run the following command:

set SPARK_HOME=spark-2.2.0-bin-hadoop2.7 
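
As an aside, roughly the same effect can be had for a single Python session with os.environ (a sketch assuming the extract location above; this is not part of the original recipe and only affects the current process):

import os

# Point SPARK_HOME at the extracted folder and put its bin\ on PATH,
# for this process only (assumes the extract location described above).
os.environ["SPARK_HOME"] = r"C:\spark-2.2.0-bin-hadoop2.7"
os.environ["PATH"] = os.environ["SPARK_HOME"] + r"\bin;" + os.environ["PATH"]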

With SPARK_HOME set, you are good to go. Now, before importing pyspark in your Jupyter notebook, you just need to point findspark at the Spark location. Use the code below:

import findspark 
findspark.init(r'C:\spark-2.2.0-bin-hadoop2.7')  # raw string so the backslash is not treated as an escape
import pyspark 
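
To confirm the session actually comes up, a minimal smoke test along these lines should print 5 (the master setting and app name here are arbitrary choices, not part of the original answer):

from pyspark.sql import SparkSession

# Build a local session and run a trivial query to confirm the install works.
spark = (SparkSession.builder
         .master("local[*]")
         .appName("smoke-test")  # arbitrary name, just for this check
         .getOrCreate())
print(spark.range(5).count())  # expect: 5
spark.stop()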