pyspark error: AttributeError: 'SparkSession' object has no attribute 'parallelize'

I am using pyspark on a Jupyter notebook. Here is how Spark is set up:

import findspark
findspark.init(spark_home='/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive', python_path='python2.7')

import pyspark
from pyspark.sql import *

sc = pyspark.sql.SparkSession.builder.master("yarn-client").config("spark.executor.memory", "2g").config('spark.driver.memory', '1g').config('spark.driver.cores', '4').enableHiveSupport().getOrCreate()

sqlContext = SQLContext(sc)

Then when I do:

spark_df = sqlContext.createDataFrame(df_in) 

where df_in is a pandas DataFrame. I then get the following error:

--------------------------------------------------------------------------- 
AttributeError       Traceback (most recent call last) 
<ipython-input-9-1db231ce21c9> in <module>() 
----> 1 spark_df = sqlContext.createDataFrame(df_in) 


/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive/python/pyspark/sql/context.pyc in createDataFrame(self, data, schema, samplingRatio) 
    297   Py4JJavaError: ... 
    298   """ 
--> 299   return self.sparkSession.createDataFrame(data, schema, samplingRatio) 
    300 
    301  @since(1.3) 

/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive/python/pyspark/sql/session.pyc in createDataFrame(self, data, schema, samplingRatio) 
    520    rdd, schema = self._createFromRDD(data.map(prepare), schema, samplingRatio) 
    521   else: 
--> 522    rdd, schema = self._createFromLocal(map(prepare, data), schema) 
    523   jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd()) 
    524   jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json()) 

/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive/python/pyspark/sql/session.pyc in _createFromLocal(self, data, schema) 
    400   # convert python objects to sql data 
    401   data = [schema.toInternal(row) for row in data] 
--> 402   return self._sc.parallelize(data), schema 
    403 
    404  @since(2.0) 

AttributeError: 'SparkSession' object has no attribute 'parallelize' 

Does anyone know what I did wrong? Thanks!

Answer


SparkSession is not a replacement for a SparkContext but an equivalent of the SQLContext. Just use it the same way you used to use SQLContext:

spark.createDataFrame(...) 
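For example, a minimal sketch of the corrected setup, reusing the builder options from the question; the pandas frame here is a hypothetical stand-in for the asker's df_in:

import findspark
findspark.init(spark_home='/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive', python_path='python2.7')

import pandas as pd
from pyspark.sql import SparkSession

# Name the session "spark" rather than "sc", so it is not mistaken for a SparkContext.
spark = (SparkSession.builder
         .master("yarn-client")
         .config("spark.executor.memory", "2g")
         .config("spark.driver.memory", "1g")
         .config("spark.driver.cores", "4")
         .enableHiveSupport()
         .getOrCreate())

df_in = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']})  # hypothetical sample data
spark_df = spark.createDataFrame(df_in)  # a SparkSession accepts a pandas DataFrame directly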

If you ever have to access the SparkContext, use the sparkContext attribute:

spark.sparkContext 
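That attribute also holds the parallelize the traceback above was looking for. A quick sketch, assuming the spark session built above:

rdd = spark.sparkContext.parallelize([1, 2, 3, 4])  # parallelize lives on the SparkContext
print(rdd.sum())  # 10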

So if you need a SQLContext for backwards compatibility, you can:

SQLContext(sparkContext=spark.sparkContext, sparkSession=spark) 
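The resulting object then accepts the old SQLContext calls unchanged. A short usage sketch, again assuming the spark session and df_in from above:

from pyspark.sql import SQLContext

sqlContext = SQLContext(sparkContext=spark.sparkContext, sparkSession=spark)
spark_df = sqlContext.createDataFrame(df_in)  # delegates to spark.createDataFrame internally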

Then I tried: spark_df = sc.createDataFrame(df_in), but spark_df seems to be corrupted. Is spark_df = sc.createDataFrame(df_in) the right way to do the conversion here? – Edamame


'df_in' alone is a valid argument for 'createDataFrame'. – zero323


df_in is a pandas data frame. I thought it should be valid? – Edamame
