
I'm completely lost in a weird situation. I have a list li that I want to turn into a DataFrame with pyspark.sql. The list is built like this:

li = example_data.map(lambda x: get_labeled_prediction(w,x)).collect() 
print li, type(li) 

and the output looks like:

[(0.0, 59.0), (0.0, 51.0), (0.0, 81.0), (0.0, 8.0), (0.0, 86.0), (0.0, 86.0), (0.0, 60.0), (0.0, 54.0), (0.0, 54.0), (0.0, 84.0)] <type 'list'> 

But when I try to create a DataFrame from this list with

m = sqlContext.createDataFrame(l, ["prediction", "label"]) 

it throws this error:

TypeError         Traceback (most recent call last) 
<ipython-input-90-4a49f7f67700> in <module>() 
56 l = example_data.map(lambda x: get_labeled_prediction(w,x)).collect() 
57 print l, type(l) 
---> 58 m = sqlContext.createDataFrame(l, ["prediction", "label"]) 
59 ''' 
60 g = example_data.map(lambda x:gradient_summand(w, x)).sum() 

/databricks/spark/python/pyspark/sql/context.py in createDataFrame(self, data, schema, samplingRatio) 
423    rdd, schema = self._createFromRDD(data, schema, samplingRatio) 
424   else: 
--> 425    rdd, schema = self._createFromLocal(data, schema) 
426   jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd()) 
427   jdf = self._ssql_ctx.applySchemaToPythonRDD(jrdd.rdd(), schema.json()) 

/databricks/spark/python/pyspark/sql/context.py in _createFromLocal(self, data, schema) 
339 
340   if schema is None or isinstance(schema, (list, tuple)): 
--> 341    struct = self._inferSchemaFromList(data) 
342    if isinstance(schema, (list, tuple)): 
343     for i, name in enumerate(schema): 

/databricks/spark/python/pyspark/sql/context.py in _inferSchemaFromList(self, data) 
239    warnings.warn("inferring schema from dict is deprecated," 
240       "please use pyspark.sql.Row instead") 
--> 241   schema = reduce(_merge_type, map(_infer_schema, data)) 
242   if _has_nulltype(schema): 
243    raise ValueError("Some of types cannot be determined after inferring") 

/databricks/spark/python/pyspark/sql/types.py in _infer_schema(row) 
831   raise TypeError("Can not infer schema for type: %s" % type(row)) 
832 
--> 833  fields = [StructField(k, _infer_type(v), True) for k, v in items] 
834  return StructType(fields) 
835 

/databricks/spark/python/pyspark/sql/types.py in _infer_type(obj) 
808    return _infer_schema(obj) 
809   except TypeError: 
--> 810    raise TypeError("not supported type: %s" % type(obj)) 
811 
812 

TypeError: not supported type: <type 'numpy.float64'> 

However, when I hard-code the same list:

tt = sqlContext.createDataFrame([(0.0, 59.0), (0.0, 51.0), (0.0, 81.0), (0.0, 8.0), (0.0, 86.0), (0.0, 86.0), (0.0, 60.0), (0.0, 54.0), (0.0, 54.0), (0.0, 84.0)], ["prediction", "label"]) 
tt.collect() 

it works fine:

[Row(prediction=0.0, label=59.0), 
Row(prediction=0.0, label=51.0), 
Row(prediction=0.0, label=81.0), 
Row(prediction=0.0, label=8.0), 
Row(prediction=0.0, label=86.0), 
Row(prediction=0.0, label=86.0), 
Row(prediction=0.0, label=60.0), 
Row(prediction=0.0, label=54.0), 
Row(prediction=0.0, label=54.0), 
Row(prediction=0.0, label=84.0)] 

What is causing this problem, and how do I fix it? Any hint would be appreciated.

Answer


You have a list of float64, and I don't think Spark likes that type. When you hard-code the list, on the other hand, it is just a list of native float.
Here is a question with answers on how to convert from numpy types to native Python types.
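For example, a minimal sketch of the conversion, assuming li is the collected list of (numpy.float64, numpy.float64) tuples from the question; the built-in float() turns each value into the native Python type that Spark's schema inference supports:

li = example_data.map(lambda x: get_labeled_prediction(w, x)).collect() 

# Cast each numpy.float64 to a native Python float so that Spark's 
# schema inference (_infer_type) sees a supported type. 
clean = [(float(p), float(l)) for (p, l) in li] 

m = sqlContext.createDataFrame(clean, ["prediction", "label"]) 

Alternatively, numpy scalars expose an item() method that returns the equivalent native Python object, so (p.item(), l.item()) works as well; doing the cast inside the map before collect() would save a second pass over the list.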


Thanks, limbo. That's exactly what I was looking for.


I followed the answer you suggested, but it doesn't work for me. I got TypeError: not supported type: '