
PySpark loading CSV: AttributeError: 'RDD' object has no attribute '_get_object_id'

I am trying to load a CSV file into a Spark DataFrame. This is what I have done so far:

from pyspark import SparkConf, SparkContext
from pyspark import sql

# sc is a SparkContext.
appName = "testSpark"
master = "local"
conf = SparkConf().setAppName(appName).setMaster(master)
sc = SparkContext(conf=conf)
sqlContext = sql.SQLContext(sc)

# csv path
text_file = sc.textFile("hdfs:///path/to/sensordata20171008223515.csv")
# text_file is an RDD, but load() expects a path string -- this raises the error below
df = sqlContext.load(source="com.databricks.spark.csv", header='true', path=text_file)

print df.schema

Here is the traceback:

Traceback (most recent call last): 
File "/home/centos/main.py", line 16, in <module> 
df = sc.textFile(text_file).map(lambda line: (line.split(';')[0], line.split(';')[1])).collect() 
File "/usr/hdp/2.5.6.0-40/spark/python/lib/pyspark.zip/pyspark/context.py", line 474, in textFile 
File "/usr/hdp/2.5.6.0-40/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 804, in __call__ 
File "/usr/hdp/2.5.6.0-40/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 278, in get_command_part 
AttributeError: 'RDD' object has no attribute '_get_object_id' 

I am new to Spark, so it would be very helpful if someone could point out what I am doing wrong.

Answer


You cannot pass an RDD to the CSV reader. You should use the path directly:

df = sqlContext.load(source="com.databricks.spark.csv",
    header='true', path="hdfs:///path/to/sensordata20171008223515.csv")
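
On Spark 1.4 and later the same load can also be written with the DataFrameReader API. A minimal sketch, assuming the spark-csv package is on the classpath and using the same path as above:

df = sqlContext.read \
    .format("com.databricks.spark.csv") \
    .option("header", "true") \
    .load("hdfs:///path/to/sensordata20171008223515.csv")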

Only a limited number of formats (JSON in particular) accept an RDD as the input argument.
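
For example, the JSON reader accepts an RDD of JSON strings in addition to a path. A minimal sketch using the sc and sqlContext defined in the question:

# each record is a single JSON document stored as a string
json_rdd = sc.parallelize(['{"id": 1, "value": "a"}',
                           '{"id": 2, "value": "b"}'])
df = sqlContext.read.json(json_rdd)  # read.json() accepts an RDD, not just a path
df.show()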
