
PYSPARK: Error reading from RDD

I am trying to read data from an RDD, but I am getting an error. Please advise. The file exists in HDFS; I moved it there using the hadoop filesystem command.

Code:

baby_names = sc.textFile("/user/rahul/baby_names.csv") 

rows = baby_names.map(lambda line:line.split(",")) 

for row in rows.take(rows.count()):print(row[1]) 

Error:

Py4JJavaError        Traceback (most recent call last) 
<ipython-input-7-b9dcd91a9f1c> in <module>() 
----> 1 for row in rows.take(rows.count()):print(row[1]) 

/home/rahul/Hadoop/spark/python/pyspark/rdd.pyc in count(self) 
    1039   3 
    1040   """ 
-> 1041   return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum() 
    1042 
    1043  def stats(self): 

/home/rahul/Hadoop/spark/python/pyspark/rdd.pyc in sum(self) 
    1030   6.0 
    1031   """ 
-> 1032   return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add) 
    1033 
    1034  def count(self): 

/home/rahul/Hadoop/spark/python/pyspark/rdd.pyc in fold(self, zeroValue, op) 
    904   # zeroValue provided to each partition is unique from the one provided 
    905   # to the final reduce call 
--> 906   vals = self.mapPartitions(func).collect() 
    907   return reduce(op, vals, zeroValue) 
    908 

/home/rahul/Hadoop/spark/python/pyspark/rdd.pyc in collect(self) 
    807   """ 
    808   with SCCallSiteSync(self.context) as css: 
--> 809    port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd()) 
    810   return list(_load_from_socket(port, self._jrdd_deserializer)) 
    811 

/home/rahul/Hadoop/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args) 
    1131   answer = self.gateway_client.send_command(command) 
    1132   return_value = get_return_value(
-> 1133    answer, self.gateway_client, self.target_id, self.name) 
    1134 
    1135   for temp_arg in temp_args: 

/home/rahul/Hadoop/spark/python/pyspark/sql/utils.pyc in deco(*a, **kw) 
    61  def deco(*a, **kw): 
    62   try: 
---> 63    return f(*a, **kw) 
    64   except py4j.protocol.Py4JJavaError as e: 
    65    s = e.java_exception.toString() 

/home/rahul/Hadoop/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name) 
    317     raise Py4JJavaError(
    318      "An error occurred while calling {0}{1}{2}.\n". 
--> 319      format(target_id, ".", name), value) 
    320    else: 
    321     raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe. 
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/user/rahul/baby_names.csv 
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287) 
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229) 
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315) 
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250) 
    at scala.Option.getOrElse(Option.scala:121) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250) 
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250) 
    at scala.Option.getOrElse(Option.scala:121) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250) 
    at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:53) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250) 
    at scala.Option.getOrElse(Option.scala:121) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958) 
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362) 
    at org.apache.spark.rdd.RDD.collect(RDD.scala:934) 
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:453) 
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) 
    at py4j.Gateway.invoke(Gateway.java:280) 
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) 
    at py4j.commands.CallCommand.execute(CallCommand.java:79) 
    at py4j.GatewayConnection.run(GatewayConnection.java:214) 
    at java.lang.Thread.run(Thread.java:745) 

If any Spark configuration changes are needed anywhere along the way, please share.


org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/user/rahul/baby_names.csv – eliasah


Looks like you provided a local path - where did you put the file in HDFS? Post the "move" command you used to put this CSV file into HDFS. –
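As the comments point out, the path is being resolved against the local filesystem (file:/user/rahul/baby_names.csv) rather than HDFS. A minimal sketch of reading the file with an explicit hdfs:// URI instead; the scheme-only form assumes fs.defaultFS already points at the cluster's namenode, otherwise the full hdfs://<namenode>:<port>/... address would be needed:

# Hypothetical fix sketch: give Spark an explicit HDFS URI so the path is not
# resolved against the local filesystem. Assumes fs.defaultFS is set to the
# cluster's namenode; otherwise spell out hdfs://<namenode>:<port>/...
baby_names = sc.textFile("hdfs:///user/rahul/baby_names.csv")
rows = baby_names.map(lambda line: line.split(","))
print(rows.first())   # quick sanity check that the path resolves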

Answer


Why not use collect() if you want to read all the rows?

baby_names = sc.textFile("/user/rahul/baby_names.csv") 

rows = baby_names.map(lambda line:line.split(",")) \ 
       .filter(lambda line: len(line)>1) \ 
       .map(lambda line: (line[0],line[1])) 

for row in rows.collect():print(row) 

Or

no_rows = rows.count()
for row in rows.take(no_rows):
    print(row)

collect() - Return all the elements of the dataset as an array at the driver program. This is usually useful after a filter or other operation that returns a sufficiently small subset of the data.

count() - Return the number of elements in the dataset.

take(n) - Return an array with the first n elements of the dataset.
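For illustration, a small sketch of these three actions on an in-memory RDD (the example data here is made up):

# Tiny made-up dataset, just to show the behaviour of the three actions
data = sc.parallelize([("Emma", 100), ("Liam", 80), ("Olivia", 60)])

print(data.count())    # 3: number of elements in the RDD
print(data.take(2))    # list with the first two elements
print(data.collect())  # every element, brought back to the driver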