
PySpark: parsing YYYYMMDDHHMMSS into TimestampType() gives a "no attribute 'tzinfo'" error

I have this csv file (test.csv) which contains the following:


COLUMN_STRING;COLUMN_INT;COLUMN_TIMESTAMP 
String_Value_1;123456;20131226224757 
String_Value_2;234567;20141227234858 
String_Value_3;345678;20151228214555 

I am trying to import the timestamp in column 3 (YYYYMMDDHHMMSS) into a TimestampType() with the following code:


from pyspark.sql.types import * 
data = sc.textFile('test.csv')\ 
       .map(lambda s: s.split(";"))\ 
       .filter(lambda v: v[0] != 'COLUMN_STRING') \ 
       .map(lambda v: (v[0], int(v[1]), v[2])) 

schema = StructType([StructField('COLUMN_STRING',StringType(),False), 
      StructField('COLUMN_INT',IntegerType(),False), 
      StructField('COLUMN_TIMESTAMP',TimestampType(),False)]) 

df = sqlContext.createDataFrame(data, schema) 
df.take(2) 

When I run this with PySpark3 on a Spark 1.6.3 cluster (HDP 3.5), I get the error "'str' object has no attribute 'tzinfo'" ...

This is a simplified example; my real data set is very large, so modifying the source data is not an option. I need to find out how to import yyyyMMddHHmmss into TimestampType() without changing the original data.

I have listed the error below; it occurs while calling take().

Any help is appreciated. Thanks, MT


Error


Traceback (most recent call last): 
    File "/usr/hdp/current/spark-client/python/pyspark/sql/dataframe.py", line 306, in take 
    self._jdf, num) 
    File "/usr/hdp/current/spark-client/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__ 
    answer, self.gateway_client, self.target_id, self.name) 
    File "/usr/hdp/current/spark-client/python/pyspark/sql/utils.py", line 45, in deco 
    return f(*a, **kw) 
    File "/usr/hdp/current/spark-client/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value 
    format(target_id, ".", name), value) 
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.sql.execution.EvaluatePython.takeAndServe. 
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 10.0.0.13): org.apache.spark.api.python.PythonException: Traceback (most recent call last): 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main 
    process() 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process 
    serializer.dump_stream(func(split_index, iterator), outfile) 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream 
    vs = list(itertools.islice(iterator, batch)) 
    File "/usr/hdp/current/spark-client/python/pyspark/sql/types.py", line 541, in toInternal 
    return tuple(f.toInternal(v) for f, v in zip(self.fields, obj)) 
    File "/usr/hdp/current/spark-client/python/pyspark/sql/types.py", line 541, in <genexpr> 
    return tuple(f.toInternal(v) for f, v in zip(self.fields, obj)) 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/sql/types.py", line 435, in toInternal 
    return self.dataType.toInternal(obj) 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/sql/types.py", line 190, in toInternal 
    seconds = (calendar.timegm(dt.utctimetuple()) if dt.tzinfo 
AttributeError: 'str' object has no attribute 'tzinfo' 

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166) 
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207) 
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125) 
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

Driver stacktrace: 
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1433) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1421) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1420) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1420) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:801) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:801) 
    at scala.Option.foreach(Option.scala:236) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:801) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1642) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1601) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1590) 
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:622) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1831) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1844) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1857) 
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:212) 
    at org.apache.spark.sql.execution.EvaluatePython$$anonfun$takeAndServe$1.apply$mcI$sp(python.scala:126) 
    at org.apache.spark.sql.execution.EvaluatePython$$anonfun$takeAndServe$1.apply(python.scala:124) 
    at org.apache.spark.sql.execution.EvaluatePython$$anonfun$takeAndServe$1.apply(python.scala:124) 
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56) 
    at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2087) 
    at org.apache.spark.sql.execution.EvaluatePython$.takeAndServe(python.scala:124) 
    at org.apache.spark.sql.execution.EvaluatePython.takeAndServe(python.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) 
    at py4j.Gateway.invoke(Gateway.java:259) 
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) 
    at py4j.commands.CallCommand.execute(CallCommand.java:79) 
    at py4j.GatewayConnection.run(GatewayConnection.java:209) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last): 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main 
    process() 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process 
    serializer.dump_stream(func(split_index, iterator), outfile) 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream 
    vs = list(itertools.islice(iterator, batch)) 
    File "/usr/hdp/current/spark-client/python/pyspark/sql/types.py", line 541, in toInternal 
    return tuple(f.toInternal(v) for f, v in zip(self.fields, obj)) 
    File "/usr/hdp/current/spark-client/python/pyspark/sql/types.py", line 541, in <genexpr> 
    return tuple(f.toInternal(v) for f, v in zip(self.fields, obj)) 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/sql/types.py", line 435, in toInternal 
    return self.dataType.toInternal(obj) 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/sql/types.py", line 190, in toInternal 
    seconds = (calendar.timegm(dt.utctimetuple()) if dt.tzinfo 
AttributeError: 'str' object has no attribute 'tzinfo' 

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166) 
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207) 
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125) 
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    ... 1 more 

Answer


The schema should reflect the types of the data you actually have, so if you want to build the DataFrame from an RDD you should parse the value into the right type first:

.map(lambda v: (
    v[0], int(v[1]), datetime.datetime.strptime(v[2], "%Y%m%d%H%M%S")))
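
For reference, a minimal end-to-end sketch of the corrected pipeline, assuming the same test.csv layout and the pre-existing sc / sqlContext from the question (Spark 1.6). The timestamp string is parsed into a Python datetime before createDataFrame is called, so TimestampType() receives the type it expects:

import datetime
from pyspark.sql.types import (StructType, StructField, StringType,
                               IntegerType, TimestampType)

data = sc.textFile('test.csv') \
    .map(lambda s: s.split(";")) \
    .filter(lambda v: v[0] != 'COLUMN_STRING') \
    .map(lambda v: (v[0],
                    int(v[1]),
                    # parse the yyyyMMddHHmmss string into a Python datetime,
                    # which is what TimestampType() expects
                    datetime.datetime.strptime(v[2], "%Y%m%d%H%M%S")))

schema = StructType([StructField('COLUMN_STRING', StringType(), False),
                     StructField('COLUMN_INT', IntegerType(), False),
                     StructField('COLUMN_TIMESTAMP', TimestampType(), False)])

df = sqlContext.createDataFrame(data, schema)
df.take(2)

If declaring the column as a plain StringType() is acceptable, another option (not tested here) is to convert it afterwards with pyspark.sql.functions.unix_timestamp(col, "yyyyMMddHHmmss").cast("timestamp"), which keeps the parsing inside Spark SQL instead of Python.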