
Spark job gives NullPointerException on Amazon EMR

Goal: I am trying to run a query on a parquet file [read from S3] and then write the result out as a single tab-separated text file in another S3 bucket. All of this is done in a Spark application running on an EMR cluster on Amazon.

I have read other similar questions on StackOverflow, but to no avail.

//Read parquet file
final Dataset<Row> df = spark.read().parquet(fileName);

//Register temp table for convenience
df.registerTempTable(tableName);

//Valid SQL query string - verified using Databricks on the same parquet file
final String queryString = //SQL String

//Result of running sql query on parquet file
final Dataset<Row> result = spark.sql(queryString);

//Check if result is null - result is NOT NULL

final JavaRDD<Row> rowJavaRDD = result.toJavaRDD();

//Check if rowJavaRDD is null - rowJavaRDD is NOT NULL

//Coalesce and write to a text file
rowJavaRDD.map(r -> r.mkString("\t")).coalesce(1).saveAsTextFile(savePath);

But I am getting an NPE on the line rowJavaRDD.map(r -> r.mkString("\t")).coalesce(1).saveAsTextFile(savePath)

I have also tried using broadcast, but that did not help. I tried without the coalesce as well, but still got the same error.

I have checked that savePath is valid and that there are no S3 permission issues.
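As an aside (an addition of mine, not from the original post), the same tab-separated output can be produced without converting to a JavaRDD at all, using the Dataset writer's CSV support with a tab separator; a minimal sketch, assuming all columns are simple scalar types:

//Sketch of an alternative (not in the original post): write the query
//result directly as tab-separated text via the CSV writer, skipping the
//JavaRDD conversion and saveAsTextFile entirely.
result.coalesce(1)
      .write()
      .option("sep", "\t") //tab as the field separator instead of comma
      .csv(savePath);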

I tried doing the same thing locally in ./spark-shell with Scala, using the same parquet file, and it worked just fine :/

Running Spark 2.0.2 on the EMR cluster. [Version 2.1 gives a ClassCastException on something else, so updating is a bigger hassle.]

Any help would be appreciated.

The stack trace is below:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 7 in stage 38.0 failed 4 times, most recent failure: Lost task 
7.3 in stage 38.0 (TID 13586, ip-10-30-1-150.ec2.internal): java.lang.NullPointerException 
     at org.apache.spark.sql.execution.vectorized.OnHeapColumnVector.getInt(OnHeapColumnVector.java:231) 
     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source) 
     at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) 
     at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370) 
     at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
     at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
     at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
     at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply$mcV$sp(PairRDDFunctions.scala:1203) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1203) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1203) 
     at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1211) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1190) 
     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70) 
     at org.apache.spark.scheduler.Task.run(Task.scala:86) 
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
     at java.lang.Thread.run(Thread.java:745) 

Driver stacktrace: 
     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454) 
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442) 
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441) 
     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) 
     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441) 
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811) 
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811) 
     at scala.Option.foreach(Option.scala:257) 
     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811) 
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1667) 
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622) 
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611) 
     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 
     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632) 
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1873) 
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1886) 
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1906) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1219) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1161) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1161) 
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) 
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:358) 
     at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1161) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1064) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1030) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1030) 
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) 
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:358) 
     at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1030) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:956) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:956) 
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:956) 
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) 
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:358) 
     at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:955) 
     at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply$mcV$sp(RDD.scala:1459) 
     at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1438) 
     at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1438) 
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) 
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:358) 
     at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1438) 
     at org.apache.spark.api.java.JavaRDDLike$class.saveAsTextFile(JavaRDDLike.scala:549) 
     at org.apache.spark.api.java.AbstractJavaRDDLike.saveAsTextFile(JavaRDDLike.scala:45) 

Related links

1) NPE saveAsTextFile

2) RDD as textfile

Answers


As the first related link you provided yourself mentions (NPE saveAsTextFile), the fact that the exception occurs when you call saveAsTextFile() does not mean the problem has anything to do with that line, because all of the transformations performed up to that point are executed lazily.
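To make the laziness point concrete, forcing an action at each step narrows down which transformation actually triggers the NPE; a debugging sketch (reusing the variable names from the question):

//Actions force the lazy plan to execute, so running them step by step
//shows which stage really throws the NPE.
df.show();      //materializes a sample of the raw parquet scan
result.show();  //materializes a sample of the SQL query result
result.count(); //runs the full query without the text-file write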

A quick Google search for "OnHeapColumnVector NullPointerException" turned up SPARK-16518 for me. It looks like this is an issue that has existed since 2.0.0 and has still not been resolved.
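Since the NPE originates in the vectorized parquet reader, one workaround worth trying (a suggestion of mine, not part of the original answer) is to disable vectorized reading through the spark.sql.parquet.enableVectorizedReader setting, which is available in Spark 2.0:

//Possible workaround (untested on this particular file): fall back to the
//non-vectorized parquet reader so OnHeapColumnVector is never used.
spark.conf().set("spark.sql.parquet.enableVectorizedReader", "false");
final Dataset<Row> df = spark.read().parquet(fileName);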

Comment from the asker: I have used the same commands on other parquet files and they worked fine. It is only with this file that I get the NPE. I downloaded the troublesome parquet file locally and tried the same commands, and it worked fine :/

Another answer:

It looks like org.apache.spark.sql.execution.vectorized.OnHeapColumnVector.getInt is hitting a null column (due to a bug). Check whether int32 appears in the schema of your parquet file. Another flavor of this issue is described here: https://issues.apache.org/jira/browse/HIVE-14294

Also, please share the schema of the parquet file.
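A quick way to inspect (and share) that schema is printSchema() on the loaded Dataset; a minimal sketch:

//Print the parquet schema to stdout; look for int32/IntegerType columns.
spark.read().parquet(fileName).printSchema();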
