
Hi, this "GC overhead limit exceeded" error is driving me crazy. I have 20 executors using 25 GB each, and I simply don't understand how it can exceed the GC overhead; my dataset isn't that large either. Once this GC error happens on one executor, that executor is lost, and the other executors are slowly lost as well with IOException, "RPC client disassociated", and "shuffle not found" errors. Please help me with this problem; I'm new to Spark, so this is very frustrating. Thanks in advance. In short: Spark executors are being lost due to "GC overhead limit exceeded" even with 20 executors at 25 GB each.
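For context, the resource side of the job looks roughly like this (a sketch only; the application name is a placeholder and everything except the executor count and memory is an assumption):

    // Sketch of the resource configuration described above (Spark 1.x on YARN assumed).
    // Only the executor count and memory come from the post; the rest is illustrative.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("my-job")                    // placeholder application name
      .set("spark.executor.instances", "20")   // 20 executors
      .set("spark.executor.memory", "25g")     // 25 GB heap per executor
    val sc = new SparkContext(conf)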

WARN scheduler.TaskSetManager: Lost task 7.0 in stage 363.0 (TID 3373, myhost.com): java.lang.OutOfMemoryError: GC overhead limit exceeded 
      at org.apache.spark.sql.types.UTF8String.toString(UTF8String.scala:150) 
      at org.apache.spark.sql.catalyst.expressions.GenericRow.getString(rows.scala:120) 
      at org.apache.spark.sql.columnar.STRING$.actualSize(ColumnType.scala:312) 
      at org.apache.spark.sql.columnar.compression.DictionaryEncoding$Encoder.gatherCompressibilityStats(compressionSchemes.scala:224) 
      at org.apache.spark.sql.columnar.compression.CompressibleColumnBuilder$class.gatherCompressibilityStats(CompressibleColumnBuilder.scala:72) 
      at org.apache.spark.sql.columnar.compression.CompressibleColumnBuilder$class.appendFrom(CompressibleColumnBuilder.scala:80) 
      at org.apache.spark.sql.columnar.NativeColumnBuilder.appendFrom(ColumnBuilder.scala:87) 
      at org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:148) 
      at org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:124) 
      at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:277) 
      at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171) 
      at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78) 
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:242) 
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35) 
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277) 
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:244) 
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35) 
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277) 
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:244) 
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35) 
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277) 
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:244) 
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35) 
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277) 
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:244) 
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35) 
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277) 
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:244) 
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70) 
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
      at org.apache.spark.scheduler.Task.run(Task.scala:70) 
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213) 

Answer


A "GC overhead limit exceeded" error is thrown when the JVM spends more than 98% of its time doing garbage collection while recovering less than about 2% of the heap. This can happen when Scala code relies on immutable data structures, because every transformation creates a large number of new objects and leaves the previous ones on the heap for the GC to remove. So if this is your problem, try switching the hot parts of your code to mutable data structures.
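Roughly what this means in code (a made-up sketch, not your job):

    // Building a large result with an immutable collection: every :+ returns a
    // new Vector, so each step allocates objects that quickly become garbage.
    val immutableWay = (1 to 1000000).foldLeft(Vector.empty[String]) { (acc, i) =>
      acc :+ i.toString
    }

    // A mutable buffer grows in place and produces far fewer short-lived
    // objects; it is copied once into an immutable result at the end.
    val buffer = scala.collection.mutable.ArrayBuffer.empty[String]
    for (i <- 1 to 1000000) buffer += i.toString
    val mutableWay = buffer.toVector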

Please read this page, http://spark.apache.org/docs/latest/tuning.html#garbage-collection-tuning, to learn how to tune the GC.
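For example, you can turn on GC logging on the executors and experiment with a different collector (a minimal sketch in the spirit of that guide; the flag values are assumptions, not tuned recommendations):

    // Enable GC logging on the executors and try the G1 collector.
    // Inspect the GC logs before settling on a collector or heap sizes.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .set("spark.executor.extraJavaOptions",
           "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseG1GC")
    val sc = new SparkContext(conf)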