
I am running a Spark Streaming application on Spark 1.5.0. The application reads files from HDFS, converts each RDD to a DataFrame, and executes several queries on each DataFrame. The Spark Streaming application hits an OOM after running for about 24 hours.
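Simplified, the job is structured roughly like this (the path, schema, batch interval, and queries below are placeholders, not the real code):

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object StreamingJob {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("hdfs-streaming-queries")
        val ssc  = new StreamingContext(conf, Seconds(60)) // batch interval is a placeholder

        // Watch an HDFS directory for new files (path is a placeholder)
        val lines = ssc.textFileStream("hdfs:///data/incoming")

        lines.foreachRDD { rdd =>
          val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
          import sqlContext.implicits._

          // Convert the RDD to a DataFrame and run several queries on it
          val df = rdd.map(_.split(",")).map(a => (a(0), a(1))).toDF("key", "value")
          df.registerTempTable("events")
          sqlContext.sql("SELECT key, count(*) FROM events GROUP BY key").collect()
          sqlContext.sql("SELECT count(*) FROM events").collect()
        }

        ssc.start()
        ssc.awaitTermination()
      }
    }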

The application runs fine for about 24 hours and then crashes. The application master / driver logs show:

Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: GC overhead limit exceeded 
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
    at java.lang.Class.getDeclaredMethod(Class.java:2128)
    at java.io.ObjectStreamClass.getInheritableMethod(ObjectStreamClass.java:1442)
    at java.io.ObjectStreamClass.access$2200(ObjectStreamClass.java:72)
    at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:508)
    at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:472)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:472)
    at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:369)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1134)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509) 
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432) 
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178) 
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548) 
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509) 
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432) 
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178) 
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348) 
    at scala.collection.immutable.$colon$colon.writeObject(List.scala:379) 
    at sun.reflect.GeneratedMethodAccessor1511.invoke(Unknown Source) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:497) 
    at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1028) 
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496) 
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432) 
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178) 
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548) 
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509) 
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432) 
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178) 
Exception in thread "JobGenerator" java.lang.OutOfMemoryError: GC overhead limit exceeded 
    at java.util.zip.ZipCoder.getBytes(ZipCoder.java:80) 
    at java.util.zip.ZipFile.getEntry(ZipFile.java:310) 
    at java.util.jar.JarFile.getEntry(JarFile.java:240) 
    at sun.net.www.protocol.jar.URLJarFile.getEntry(URLJarFile.java:128) 
    at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:132) 
    at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:150) 
    at java.net.URLClassLoader.getResourceAsStream(URLClassLoader.java:238) 
    at java.lang.Class.getResourceAsStream(Class.java:2223) 
    at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:38) 
    at org.apache.spark.util.ClosureCleaner$.getInnerClosureClasses(ClosureCleaner.scala:81) 
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:187) 
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122) 
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2032) 
    at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:314) 
    at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:313) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.map(RDD.scala:313) 
    at org.apache.spark.streaming.dstream.MappedDStream$$anonfun$compute$1.apply(MappedDStream.scala:35) 
    at org.apache.spark.streaming.dstream.MappedDStream$$anonfun$compute$1.apply(MappedDStream.scala:35) 
    at scala.Option.map(Option.scala:145) 
    at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:35) 
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350) 
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350) 
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) 
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349) 
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349) 
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:399) 
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:344) 
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:342) 
    at scala.Option.orElse(Option.scala:257) 

I collected a driver heap dump, and it reports that the suspected memory leak is in org.apache.spark.sql.execution.ui.SQLListener.

In addition, on my application master URL I can see thousands of SQL tabs, e.g. SQL 1, SQL 2, ..., SQL 2000, and the number of these tabs keeps increasing.
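As a workaround I am considering capping how much completed-job/stage/SQL-execution state the driver retains for the UI, along these lines when building the SparkConf (assuming Spark 1.5.0 honors these settings; the values are placeholders):

    val conf = new SparkConf()
      .setAppName("hdfs-streaming-queries")
      // SQLListener trims old executions based on spark.sql.ui.retainedExecutions
      .set("spark.sql.ui.retainedExecutions", "50")
      // Bound retained job/stage UI data on a long-running driver
      .set("spark.ui.retainedJobs", "200")
      .set("spark.ui.retainedStages", "200")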

Does anyone know why these SQL tabs keep increasing, or have any suggestions for the GC exception? Thanks.


Can you add the code of your job? – maasg

