
Spark 2.0 S3 metadata load hangs on multiple DataFrame reads

We are currently evaluating an upgrade from Spark 1.6 to Spark 2.0, but we have hit a very strange bug that is blocking the migration.

One of our requirements is to read multiple datasets from S3 and union them together. Loading 50 datasets works without any problem, but on the load of the 51st dataset everything hangs while looking up metadata. The hang is not intermittent; it happens every time.

The data format is Avro containers, and we are using spark-avro 3.0.0.
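For context, the load-and-union pattern is roughly the sketch below (Scala). The bucket name, path layout, dataset count, and app name are placeholders for illustration, not our real values:

<<load-and-union sketch>>
import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder()
  .appName("multi-dataset-union")
  .getOrCreate()

// Hypothetical S3 paths; in the real job each path points to a directory of Avro container files.
val paths: Seq[String] = (1 to 60).map(i => s"s3://our-bucket/datasets/ds-$i")

// Each load() makes Spark resolve the relation, which checks metadata on S3
// (FileSystem.exists -> getObjectMetadata, as in the thread dump below).
// The hang shows up on the 51st of these loads.
val combined: DataFrame = paths
  .map(p => spark.read.format("com.databricks.spark.avro").load(p))
  .reduce(_ union _)

combined.count()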

Does anyone have an answer?

  • This is not related to the socket timeout issue; none of the socket threads are blocked.

<<main thread dump>> 
java.lang.Thread.sleep(Native Method) 
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.doPauseBeforeRetry(AmazonHttpClient.java:1475) 
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.pauseBeforeRetry(AmazonHttpClient.java:1439) 
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:794) 
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607) 
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376) 
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338) 
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287) 
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3826) 
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1015) 
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:991) 
com.amazon.ws.emr.hadoop.fs.s3n.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:212) 
sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source) 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
java.lang.reflect.Method.invoke(Method.java:498) 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
com.sun.proxy.$Proxy36.retrieveMetadata(Unknown Source) 
com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:780) 
org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1428) 
com.amazon.ws.emr.hadoop.fs.EmrFileSystem.exists(EmrFileSystem.java:313) 
org.apache.spark.sql.execution.datasources.DataSource.hasMetadata(DataSource.scala:289) 
org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:324) 
org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149) 
org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:132) 
