2016-09-23

Cross-region S3 access from Spark on AWS EMR: my EMR cluster is in us-west-1, but my S3 bucket is in us-east-1, and I am getting an error.

I tried s3://{bucketname}.s3.amazon.com, but this creates a new bucket named s3.amazon.com.

How can I access an S3 bucket across regions?

com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Moved Permanently (Service: Amazon S3; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: FB1139D9BD8F409B), S3 Extended Request ID: pWK3X9BBRp8BLlXEHOx008RCdlZC64YFTounDYGtnwsAneR0IDP1Z/gmDudRoqWhDArfYLNRxk4= 
    at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1389) 
    at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:902) 
    at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607) 
    at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376) 
    at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338) 
    at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287) 
    at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3826) 
    at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1015) 
    at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:991) 
    at com.amazon.ws.emr.hadoop.fs.s3n.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:212) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
    at com.sun.proxy.$Proxy38.retrieveMetadata(Unknown Source) 
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:780) 
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1428) 
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.exists(EmrFileSystem.java:313) 
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:85) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:60) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:58) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133) 
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114) 
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86) 
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86) 
    at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:487) 
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211) 
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:194) 
    at org.apache.spark.sql.DataFrameWriter.text(DataFrameWriter.scala:520) 

This used to be supported, but it seems to have changed recently in EMR. Accessing an S3 bucket in a different region no longer appears to be allowed. It also seems to affect historical AMIs, so this is a change in EMR itself rather than something specific to emr-5.0. –


Yes, we were doing cross-region S3 access with EMR 4.6, and this problem appeared when we upgraded to Spark 2.0 on EMR 5.0. I wish there were an explicit way to set a different region, perhaps by using `class InstanceProfileCredentialsProvider` or something similar... – codingtwinky


@JohnRotenstein This is problematic. I haven't run into this issue myself, but what are we supposed to do in this situation? Please don't tell me we have to use the S3 API to copy the data from one region to another just to be able to access it. What is even more absurd is that historical AMIs are affected. This is a huge regression. – eliasah

Answers


This solution worked for me on emr-5.0.0/emr-5.0.3:

Add the following property to the core-site configuration:

"fs.s3n.endpoint":"s3.amazonaws.com" 

Finally had some time to test this. It seems to be working for s3n, s3a, and s3. It may have been fixed in the recent EMR 5.1.0 release, but the release notes don't say. http://docs.aws.amazon.com/ElasticMapReduce/latest/ReleaseGuide/emr-whatsnew.html – codingtwinky


As suggested by @codingtwinky in the comments, EMR 4.6.0 does not have this problem in the emr.hadoop.fs layer. My Hadoop jobs now work on EMR 4.6.0, but not on 5.0.0 or 4.7.0.


I contacted the AWS support team; the TL;DR is that they are aware of the problem, they are currently working on it, and they hope to fix it in the next EMR release, but I have no ETA.

For "s3a", you can use custom S3 endpoints within Spark at runtime, but this does not work for "s3" or "s3n".
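One way such a runtime override might look, sketched under the assumption that the endpoint is passed through Spark's `spark.hadoop.*` property forwarding; the bucket region and endpoint value here are illustrative, not from the answer.

```python
# Hypothetical sketch: an s3a endpoint override expressed as Spark conf
# entries, which could be passed as spark-submit --conf flags. The
# "spark.hadoop." prefix forwards a property into the Hadoop configuration.
s3a_conf = {
    # Only honored for s3a:// URIs, not s3:// or s3n://
    "spark.hadoop.fs.s3a.endpoint": "s3.us-east-1.amazonaws.com",
}

# Render the equivalent spark-submit flags.
flags = " ".join(f"--conf {k}={v}" for k, v in s3a_conf.items())
print(flags)
```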

Alternatively, you can configure EMR to point at a different S3 region at cluster-creation time, but once configured that way you are stuck with that region.

According to the support team, this EMRFS region binding was introduced after EMR 4.7.2.