
I am trying to access S3 (via the s3a protocol) from pyspark (version 2.2.0), using Hadoop version 2.7.2, and I am running into some difficulties.

I am loading the Hadoop and AWS SDK packages:

pyspark --packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2 

Here is what my code looks like:

sc._jsc.hadoopConfiguration().set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem") 
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", AWS_ACCESS_KEY_ID) 
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", AWS_SECRET_ACCESS_KEY) 

rdd = sc.textFile('s3a://spark-test-project/large-file.csv') 
print(rdd.first()) 

And I get this:

Traceback (most recent call last): 
    File "<stdin>", line 1, in <module> 
    File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/rdd.py", line 1361, in first 
    rs = self.take(1) 
    File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/rdd.py", line 1313, in take 
    totalParts = self.getNumPartitions() 
    File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/rdd.py", line 385, in getNumPartitions 
    return self._jrdd.partitions().size() 
    File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__ 
    File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/sql/utils.py", line 63, in deco 
    return f(*a, **kw) 
    File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value 
py4j.protocol.Py4JJavaError: An error occurred while calling o34.partitions. 
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 32750D3DED4067BD, AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended Request ID: jAhO0tWTblPEUehF1Bul9WZj/9G7woaHFVxb8gzsOpekam82V/Rm9zLgdLDNsGZ6mPizGZmo6xI= 
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798) 
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421) 
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232) 
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528) 
    at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031) 
    at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994) 
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297) 
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669) 
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94) 
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703) 
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685) 
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373) 
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295) 
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258) 
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229) 
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315) 
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:194) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250) 
    at scala.Option.getOrElse(Option.scala:121) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250) 
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250) 
    at scala.Option.getOrElse(Option.scala:121) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250) 
    at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:61) 
    at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) 
    at py4j.Gateway.invoke(Gateway.java:280) 
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) 
    at py4j.commands.CallCommand.execute(CallCommand.java:79) 
    at py4j.GatewayConnection.run(GatewayConnection.java:214) 
    at java.lang.Thread.run(Thread.java:748) 

Is this a bug in the AWS Java SDK? I am new to Spark, so I don't know whether there is a way to get better logging information out of AWS than AWS Error Code: null.

Answers


For what it's worth, I have this line in my spark-defaults.conf file on AWS:

spark.jars.packages com.amazonaws:aws-java-sdk:1.11.99,org.apache.hadoop:hadoop-aws:2.7.2 
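
For context, here is a minimal sketch of how those pieces might fit together in a standalone PySpark script. The bucket path and credential values are placeholders, and it assumes the spark.jars.packages line above is already being picked up from spark-defaults.conf:

from pyspark.sql import SparkSession 

spark = SparkSession.builder.appName("s3a-read-test").getOrCreate() 

# Credentials could also come from environment variables or an IAM role; 
# setting them on the Hadoop configuration is just one option. 
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration() 
hadoop_conf.set("fs.s3a.access.key", "MY_ACCESS_KEY")   # placeholder 
hadoop_conf.set("fs.s3a.secret.key", "MY_SECRET_KEY")   # placeholder 

df = spark.read.csv("s3a://my_bucket/name/")   # hypothetical bucket/prefix 
df.show(5) 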

I also made sure that the security group I used when setting up my EC2 instances had access to S3.

After those two things, I have had no problems reading files from S3:

%pyspark 
df = spark.read.csv("s3a://my_bucket/name/") 

Also, if you are using AWS EMR, you should be able to access S3 out of the box:

%pyspark 
df = spark.read.csv("s3://my_bucket/name/") 

"Bad Request" is the dreaded message from S3, the one that means "this didn't work and we won't tell you why".

There is an entire section on troubleshooting S3A in the docs.

If your bucket is hosted in a region that only supports the S3 "v4" auth protocol (Frankfurt, London, Seoul), then you need to set the fs.s3a.endpoint field to that specific region's endpoint... the docs have the details.
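
As a hedged example, the endpoint can be set on the Hadoop configuration before reading; "s3.eu-central-1.amazonaws.com" (Frankfurt) is just one possible value, so check the S3A docs for your bucket's region:

# Point S3A at a v4-only region's endpoint (Frankfurt shown as an example). 
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com") 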

Otherwise, try using s3a://landsat-pds/scene_list.gz as the source. That is a public CSV file which needs no authentication. If you can't see it, then you are in serious trouble.
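
A minimal sanity check along those lines (assuming the landsat-pds object is still publicly available, as it was when this answer was written):

# Read the public scene list; textFile handles the gzip transparently. 
rdd = sc.textFile("s3a://landsat-pds/scene_list.gz") 
print(rdd.first()) 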


Hmm, I am running into the OP's trouble. When I try 's3a://landsat-pds/scene_list.gz' I get a 403 error. Serious trouble it is... – user1129682


https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md –