2014-05-21

s3distcp hangs after showing 100%

To try to work around performance issues with Amazon EMR, I'm using s3distcp to copy files from S3 to my EMR cluster for local processing. As a first test, I'm using the --groupBy option to copy one day's worth of data (2,160 files) from a single directory into one (or a few) files.

The job seems to run fine, showing map/reduce progress up to 100%, but at that point the process hangs and never returns. How can I figure out what's going on?

The source files are GZip text files stored in S3, about 30 KB each. This is a vanilla Amazon EMR cluster, and I'm running s3distcp from a shell on the master node.
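For context, --groupBy takes a regular expression, and all input files whose capture-group value is the same are concatenated into one output file. A minimal Python sketch of that grouping logic (the file names here are hypothetical, just to illustrate the regex from the command below):

```python
import re

# The same pattern passed to --groupBy: every file from 2014-05-20
# yields the capture-group key "20140520", so all of them collapse
# into a single output group.
pattern = re.compile(r".*(20140520).*")

files = [
    "click/20140520/part-0000.gz",
    "click/20140520/part-0001.gz",
    "click/20140521/part-0000.gz",  # different day -> no match, not grouped
]

groups = {}
for name in files:
    m = pattern.match(name)
    if m:
        groups.setdefault(m.group(1), []).append(name)

print(groups)  # one group, "20140520", containing the two matching files
```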

hadoop@ip-xxx:~$ hadoop jar /home/hadoop/lib/emr-s3distcp-1.0.jar --src s3n://xxx/click/20140520 --dest hdfs:////data/click/20140520 --groupBy ".*(20140520).*" --outputCodec lzo 
14/05/21 20:06:32 INFO s3distcp.S3DistCp: Running with args: [Ljava.lang.String;@26f3bbad 
14/05/21 20:06:35 INFO s3distcp.S3DistCp: Using output path 'hdfs:/tmp/9f423c59-ec3a-465e-8632-ae449d45411a/output' 
14/05/21 20:06:35 INFO s3distcp.S3DistCp: GET http://169.254.169.254/latest/meta-data/placement/availability-zone result: us-west-2b 
14/05/21 20:06:35 INFO s3distcp.S3DistCp: Created AmazonS3Client with conf KeyId AKIAJ5KT6QSV666K6KHA 
14/05/21 20:06:37 INFO s3distcp.FileInfoListing: Opening new file: hdfs:/tmp/9f423c59-ec3a-465e-8632-ae449d45411a/files/1 
14/05/21 20:06:38 INFO s3distcp.S3DistCp: Created 1 files to copy 2160 files 
14/05/21 20:06:38 INFO mapred.JobClient: Default number of map tasks: null 
14/05/21 20:06:38 INFO mapred.JobClient: Setting default number of map tasks based on cluster size to : 72 
14/05/21 20:06:38 INFO mapred.JobClient: Default number of reduce tasks: 3 
14/05/21 20:06:39 INFO security.ShellBasedUnixGroupsMapping: add hadoop to shell userGroupsCache 
14/05/21 20:06:39 INFO mapred.JobClient: Setting group to hadoop 
14/05/21 20:06:39 INFO mapred.FileInputFormat: Total input paths to process : 1 
14/05/21 20:06:39 INFO mapred.JobClient: Running job: job_201405211343_0031 
14/05/21 20:06:40 INFO mapred.JobClient: map 0% reduce 0% 
14/05/21 20:06:53 INFO mapred.JobClient: map 1% reduce 0% 
14/05/21 20:06:56 INFO mapred.JobClient: map 4% reduce 0% 
14/05/21 20:06:59 INFO mapred.JobClient: map 36% reduce 0% 
14/05/21 20:07:00 INFO mapred.JobClient: map 44% reduce 0% 
14/05/21 20:07:02 INFO mapred.JobClient: map 54% reduce 0% 
14/05/21 20:07:05 INFO mapred.JobClient: map 86% reduce 0% 
14/05/21 20:07:06 INFO mapred.JobClient: map 94% reduce 0% 
14/05/21 20:07:08 INFO mapred.JobClient: map 100% reduce 10% 
14/05/21 20:07:11 INFO mapred.JobClient: map 100% reduce 19% 
14/05/21 20:07:14 INFO mapred.JobClient: map 100% reduce 27% 
14/05/21 20:07:17 INFO mapred.JobClient: map 100% reduce 29% 
14/05/21 20:07:20 INFO mapred.JobClient: map 100% reduce 100% 
[hangs here] 

The job shows as:

hadoop@ip-xxx:~$ hadoop job -list 
1 job currently running 
JobId State StartTime  UserName  Priority  SchedulingInfo 
job_201405211343_0031 1  1400702799339 hadoop NORMAL NA 

And there is nothing in the destination HDFS directory:

hadoop@ip-xxx:~$ hadoop dfs -ls /data/click/ 

Any ideas?


Are you sure it never comes back, or does it just do the first batch quickly and then take forever on the rest? That's what I noticed. – gae123

Answers


hadoop@ip-xxx:~$ hadoop jar /home/hadoop/lib/emr-s3distcp-1.0.jar --src s3n://xxx/click/20140520/ --dest hdfs:////data/click/20140520/ --groupBy ".*(20140520).*" --outputCodec lzo

I was facing the same problem. All I needed was to add an extra slash at the end of the directories (note the trailing slashes on --src and --dest above). With that, the job completed and printed its stats, whereas previously it hung at 100%.
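The answer doesn't explain why the trailing slash matters. One plausible factor (an assumption on my part, not confirmed in the thread) is that S3 listings are prefix-based, so a prefix without a trailing slash can also match sibling keys that merely start with the same characters. A Python sketch with made-up keys:

```python
# S3 has no real directories; a listing matches keys by string prefix.
# Without the trailing slash, the prefix "click/20140520" also matches
# keys that only share those characters. (Hypothetical keys below.)
keys = [
    "click/20140520/part-0000.gz",
    "click/20140520/part-0001.gz",
    "click/20140520.manifest",   # not inside the "directory"
]

without_slash = [k for k in keys if k.startswith("click/20140520")]
with_slash = [k for k in keys if k.startswith("click/20140520/")]

print(len(without_slash), len(with_slash))  # the slash excludes the stray key
```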


Does --outputCodec lzo produce indexed LZO compression (which can be split into multiple input splits when running a Hadoop job)? –


Use s3:// instead of s3n.

hadoop jar /home/hadoop/lib/emr-s3distcp-1.0.jar --src s3://xxx/click/20140520 --dest hdfs:////data/click/20140520 --groupBy ".*(20140520).*" --outputCodec lzo