2012-04-29 14 views

Mahout on Elastic MapReduce: Java heap space

I'm running Mahout 0.6 from the command line on an Amazon Elastic MapReduce cluster, trying to canopy-cluster ~1500 short documents, and the job keeps failing with an "Error: Java heap space" message.

Based on previous questions here and elsewhere, I've cranked up every memory knob I could find:

  • conf/hadoop-env.sh: raised every heap setting, up to 1.5GB on the small instances and even 4GB on the large instances.

  • conf/mapred-site.xml: added the mapred.{map,reduce}.child.java.opts properties and set their values to -Xmx4000m.

  • $MAHOUT_HOME/bin/mahout: increased JAVA_HEAP_MAX and set MAHOUT_HEAPSIZE to 6GB (on the large instances).
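For reference, the mapred-site.xml change described in the second bullet would look something like this (pre-Hadoop-2 property names; the -Xmx value is the one I used and is illustrative):

```xml
<!-- conf/mapred-site.xml: raise the child-JVM heap for map and reduce tasks -->
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx4000m</value>
</property>
<property>
  <name>mapred.reduce.child.java.opts</name>
  <value>-Xmx4000m</value>
</property>
```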

And the problem persists. I've been banging my head against this for too long now; does anyone have any suggestions?

The full command and output look like this (run on a cluster of large instances, in the hope that that would alleviate the problem):

[email protected]:~$ mahout-distribution-0.6/bin/mahout canopy -i sparse-data/2010/tf-vectors -o canopy-out/2010 -dm org.apache.mahout.common.distance.TanimotoDistanceMeasure -ow -t1 0.5 -t2 0.005 -cl 
run with heapsize 6000 
-Xmx6000m 
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath. 
Running on hadoop, using HADOOP_HOME=/home/hadoop 
No HADOOP_CONF_DIR set, using /home/hadoop/conf 
MAHOUT-JOB: /home/hadoop/mahout-distribution-0.6/mahout-examples-0.6-job.jar 
12/04/29 19:50:23 INFO common.AbstractJob: Command line arguments: {--clustering=null, --distanceMeasure=org.apache.mahout.common.distance.TanimotoDistanceMeasure, --endPhase=2147483647, --input=sparse-data/2010/tf-vectors, --method=mapreduce, --output=canopy-out/2010, --overwrite=null, --startPhase=0, --t1=0.5, --t2=0.005, --tempDir=temp} 
12/04/29 19:50:24 INFO common.HadoopUtil: Deleting canopy-out/2010 
12/04/29 19:50:24 INFO canopy.CanopyDriver: Build Clusters Input: sparse-data/2010/tf-vectors Out: canopy-out/2010 Measure: [email protected]8 t1: 0.5 t2: 0.0050 
12/04/29 19:50:24 INFO mapred.JobClient: Default number of map tasks: null 
12/04/29 19:50:24 INFO mapred.JobClient: Setting default number of map tasks based on cluster size to : 24 
12/04/29 19:50:24 INFO mapred.JobClient: Default number of reduce tasks: 1 
12/04/29 19:50:25 INFO mapred.JobClient: Setting group to hadoop 
12/04/29 19:50:25 INFO input.FileInputFormat: Total input paths to process : 1 
12/04/29 19:50:25 INFO mapred.JobClient: Running job: job_201204291846_0004 
12/04/29 19:50:26 INFO mapred.JobClient: map 0% reduce 0% 
12/04/29 19:50:45 INFO mapred.JobClient: map 27% reduce 0% 
[ ... Continues fine until... ] 
12/04/29 20:05:54 INFO mapred.JobClient: map 100% reduce 99% 
12/04/29 20:06:12 INFO mapred.JobClient: map 100% reduce 0% 
12/04/29 20:06:20 INFO mapred.JobClient: Task Id : attempt_201204291846_0004_r_000000_0, Status : FAILED 
Error: Java heap space 
12/04/29 20:06:41 INFO mapred.JobClient: map 100% reduce 33% 
12/04/29 20:06:44 INFO mapred.JobClient: map 100% reduce 68% 
[.. REPEAT SEVERAL ITERATIONS, UNTIL...] 
12/04/29 20:37:58 INFO mapred.JobClient: map 100% reduce 0% 
12/04/29 20:38:09 INFO mapred.JobClient: Job complete: job_201204291846_0004 
12/04/29 20:38:09 INFO mapred.JobClient: Counters: 23 
12/04/29 20:38:09 INFO mapred.JobClient: Job Counters 
12/04/29 20:38:09 INFO mapred.JobClient: Launched reduce tasks=4 
12/04/29 20:38:09 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=94447 
12/04/29 20:38:09 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0 
12/04/29 20:38:09 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0 
12/04/29 20:38:09 INFO mapred.JobClient: Rack-local map tasks=1 
12/04/29 20:38:09 INFO mapred.JobClient: Launched map tasks=1 
12/04/29 20:38:09 INFO mapred.JobClient: Failed reduce tasks=1 
12/04/29 20:38:09 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=23031 
12/04/29 20:38:09 INFO mapred.JobClient: FileSystemCounters 
12/04/29 20:38:09 INFO mapred.JobClient: HDFS_BYTES_READ=24100612 
12/04/29 20:38:09 INFO mapred.JobClient: FILE_BYTES_WRITTEN=49399745 
12/04/29 20:38:09 INFO mapred.JobClient: File Input Format Counters 
12/04/29 20:38:09 INFO mapred.JobClient: Bytes Read=24100469 
12/04/29 20:38:09 INFO mapred.JobClient: Map-Reduce Framework 
12/04/29 20:38:09 INFO mapred.JobClient: Map output materialized bytes=49374728 
12/04/29 20:38:09 INFO mapred.JobClient: Combine output records=0 
12/04/29 20:38:09 INFO mapred.JobClient: Map input records=409 
12/04/29 20:38:09 INFO mapred.JobClient: Physical memory (bytes) snapshot=2785939456 
12/04/29 20:38:09 INFO mapred.JobClient: Spilled Records=409 
12/04/29 20:38:09 INFO mapred.JobClient: Map output bytes=118596530 
12/04/29 20:38:09 INFO mapred.JobClient: CPU time spent (ms)=83190 
12/04/29 20:38:09 INFO mapred.JobClient: Total committed heap usage (bytes)=2548629504 
12/04/29 20:38:09 INFO mapred.JobClient: Virtual memory (bytes) snapshot=4584386560 
12/04/29 20:38:09 INFO mapred.JobClient: Combine input records=0 
12/04/29 20:38:09 INFO mapred.JobClient: Map output records=409 
12/04/29 20:38:09 INFO mapred.JobClient: SPLIT_RAW_BYTES=143 
Exception in thread "main" java.lang.InterruptedException: Canopy Job failed processing sparse-data/2010/tf-vectors 
at org.apache.mahout.clustering.canopy.CanopyDriver.buildClustersMR(CanopyDriver.java:349) 
at org.apache.mahout.clustering.canopy.CanopyDriver.buildClusters(CanopyDriver.java:236) 
at org.apache.mahout.clustering.canopy.CanopyDriver.run(CanopyDriver.java:145) 
at org.apache.mahout.clustering.canopy.CanopyDriver.run(CanopyDriver.java:109) 
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) 
at org.apache.mahout.clustering.canopy.CanopyDriver.main(CanopyDriver.java:61) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
at java.lang.reflect.Method.invoke(Method.java:597) 
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68) 
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139) 
at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:188) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
at java.lang.reflect.Method.invoke(Method.java:597) 
at org.apache.hadoop.util.RunJar.main(RunJar.java:156) 

Answers


Your local Hadoop configuration has nothing to do with how EMR runs, and those environment variables won't exist there. You have to configure EMR itself, and some of these settings have no equivalent there. For example, your workers' memory depends on the instance type you request.

The error doesn't necessarily indicate a memory problem. EMR's wait for completion was interrupted for some reason. Did the job itself fail?


Thanks for the reply! To clarify: the configuration I changed was all on the EMR master node, not on my local machine. The same clustering works on smaller variations of the same dataset; any ideas what besides a memory problem could be causing this? –


Are you running your own cluster? In any case, I don't see that memory is the problem. The runner kept failing to complete while waiting on a job in which nothing was happening. Have you checked the worker logs? –


The log does mention an OOM, Sean; we can see the task that failed at 12/04/29 20:06:20, and I assume all the retries failed too. I'm not familiar with canopy, but what decides on the 4 reduce tasks? –


Under normal circumstances, you can increase the memory allocated to map/reduce child tasks by setting "mapred.map.child.java.opts" and/or "mapred.reduce.child.java.opts" to something like "-Xmx3g".
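On a cluster where you control job submission, these can also be passed per job via Hadoop's generic -D options, since Mahout's drivers run through ToolRunner, which parses them. A sketch (paths and values illustrative):

```shell
# Per-job override of the child-JVM heap (pre-Hadoop-2 property names).
# Generic -D options must come before the tool's own arguments.
hadoop jar mahout-examples-0.6-job.jar \
  org.apache.mahout.clustering.canopy.CanopyDriver \
  -Dmapred.map.child.java.opts=-Xmx3g \
  -Dmapred.reduce.child.java.opts=-Xmx3g \
  -i sparse-data/2010/tf-vectors -o canopy-out/2010
```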

However, when you run on AWS you have less direct control over these settings. Amazon provides a mechanism for configuring an EMR cluster at launch time, called "bootstrap actions".

For memory-intensive workflows (i.e., anything Mahout :) ), check out the "MemoryIntensive" bootstrap action.

http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/Bootstrap.html#PredefinedBootstrapActions_MemoryIntensive
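If you launch the cluster with the elastic-mapreduce command-line client, the predefined action can be attached at startup. A sketch, assuming the S3 path documented in the guide linked above (verify it against the current documentation):

```shell
# Launch an EMR cluster with the predefined memory-intensive bootstrap action
elastic-mapreduce --create --alive \
  --instance-type m1.large --num-instances 4 \
  --bootstrap-action \
    s3://elasticmapreduce/bootstrap-actions/configurations/latest/memory-intensive \
  --bootstrap-name "Configure memory-intensive workloads"
```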


This was the right answer; it helped a lot. Thanks! –
