
I have given an entire folder as the input to my MR job, and I am getting a "Java heap space" OutOfMemoryError while running the MapReduce program.

I am using CombineFileBinaryInputFormat (which extends CombineFileInputFormat) as the input format for my MR job. Because my block size is 250 MB, I call setMaxSplitSize(262144000) in the CombineFileBinaryInputFormat constructor. The files are split at packet boundaries; should I add a check somewhere to verify that the 250 MB limit is not exceeded, or is that enforced implicitly? The complete code is available here.

However, I am hitting the "Java heap space" error while running the MapReduce program.

The relevant part of the code, for reference:

public class CombineBinaryInputFormat extends CombineFileInputFormat<KeyWritable, ValueWritable> {

    public CombineBinaryInputFormat() {
        super();
        // 262144000 bytes = 250 MB, matching the block size described above
        setMaxSplitSize(262144000);
    }
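For context, the same 250 MB cap could equivalently be supplied through configuration. The sketch below is a hypothetical driver (the class and job name are mine, not from the project) using the standard mapreduce.input.fileinputformat.split.maxsize property, which CombineFileInputFormat falls back to when setMaxSplitSize() has not been called:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class CombineJobDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // 262144000 bytes = 250 MB; CombineFileInputFormat reads this
            // property only when setMaxSplitSize() has not been called.
            conf.setLong("mapreduce.input.fileinputformat.split.maxsize", 262144000L);
            Job job = Job.getInstance(conf, "pcap-combine"); // hypothetical job name
            job.setInputFormatClass(CombineBinaryInputFormat.class);
        }
    }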

My StackTrace: 
============== 
    15/05/05 11:52:47 INFO input.FileInputFormat: Total input paths to process : 318 
    15/05/05 11:52:47 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 52027734 
    15/05/05 11:52:47 INFO mapreduce.JobSubmitter: number of splits:1 
    15/05/05 11:52:47 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local634564612_0001 
    15/05/05 11:52:47 WARN conf.Configuration: file:/app/hadoop/tmp/mapred/staging/raghuveer634564612/.staging/job_local634564612_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring. 
    15/05/05 11:52:47 WARN conf.Configuration: file:/app/hadoop/tmp/mapred/staging/raghuveer634564612/.staging/job_local634564612_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring. 
    15/05/05 11:52:48 WARN conf.Configuration: file:/var/hadoop/mapreduce/localRunner/raghuveer/job_local634564612_0001/job_local634564612_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring. 
    15/05/05 11:52:48 WARN conf.Configuration: file:/var/hadoop/mapreduce/localRunner/raghuveer/job_local634564612_0001/job_local634564612_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring. 
    15/05/05 11:52:48 INFO mapreduce.Job: The url to track the job: http://localhost:8080/ 
    15/05/05 11:52:48 INFO mapreduce.Job: Running job: job_local634564612_0001 
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: OutputCommitter set in config null 
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: Waiting for map tasks 
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: Starting task: attempt_local634564612_0001_m_000000_0 
    15/05/05 11:52:48 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 
    15/05/05 11:52:48 INFO mapred.MapTask: Processing split: Paths:/user/usr/local/upload/20120713T07-45-42.682358000Z_79.150.138.86-1412.c2s_ndttrace:0+78550,/user/usr/local/upload/20120713T07-45-43.356723000Z_151.40.240.66-53426.c2s_ndttrace:0+32768,/user/usr/local/upload/20120713T07-45-43.718556000Z_85.26.235.102-25300.c2s_ndttrace:0+10130,/user/usr/local/upload 
     ..... 
     ..... 
     ..... 
/20120713T08-33-41.259331000Z_84.122.129.103-61321.c2s_ndttrace:0+19148,/user/usr/local/upload/20120713T08-33-54.972649000Z_86.69.144.214-49599.c2s_ndttrace:0+63014,/user/usr/local/upload/20120713T08-33-56.162340000Z_41.143.91.156-50785.c2s_ndttrace:0+13658,/user/usr/local/upload/20120713T08-33-59.768261000Z_31.187.12.141-50274.c2s_ndttrace:0+126542,/user/usr/local/upload/20120713T08-34-03.950055000Z_78.119.172.109-51495.c2s_ndttrace:0+92676,/user/usr/local/upload/20120713T08-34-08.378534000Z_87.7.113.115-62238.c2s_ndttrace:0+49410,/user/usr/local/upload/20120713T08-34-26.258570000Z_151.13.227.66-33198.c2s_ndttrace:0+2666092 
    15/05/05 11:52:49 INFO mapreduce.Job: Job job_local634564612_0001 running in uber mode : false 
    15/05/05 11:52:49 INFO mapreduce.Job: map 0% reduce 0% 
    15/05/05 11:52:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 
    15/05/05 11:52:53 INFO mapred.MapTask: (EQUATOR) 0 kvi 78643196(314572784) 
    15/05/05 11:52:53 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 300 
    15/05/05 11:52:53 INFO mapred.MapTask: soft limit at 251658240 
    15/05/05 11:52:53 INFO mapred.MapTask: bufstart = 0; bufvoid = 314572800 
    15/05/05 11:52:53 INFO mapred.MapTask: kvstart = 78643196; length = 19660800 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (82) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (82) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:55 INFO mapred.MapTask: Starting flush of map output 
    15/05/05 11:52:55 INFO mapred.MapTask: Spilling map output 
    15/05/05 11:52:55 INFO mapred.MapTask: bufstart = 0; bufend = 105296; bufvoid = 314572800 
    15/05/05 11:52:55 INFO mapred.MapTask: kvstart = 78643196(314572784); kvend = 78637988(314551952); length = 5209/19660800 
    15/05/05 11:52:55 INFO mapred.LocalJobRunner: map > map 
    15/05/05 11:52:55 INFO mapred.MapTask: Finished spill 0 
    15/05/05 11:52:55 INFO mapred.LocalJobRunner: map task executor complete. 
    15/05/05 11:52:55 WARN mapred.LocalJobRunner: job_local634564612_0001 
    java.lang.Exception: java.lang.OutOfMemoryError: Java heap space 
     at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) 
     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) 
    Caused by: java.lang.OutOfMemoryError: Java heap space 
     at net.ripe.hadoop.pcap.PcapReader.nextPacket(PcapReader.java:208) 
     at net.ripe.hadoop.pcap.PcapReader.access$0(PcapReader.java:173) 
     at net.ripe.hadoop.pcap.PcapReader$PacketIterator.fetchNext(PcapReader.java:554) 
     at net.ripe.hadoop.pcap.PcapReader$PacketIterator.hasNext(PcapReader.java:559) 
     at net.ripe.hadoop.pcap.io.reader.PcapRecordReader.nextKeyValue(PcapRecordReader.java:57) 
     at net.ripe.hadoop.pcap.io.reader.CombineBinaryRecordReader.nextKeyValue(CombineBinaryRecordReader.java:42) 
     at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:69) 
     at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:533) 
     at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80) 
     at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91) 
     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) 
     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) 
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) 
     at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) 
     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
     at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
     at java.lang.Thread.run(Thread.java:745) 
    15/05/05 11:52:56 INFO mapreduce.Job: Job job_local634564612_0001 failed with state FAILED due to: NA 
    15/05/05 11:52:56 INFO mapreduce.Job: Counters: 25 
     File System Counters 
      FILE: Number of bytes read=29002348 
      FILE: Number of bytes written=29450636 
      FILE: Number of read operations=0 
      FILE: Number of large read operations=0 
      FILE: Number of write operations=0 
      HDFS: Number of bytes read=103142 
      HDFS: Number of bytes written=0 
      HDFS: Number of read operations=6 
      HDFS: Number of large read operations=0 
      HDFS: Number of write operations=1 
     Map-Reduce Framework 
      Map input records=1303 
      Map output records=1303 
      Map output bytes=105296 
      Map output materialized bytes=0 
      Input split bytes=38078 
      Combine input records=0 
      Spilled Records=0 
      Failed Shuffles=0 
      Merged Map outputs=0 
      GC time elapsed (ms)=593 
      CPU time spent (ms)=0 
      Physical memory (bytes) snapshot=0 
      Virtual memory (bytes) snapshot=0 
      Total committed heap usage (bytes)=1745092608 
     File Input Format Counters 
      Bytes Read=0 

Here I am feeding a few hundred files as input to the MapReduce job. I am using the default block size of 64 MB, my machine has 4 GB of RAM, and I am running Hadoop on a 32-bit system. Now I am facing the Java heap space error. Is there any solution to this problem when hundreds of files are given as input to an MR job, with a 64 MB block size, CombineFileInputFormat, and 4 GB of RAM?
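One thing I have considered (a heap-tuning sketch only, not a confirmed fix) is raising the map-task heap and shrinking the sort buffer, which the log above shows already takes 300 MB (mapreduce.task.io.sort.mb: 300). Note that the trace shows LocalJobRunner, where the whole job runs inside the client JVM, so there the heap would instead be set with something like export HADOOP_CLIENT_OPTS="-Xmx2g":

    import org.apache.hadoop.conf.Configuration;

    public class HeapTuning {
        public static Configuration tunedConf() {
            Configuration conf = new Configuration();
            // On a real cluster these control the map-task container and heap;
            // they have no effect under LocalJobRunner (see note above).
            conf.setInt("mapreduce.map.memory.mb", 2048);      // container size in MB
            conf.set("mapreduce.map.java.opts", "-Xmx1638m");  // ~80% of the container
            // The log shows a 300 MB sort buffer; shrinking it leaves more heap
            // for the record reader. A 32-bit JVM cannot go much past a 2 GB
            // heap in any case.
            conf.setInt("mapreduce.task.io.sort.mb", 100);
            return conf;
        }
    }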

Please suggest how I should approach this problem...


How many files and how many blocks? –


Number of files: 318, no. of blocks: 1 (default block size: 64 MB); Hadoop is running on a 32-bit system –

Answer


As far as the logic goes... the split size will never cause a Java heap space error.

It must be something in your code logic that is aggregating too much data in memory for a given key.
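For illustration only (a hypothetical reducer, not code from the question), this is the kind of pattern meant here: buffering every value of a key before emitting, so a single skewed key can exhaust the heap.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Hypothetical anti-pattern: collecting all values of a key in memory.
    public class BufferingReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            List<String> buffered = new ArrayList<String>();
            for (Text value : values) {
                buffered.add(value.toString()); // grows without bound for a hot key
            }
            context.write(key, new Text(String.valueOf(buffered.size())));
        }
    }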

Could you provide the stack trace for further analysis?


'Split size will never cause a Java heap space error': I disagree. CombineFileInputFormat is frequently the first cause of running out of memory, depending on the number of files fed into the job. –


It has nothing to do with the combine file input format, because an input format only decides how the input is split and how records are read (the RecordReader). Providing the stack trace will confirm this. – KrazyGautam
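To make that concrete: with CombineFileInputFormat the record-reading side is typically wired as below. This is a sketch; the question's CombineBinaryRecordReader does appear in the stack trace, but I am assuming it is hooked up this way.

    import java.io.IOException;

    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;
    import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;

    // Inside CombineBinaryInputFormat: the format only builds splits and then
    // delegates each file in a split to a per-file RecordReader.
    @Override
    public RecordReader<KeyWritable, ValueWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context) throws IOException {
        return new CombineFileRecordReader<KeyWritable, ValueWritable>(
                (CombineFileSplit) split, context, CombineBinaryRecordReader.class);
    }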


'CombineFileInputFormat' batches files together, hence the name. Depending on how many files need to be combined, batching them into splits can take quite a bit of RAM. –
