2013-01-15

Hadoop - MultipleOutputs.write - OutOfMemory - Java heap space

I am writing a Hadoop job that processes many files and creates several output files from each input file. I am using MultipleOutputs to write them. It works fine for a smaller number of files, but for a large number of files I get the error below; the exception is thrown by MultipleOutputs.write(key, value, outputPath). I have tried increasing the ulimit and -Xmx, but to no avail.
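For context, a minimal sketch of the reduce-side pattern the question describes, using the new-API MultipleOutputs (the class name MPReducer matches the stack trace below; how outputPath is derived per record is an assumption):

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class MPReducer extends Reducer<Text, Text, Text, Text> {

    private MultipleOutputs<Text, Text> mos;

    @Override
    protected void setup(Context context) {
        // One MultipleOutputs instance per reducer task
        mos = new MultipleOutputs<Text, Text>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text value : values) {
            // The base output path is derived per record, so a large input set
            // means many distinct output files (and open HDFS output streams).
            String outputPath = key.toString() + "/part";
            mos.write(key, value, outputPath); // the call that throws OutOfMemoryError
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        mos.close();
    }
}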

2013-01-15 13:44:05,154 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space 
    at org.apache.hadoop.hdfs.DFSOutputStream$Packet.<init>(DFSOutputStream.java:201) 
    at org.apache.hadoop.hdfs.DFSOutputStream.writeChunk(DFSOutputStream.java:1423) 
    at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:161) 
    at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:136) 
    at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:125) 
    at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:116) 
    at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:90) 
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54) 
    at java.io.DataOutputStream.write(DataOutputStream.java:90) 
    at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:78) 
    at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:99) 
    at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.write(MultipleOutputs.java:386) 
    at com.demoapp.collector.MPReducer.reduce(MPReducer.java:298) 
    at com.demoapp.collector.MPReducer.reduce(MPReducer.java:28) 
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:164) 
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:595) 
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:433) 
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:396) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332) 
    at org.apache.hadoop.mapred.Child.main(Child.java:262) 

Any ideas?

Answer


If it fails only for a large number of files, it is probably because you have hit the maximum number of files a datanode can serve concurrently. This limit is controlled by a property called dfs.datanode.max.xcievers in hdfs-site.xml.

As suggested here, you should bump its value to something that lets your job run correctly; they recommend 4096:

<property> 
    <name>dfs.datanode.max.xcievers</name> 
    <value>4096</value> 
</property> 

I tried setting the property with Configuration conf = job.getConfiguration(); conf.set("dfs.datanode.max.xcievers", "4096"); and ran the job, but it made no difference. – Harpreet
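For reference, a sketch of the attempt described in that comment (the driver class and job name are assumptions); as the reply below points out, this only changes the job's own configuration object, not the datanode's:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CollectorDriver {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "collector");
        // Setting the value here only affects the client-side job configuration;
        // the datanodes never read it, so their xcievers limit stays unchanged.
        job.getConfiguration().set("dfs.datanode.max.xcievers", "4096");
        // ... mapper/reducer/input/output setup and job submission would follow
    }
}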


@Harpreet This is a datanode property; you need to put it in hdfs-site.xml and restart the cluster for it to take effect. –