
Hadoop: Intermediate merge failed

I'm running into a strange problem. When I run my Hadoop job over a large dataset (> 1 TB of compressed text files), a number of the reduce tasks fail with stack traces like these:

java.io.IOException: Task: attempt_201104061411_0002_r_000044_0 - The reduce copier failed 
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:385) 
    at org.apache.hadoop.mapred.Child$4.run(Child.java:240) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:396) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115) 
    at org.apache.hadoop.mapred.Child.main(Child.java:234) 
Caused by: java.io.IOException: Intermediate merge failed 
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2714) 
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.run(ReduceTask.java:2639) 
Caused by: java.lang.RuntimeException: java.io.EOFException 
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:128) 
    at org.apache.hadoop.mapred.Merger$MergeQueue.lessThan(Merger.java:373) 
    at org.apache.hadoop.util.PriorityQueue.downHeap(PriorityQueue.java:139) 
    at org.apache.hadoop.util.PriorityQueue.adjustTop(PriorityQueue.java:103) 
    at org.apache.hadoop.mapred.Merger$MergeQueue.adjustPriorityQueue(Merger.java:335) 
    at org.apache.hadoop.mapred.Merger$MergeQueue.next(Merger.java:350) 
    at org.apache.hadoop.mapred.Merger.writeFile(Merger.java:156) 
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2698) 
    ... 1 more 
Caused by: java.io.EOFException 
    at java.io.DataInputStream.readInt(DataInputStream.java:375) 
    at com.__.hadoop.pixel.segments.IpCookieCountFilter$IpAndIpCookieCount.readFields(IpCookieCountFilter.java:241) 
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:125) 
    ... 8 more 
java.io.IOException: Task: attempt_201104061411_0002_r_000056_0 - The reduce copier failed 
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:385) 
    at org.apache.hadoop.mapred.Child$4.run(Child.java:240) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:396) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115) 
    at org.apache.hadoop.mapred.Child.main(Child.java:234) 
Caused by: java.io.IOException: Intermediate merge failed 
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2714) 
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.run(ReduceTask.java:2639) 
Caused by: java.lang.RuntimeException: java.io.EOFException 
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:128) 
    at org.apache.hadoop.mapred.Merger$MergeQueue.lessThan(Merger.java:373) 
    at org.apache.hadoop.util.PriorityQueue.upHeap(PriorityQueue.java:123) 
    at org.apache.hadoop.util.PriorityQueue.put(PriorityQueue.java:50) 
    at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:447) 
    at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:381) 
    at org.apache.hadoop.mapred.Merger.merge(Merger.java:107) 
    at org.apache.hadoop.mapred.Merger.merge(Merger.java:93) 
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2689) 
    ... 1 more 
Caused by: java.io.EOFException 
    at java.io.DataInputStream.readFully(DataInputStream.java:180) 
    at org.apache.hadoop.io.Text.readString(Text.java:402) 
    at com.__.hadoop.pixel.segments.IpCookieCountFilter$IpAndIpCookieCount.readFields(IpCookieCountFilter.java:240) 
    at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:122) 
    ... 9 more 

Not all of my reducers fail; several often succeed before I see failures in others. As you can see, the stack traces always seem to originate in IpAndIpCookieCount.readFields() and always during the in-memory merge stage, but not always from the same part of readFields().

The job succeeds when run on a smaller dataset (about 1/30th the size). There is nearly as much output as input to the job, but each output record is shorter. The job is essentially an implementation of a secondary sort.
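For reference, a secondary sort with a composite key like this is usually wired up roughly as in the sketch below in the old mapred API (Hadoop 0.20 / CDH3). The question doesn't show the actual job setup, so the partitioner, grouping comparator, and value type here are placeholders.

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapred.JobConf;

// Hypothetical job wiring for a secondary sort; IpPartitioner and
// IpGroupingComparator are illustrative names, not taken from the question.
JobConf conf = new JobConf();
conf.setMapOutputKeyClass(IpAndIpCookieCount.class);
conf.setMapOutputValueClass(NullWritable.class);                    // real value type isn't shown
conf.setPartitionerClass(IpPartitioner.class);                      // partition on the IP portion only
conf.setOutputValueGroupingComparator(IpGroupingComparator.class);  // group reducer input on the IP only
// The composite key itself (IP ascending, then count descending) is ordered by
// IpAndIpCookieCount.compareTo, invoked through the default WritableComparator.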

We are using the CDH3 distribution of Hadoop.

Here is my custom WritableComparable implementation:

public static class IpAndIpCookieCount implements WritableComparable<IpAndIpCookieCount> {

    private String ip;
    private int ipCookieCount;

    public IpAndIpCookieCount() {
        // empty constructor for hadoop
    }

    public IpAndIpCookieCount(String ip, int ipCookieCount) {
        this.ip = ip;
        this.ipCookieCount = ipCookieCount;
    }

    public String getIp() {
        return ip;
    }

    public int getIpCookieCount() {
        return ipCookieCount;
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        ip = Text.readString(in);
        ipCookieCount = in.readInt();
    }

    @Override
    public void write(DataOutput out) throws IOException {
        Text.writeString(out, ip);
        out.writeInt(ipCookieCount);
    }

    @Override
    public int compareTo(IpAndIpCookieCount other) {
        // Sort by IP ascending; ties on IP are broken by ipCookieCount descending.
        int firstComparison = ip.compareTo(other.getIp());
        if (firstComparison == 0) {
            int otherIpCookieCount = other.getIpCookieCount();
            if (ipCookieCount == otherIpCookieCount) {
                return 0;
            } else {
                return ipCookieCount < otherIpCookieCount ? 1 : -1;
            }
        } else {
            return firstComparison;
        }
    }

    @Override
    public boolean equals(Object o) {
        if (o instanceof IpAndIpCookieCount) {
            IpAndIpCookieCount other = (IpAndIpCookieCount) o;
            return ip.equals(other.getIp()) && ipCookieCount == other.getIpCookieCount();
        } else {
            return false;
        }
    }

    @Override
    public int hashCode() {
        return ip.hashCode() ^ ipCookieCount;
    }

}
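One cheap sanity check for a key class like this is a write()/readFields() round trip, which rules out any asymmetry between the two methods. A minimal sketch, with the test scaffolding and values invented for illustration:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;

// Hypothetical round-trip check: serialize a key with write() and read it back with readFields().
// If the two methods ever disagreed on the wire format, an EOFException would surface here.
public static void roundTripCheck() throws Exception {
    IpAndIpCookieCount original = new IpAndIpCookieCount("10.0.0.1", 42);

    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    original.write(out);
    out.flush();

    IpAndIpCookieCount copy = new IpAndIpCookieCount();
    copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

    if (!original.equals(copy)) {
        throw new AssertionError("write()/readFields() round trip is not symmetric");
    }
}

If the round trip passes, the key format itself is consistent, and the EOFException is more likely coming from a truncated or corrupted intermediate stream than from the class.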

The readFields method is very simple, and I can't see any problem with this class. Additionally, I have seen other people getting essentially the same stack trace:

None of them seem to have actually figured out the issue behind it. The last two seem to suggest that it could be a memory problem (although these stack traces aren't OutOfMemoryExceptions). Like the second-to-last post in that list of links, I have tried setting a higher number of reducers (up to 999), but it still fails. I have not yet tried allocating more memory to the reduce tasks, since that would require us to reconfigure our cluster.
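If it does come down to memory in the shuffle/merge phase, these are the knobs one would normally look at in a Hadoop 0.20-era (CDH3) configuration. The values below are illustrative examples, not a recommendation from this thread:

// Illustrative shuffle/merge-related settings for the old mapred API.
JobConf conf = new JobConf();
conf.set("mapred.child.java.opts", "-Xmx1024m");                   // heap for map/reduce child JVMs
conf.setFloat("mapred.job.shuffle.input.buffer.percent", 0.50f);   // fraction of reducer heap used to hold map outputs
conf.setFloat("mapred.job.shuffle.merge.percent", 0.66f);          // usage threshold that triggers the in-memory merge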

Is this a bug in Hadoop, or am I doing something wrong?

EDIT: My data is partitioned by day. If I run the job 7 times, once for each day, all 7 complete. If I run one job over all 7 days, it fails. The large job over all 7 days will see exactly the same keys as the smaller jobs do (in aggregate), but obviously not in the same order, at the same reducers, etc.


I have run into this problem too, but it doesn't happen every time. On the same dataset, run under the same conditions, my job sometimes fails with an EOFException thrown from a readByte() call inside the WritableComparable.readFields(..) method. I think it might be a network issue causing some kind of delay. – 2012-08-15 09:52:30

Answer


I think this is an artifact of Cloudera back-porting MAPREDUCE-947 into CDH3. That patch results in a _SUCCESS file being created for successful jobs:

Additionally, a _SUCCESS file is created in the output folder for successful jobs. The configuration parameter mapreduce.fileoutputcommitter.marksuccessfuljobs can be set to false to disable the creation of the _SUCCESS file, or to true to enable it.

Looking at your error,

Caused by: java.io.EOFException 
    at java.io.DataInputStream.readFully(DataInputStream.java:180) 

and comparing it with errors I have seen from this problem before,

Exception in thread "main" java.io.EOFException 
    at java.io.DataInputStream.readFully(DataInputStream.java:180) 
    at java.io.DataInputStream.readFully(DataInputStream.java:152) 
    at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1465) 
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1437) 
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424) 
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419) 
    at org.apache.hadoop.mapred.SequenceFileOutputFormat.getReaders(SequenceFileOutputFormat.java:89) 
    at org.apache.nutch.crawl.CrawlDbReader.processStatJob(CrawlDbReader.java:323) 
    at org.apache.nutch.crawl.CrawlDbReader.main(CrawlDbReader.java:511) 

and this one from the Mahout mailing list,

Exception in thread "main" java.io.EOFException 
    at java.io.DataInputStream.readFully(DataInputStream.java:180) 
    at java.io.DataInputStream.readFully(DataInputStream.java:152) 
    at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1457) 
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1435) 
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424) 
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419) 
    at org.apache.mahout.df.mapreduce.partial.Step0Job.parseOutput(Step0Job.java:145) 
    at org.apache.mahout.df.mapreduce.partial.Step0Job.run(Step0Job.java:119) 
    at org.apache.mahout.df.mapreduce.partial.PartialBuilder.parseOutput(PartialBuilder.java:115) 
    at org.apache.mahout.df.mapreduce.Builder.build(Builder.java:338) 
    at org.apache.mahout.df.mapreduce.BuildForest.buildForest(BuildForest.java:195) 

it looks like DataInputStream.readFully is choking on this file.

I would suggest setting mapreduce.fileoutputcommitter.marksuccessfuljobs to false and retrying your job; it should work then.
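For example, one way to set it in the job driver (this snippet is not from the original answer):

// Disable creation of the _SUCCESS marker file for this job (CDH3 / old mapred API).
JobConf conf = new JobConf();
conf.setBoolean("mapreduce.fileoutputcommitter.marksuccessfuljobs", false);

It can also be set cluster-wide in mapred-site.xml, since it is an ordinary configuration property.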


I don't think that's the problem. Why would a _SUCCESS file be read during the in-memory merge? It shouldn't even have been created yet. It does seem to be something Cloudera-specific, though: we created a new Hadoop cluster and no longer see this error. – ajduff574 2011-05-19 18:00:21


Also, this job wasn't running on the output of another job, so there were no _SUCCESS files in the input. – ajduff574 2011-05-19 18:02:44


Did you try setting the parameter? – viksit 2011-05-21 18:27:39