
I am getting the following error when running a Map-Reduce program.

The program sorts the output using TotalOrderPartitioner.

I have a 2-node cluster.
When I run the program with -D mapred.reduce.tasks=2 it works fine,
but it fails with the error below when run with the -D mapred.reduce.tasks=3 option.


java.lang.RuntimeException: Error in configuring object 
     at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93) 
     at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64) 
     at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117) 
     at org.apache.hadoop.mapred.MapTask$OldOutputCollector.<init>(MapTask.java:448) 
     at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358) 
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307) 
     at org.apache.hadoop.mapred.Child.main(Child.java:170) 
Caused by: java.lang.reflect.InvocationTargetException 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
     at java.lang.reflect.Method.invoke(Method.java:597) 
     at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88) 
     ... 6 more 
Caused by: java.lang.IllegalArgumentException: Can't read partitions file 
     at org.apache.hadoop.mapred.lib.TotalOrderPartitioner.configure(TotalOrderPartitioner.java:91) 
     ... 11 more 
Caused by: java.io.IOException: Split points are out of order 
     at org.apache.hadoop.mapred.lib.TotalOrderPartitioner.configure(TotalOrderPartitioner.java:78) 
     ... 11 more 

Please let me know what's wrong here.

Thanks 
R 

Answers


It sounds like you don't have enough keys in your partition file. The docs state that TotalOrderPartitioner requires at least N-1 keys in your partition SequenceFile, where N is the number of reducers.
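
For reference, here is a minimal driver sketch (untested, using the old mapred API that appears in the stack trace; the class name, input/output paths, Text key/value types, partition file location, and sampler settings are all placeholder assumptions) showing how such a partition file is typically generated with InputSampler:

    // Sketch only: sample the input to build a partition file with
    // N-1 split keys, then run a total-order sort with N reducers.
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.SequenceFileInputFormat;
    import org.apache.hadoop.mapred.lib.InputSampler;
    import org.apache.hadoop.mapred.lib.TotalOrderPartitioner;

    public class TotalOrderSortDriver {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(TotalOrderSortDriver.class);
        conf.setJobName("total-order-sort");

        // Assumes SequenceFile input with Text keys and values.
        conf.setInputFormat(SequenceFileInputFormat.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        // With N reducers, the partition file must hold exactly N-1
        // distinct, sorted split keys.
        conf.setNumReduceTasks(3);
        conf.setPartitionerClass(TotalOrderPartitioner.class);
        TotalOrderPartitioner.setPartitionFile(
            conf, new Path("/tmp/_partitions")); // placeholder path

        // Sample the input to pick the split keys. More samples make
        // duplicate split points (the error above) less likely.
        InputSampler.Sampler<Text, Text> sampler =
            new InputSampler.RandomSampler<Text, Text>(0.1, 10000, 10);
        InputSampler.writePartitionFile(conf, sampler);

        JobClient.runJob(conf);
      }
    }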


If you are going to downvote, at least give a reason. This is a perfectly valid answer to the original question. – cftarnas


The maximum number of reducers that can be specified is equal to the number of nodes in the cluster. Since the number of nodes here is 2, you cannot set the number of reducers to more than 2.


Kind of. If I try running with 0 reducers it works. Why would the number of reducers depend on the number of nodes? –


I also ran into this problem. By checking the source code I found that, because of the sampling, increasing the number of reducers produced identical elements at the split points, which is what throws this error. So it depends on your data. Run hadoop fs -text _partition to view the file with the generated partitions; if your job failed, it will contain duplicate elements.
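
Along the same lines, here is a rough checker sketch (untested; the class name is made up and the partition file path is taken from the command line) that prints the keys stored in a partition SequenceFile and flags duplicate or out-of-order split points:

    // Sketch only: dump the split keys from a partition SequenceFile
    // and report any pair that is duplicated or out of order.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.util.ReflectionUtils;

    public class CheckPartitionFile {
      @SuppressWarnings({"unchecked", "rawtypes"})
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path(args[0]); // e.g. the _partition file
        SequenceFile.Reader reader =
            new SequenceFile.Reader(FileSystem.get(conf), path, conf);
        WritableComparable prev = null;
        while (true) {
          // Create a fresh key instance each iteration so the
          // previous key is kept intact for comparison.
          WritableComparable key = (WritableComparable)
              ReflectionUtils.newInstance(reader.getKeyClass(), conf);
          if (!reader.next(key)) {
            break;
          }
          System.out.println(key);
          if (prev != null && prev.compareTo(key) >= 0) {
            System.out.println("  ^^ duplicate or out-of-order split point");
          }
          prev = key;
        }
        reader.close();
      }
    }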