
I have run into a strange problem with a Hadoop Map/Reduce job. The job submits correctly and runs, but produces incorrect/strange results. It looks as if neither the mapper nor the reducer runs at all. The input file is transformed from:

12 
16 
132 
654 
132 
12 

to:

0 12 
4 16 
8 132 
13 654 
18 132 
23 12 

I assumed the first column is the key generated before the mapper, but neither the mapper nor the reducer appears to run. The job worked fine when I was using the old API.
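
To illustrate that assumption: as far as I understand, TextInputFormat feeds the mapper (byte offset, line text) records, so a hypothetical pass-through mapper (EchoMapper below is only an illustration, not part of my job) would produce exactly the output shown above:

// Hypothetical, for illustration only: echoes the records TextInputFormat
// hands to the mapper. The key is the byte offset of each line in the file,
// which would explain the "0 12", "4 16", ... output above.
public static class EchoMapper extends Mapper<LongWritable, Text, LongWritable, Text>
{
    @Override
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException
    {
        context.write(key, value); // emit (byte offset, line) unchanged
    }
}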

The job source is below. I am using Hortonworks as the platform.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.OutputFormat;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class HadoopAnalyzer
{
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable>
    {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException
        {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens())
            {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable>
    {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException
        {
            int sum = 0;
            for (IntWritable val : values)
            {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception
    {
        JobConf conf = new JobConf(HadoopAnalyzer.class);
        conf.setJobName("wordcount");
        conf.set("mapred.job.tracker", "192.168.229.128:50300");
        conf.set("fs.default.name", "hdfs://192.168.229.128:8020");
        conf.set("fs.defaultFS", "hdfs://192.168.229.128:8020");
        conf.set("hbase.master", "192.168.229.128:60000");
        conf.set("hbase.zookeeper.quorum", "192.168.229.128");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        System.out.println("Executing job.");
        Job job = new Job(conf, "job");
        job.setInputFormatClass(InputFormat.class);
        job.setOutputFormatClass(OutputFormat.class);
        job.setJarByClass(HadoopAnalyzer.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        TextInputFormat.addInputPath(job, new Path("/user/usr/in"));
        TextOutputFormat.setOutputPath(job, new Path("/user/usr/out"));
        job.setMapperClass(Mapper.class);
        job.setReducerClass(Reducer.class);
        job.waitForCompletion(true);
        System.out.println("Done.");
    }
}

Maybe I am missing something obvious, but can anyone shed some light on what might be going wrong here?

Is the first data set the input, or the expected output?

It is the input.

Answer

The output is as expected, because you are using the following:

job.setMapperClass(Mapper.class); 
job.setReducerClass(Reducer.class); 

when it should have been:

job.setMapperClass(Map.class); 
job.setReducerClass(Reduce.class); 

You extended the Mapper and Reducer classes as Map and Reduce, but then never used them in your job, so it ran with the base Mapper and Reducer classes instead, which are identity implementations.
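
That explains the output you saw. Simplified from memory of the Hadoop source (so treat this as a sketch rather than a verbatim quote), the new-API base classes just pass their input straight through:

// Abridged sketch of org.apache.hadoop.mapreduce.Mapper and Reducer:
// both default implementations simply forward their input.
public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
{
    protected void map(KEYIN key, VALUEIN value, Context context) throws IOException, InterruptedException
    {
        context.write((KEYOUT) key, (VALUEOUT) value); // identity: key stays the byte offset
    }
}

public class Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
{
    protected void reduce(KEYIN key, Iterable<VALUEIN> values, Context context) throws IOException, InterruptedException
    {
        for (VALUEIN value : values)
        {
            context.write((KEYOUT) key, (VALUEOUT) value); // identity: values pass through
        }
    }
}

With TextInputFormat supplying (byte offset, line) records and TextOutputFormat printing each key/value pair on its own line, you get exactly the "0 12", "4 16", ... output you observed.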

Thanks. I can't believe I did that, though. :)

It happens to great coders sometimes! :)