
Invoking a hadoop job from java code without a jar

I am using the code below to run a word count hadoop job. WordCountDriver runs when I launch it from inside eclipse using the hadoop eclipse plugin, and it also runs from the command line when I package the mapper and reducer classes into a jar and put that jar on the classpath.

However, if I try to run it from the command line without packaging the mapper and reducer classes into a jar, it fails, even though I add both class files to the classpath. I would like to know whether hadoop has some restriction that keeps it from accepting the mapper & reducer classes as plain class files. Is creating a jar always mandatory?

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {

    public static final String HADOOP_ROOT_DIR = "hdfs://universe:54310/app/hadoop/tmp";

    static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        private Text word = new Text();
        private final IntWritable one = new IntWritable(1);

        @Override
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer itr = new StringTokenizer(line.toLowerCase());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get(); // process value
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public int run(String[] args) throws Exception {

        Configuration conf = getConf();
        conf.set("mapred.job.tracker", "universe:54311");

        Job job = new Job(conf, "Word Count");

        // specify output types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // specify input and output dirs
        FileInputFormat.addInputPath(job, new Path(HADOOP_ROOT_DIR + "/input"));
        FileOutputFormat.setOutputPath(job, new Path(HADOOP_ROOT_DIR + "/output"));

        // specify a mapper
        job.setMapperClass(WordCountDriver.WordCountMapper.class);

        // specify a reducer
        job.setReducerClass(WordCountDriver.WordCountReducer.class);
        job.setCombinerClass(WordCountDriver.WordCountReducer.class);

        job.setJarByClass(WordCountDriver.WordCountMapper.class);

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new WordCountDriver(), args);
        System.exit(res);
    }
}

Answer


It's not entirely clear which classpath you are referring to, but ultimately, if you are running on a remote Hadoop cluster, you need to supply all of the classes in a JAR file that is shipped to Hadoop when you execute hadoop jar. Your local program's classpath is irrelevant.

It probably works locally because you are actually running a Hadoop instance in the local process there. So, in that case, it just happens to be able to find the classes on your local program's classpath.
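To make that concrete for the "invoke from java code without hadoop jar" case in the title, here is a minimal sketch, not taken from the answer above: build the jar anyway and hand its path to the job via the mapred.jar property, which is what JobConf.setJar() and the hadoop jar launcher set under the hood. The class name SubmitWithoutHadoopJar and the jar path are hypothetical, and it assumes the class sits in the same package as WordCountDriver so the package-private nested mapper and reducer are visible.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitWithoutHadoopJar {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "universe:54311");
        // "mapred.jar" is the property that the hadoop jar launcher (via
        // JobConf.setJar) normally fills in; setting it by hand tells the
        // framework which jar to ship to the cluster. The path is hypothetical.
        conf.set("mapred.jar", "/home/user/wordcount.jar");

        Job job = new Job(conf, "Word Count");
        job.setMapperClass(WordCountDriver.WordCountMapper.class);
        job.setReducerClass(WordCountDriver.WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(WordCountDriver.HADOOP_ROOT_DIR + "/input"));
        FileOutputFormat.setOutputPath(job, new Path(WordCountDriver.HADOOP_ROOT_DIR + "/output"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The classes still end up in a jar, so this does not avoid packaging; it only avoids going through the hadoop jar command when submitting from plain Java.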


My driver class is local and Hadoop is set up as a single-node cluster: got the classpath working using the -libjars option of GenericOptionsParser – cosmos 2012-04-02 17:28:39
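For context on that comment: -libjars is one of the generic options recognised by GenericOptionsParser, and ToolRunner.run (used in WordCountDriver.main) already routes args through it, so the listed jars are made available to the job on the cluster. A minimal, self-contained sketch of the parser in isolation (the class name and jar name below are made up) might look like:

import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

public class LibJarsParsingSketch {

    public static void main(String[] args) throws Exception {
        // e.g. args = { "-libjars", "wordcount-classes.jar", "/input", "/output" }
        Configuration conf = new Configuration();
        GenericOptionsParser parser = new GenericOptionsParser(conf, args);
        // Generic options such as -libjars, -files and -D key=value have now
        // been applied to conf; what remains are the job-specific arguments.
        String[] jobArgs = parser.getRemainingArgs();
        System.out.println("remaining args: " + Arrays.toString(jobArgs));
    }
}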