
I am new to Hadoop and tried to run a word count MapReduce program, but I get the following error: java.lang.RuntimeException: java.lang.ClassNotFoundException: wordcount_classes.WordCount$Map. Here is WordCount.java:

import java.io.IOException; 
import java.util.*; 

import org.apache.hadoop.fs.Path; 
import org.apache.hadoop.conf.*; 
import org.apache.hadoop.io.*; 
import org.apache.hadoop.mapreduce.*; 
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat; 
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat; 
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; 
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat; 

public class WordCount { 

// Mapper: tokenizes each input line and emits a (word, 1) pair per token
public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> { 
    private final static IntWritable one = new IntWritable(1); 
    private Text word = new Text(); 

    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException { 
     String line = value.toString(); 
     StringTokenizer tokenizer = new StringTokenizer(line); 
     while (tokenizer.hasMoreTokens()) { 
      word.set(tokenizer.nextToken()); 
      context.write(word, one); 
     } 
    } 
} 

// Reducer: sums the 1s emitted for each word
public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> { 

    public void reduce(Text key, Iterable<IntWritable> values, Context context) 
    throws IOException, InterruptedException { 
     int sum = 0; 
     for (IntWritable val : values) { 
      sum += val.get(); 
     } 
     context.write(key, new IntWritable(sum)); 
    } 
} 

public static void main(String[] args) throws Exception { 
    Configuration conf = new Configuration(); 

    // set up the job; "wordcount" is just the display name
    Job job = new Job(conf, "wordcount"); 

    job.setOutputKeyClass(Text.class); 
    job.setOutputValueClass(IntWritable.class); 

    job.setMapperClass(Map.class); 
    job.setReducerClass(Reduce.class); 

    job.setInputFormatClass(TextInputFormat.class); 
    job.setOutputFormatClass(TextOutputFormat.class); 

    FileInputFormat.addInputPath(job, new Path(args[0])); 
    FileOutputFormat.setOutputPath(job, new Path(args[1])); 
    job.setJarByClass(WordCount.class);  
    job.waitForCompletion(true); 
    } 
} 

The contents of the wordcount_classes directory are:

-rw-r--r-- 1 sagar supergroup  1855 2014-10-03 13:15 /user/sagar/wordcount_classes/WordCount$Map.class 
-rw-r--r-- 1 sagar supergroup  1627 2014-10-03 13:15 /user/sagar/wordcount_classes/WordCount$Reduce.class 
-rw-r--r-- 1 sagar supergroup  1453 2014-10-03 13:14 /user/sagar/wordcount_classes/WordCount.class 
-rw-r--r-- 1 sagar supergroup  3109 2014-10-03 13:15 /user/sagar/wordcount_classes/wordcount.jar 


and I ran the program with the following command:

hadoop jar wordcount_classes/wordcount.jar wordcount_classes/WordCount input r1 

Answer

Please check the following:

  1. The JAR you run is the one you actually compiled
  2. You run the jar from the folder that contains it, or
  3. Run it with the following command (note that the main class argument is the bare class name WordCount, not the path wordcount_classes/WordCount); a fuller build-and-run sketch follows below:

    hadoop jar <path_to_jar>/wordcount.jar WordCount <hdfs_path_to_input>/input <hdfpath>/r1 
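
For reference, a minimal end-to-end sketch of compiling, packaging, and running such a job might look like the following; the use of `hadoop classpath` and the exact file names are assumptions about a standard setup, not details from the question:

    # compile against the Hadoop libraries; classpath assumed available via 'hadoop classpath'
    mkdir -p wordcount_classes
    javac -classpath "$(hadoop classpath)" -d wordcount_classes WordCount.java

    # package the compiled classes at the root of the jar
    jar -cvf wordcount.jar -C wordcount_classes/ .

    # run with the bare main class name; 'input' must exist in HDFS, 'r1' must not
    hadoop jar wordcount.jar WordCount input r1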
    
Should these input and output directories be located in HDFS or somewhere else? – Sagar 2014-10-03 11:06:44

The input should be the source file located in HDFS; the output is a directory in HDFS, but you should not create it yourself, since the output directory is created by the job itself. Please let me know if this fixes your problem – 2014-10-03 11:08:30
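
For example, a session that stages the input and then inspects the job's output might look like this; the file names (input.txt, r1) are illustrative only:

    # copy a local file into HDFS as the job input
    hadoop fs -put input.txt input

    # after the job completes, list the output directory it created and read the result
    hadoop fs -ls r1
    hadoop fs -cat r1/part-r-00000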

How do I find the filesystem path of a directory created in HDFS? – Sagar 2014-10-03 11:14:35
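
Relative HDFS paths like r1 resolve against the user's HDFS home directory, typically /user/<username>, so in this case the output would normally land in /user/sagar/r1. Assuming that layout, it can be listed with:

    hadoop fs -ls /user/sagar/r1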