I'm trying to run a Hadoop MapReduce job but I'm getting an error, and I don't know why. I ran into this during a Hadoop exercise:
[training@localhost BasicWordCount]$ hadoop jar BWC11.jar WordCountDriver "/home/training/training_material/data/shakespeare/comedies" "/home/training/training_material/data/shakespeare/AWL"
Warning: $HADOOP_HOME is deprecated.
Exception in thread "main" java.lang.NoClassDefFoundError: WordCountDriver (wrong name: com/felix/hadoop/training/WordCountDriver)
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:791)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:410)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
[training@localhost BasicWordCount]$
Can someone please help me figure this out?
Driver code:
package com.felix.hadoop.training;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
public class WordCountDriver extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        ToolRunner.run(new WordCountDriver(), args);
    }

    @Override
    public int run(String[] args) throws Exception {
        Job job = new Job(getConf(), "Basic Word Count Job");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setNumReduceTasks(1);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
        return 0;
    }
}
Mapper code:
package com.felix.hadoop.training;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
/**
*
* @author training
* Class : WordCountMapper
*
*/
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    /**
     * Optimization note: instead of creating the output objects inside the
     * map method on every call, they could be created once as instance
     * fields and reused.
     */
    @Override
    public void map(LongWritable inputKey, Text inputVal, Context context)
            throws IOException, InterruptedException {
        String line = inputVal.toString();
        String[] splits = line.trim().split("\\W+");
        for (String outputKey : splits) {
            context.write(new Text(outputKey), new IntWritable(1));
        }
    }
}
Reducer code:
package com.felix.hadoop.training;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> listOfValues, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : listOfValues) {
            sum = sum + val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
I don't know why I'm getting this error. I have tried adding to the classpath, copying the class files to the directory where the .jar file is, and so on, but to no avail.
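For reference, the "wrong name: com/felix/hadoop/training/WordCountDriver" hint in the trace suggests the class was found but launched without its package prefix. A sketch of the invocation that might be expected instead (assuming the jar contains the compiled classes under their `com/felix/hadoop/training` package directory):

```shell
# Pass the fully qualified class name, matching the package declaration
# in WordCountDriver.java, rather than the bare class name:
hadoop jar BWC11.jar com.felix.hadoop.training.WordCountDriver \
    "/home/training/training_material/data/shakespeare/comedies" \
    "/home/training/training_material/data/shakespeare/AWL"
```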
Possible duplicate of [How to solve java.lang.NoClassDefFoundError?](http://stackoverflow.com/questions/17973970/how-to-solve-java-lang-noclassdeffounderror) – GabrielOshiro