I wrote a simple program that reads a text file and writes the same text back out using a Hadoop MapReduce job. I am not using a reducer, only a mapper, to display the input text file, and I am getting a NullPointerException in the Hadoop mapper program.
I declared the mapper as
Mapper<LongWritable, Text, NullWritable, Text>
because I only want to output the values; I do not want to see the keys in the output.
I wrote the code in the mapper, but it throws a NullPointerException:
package com.demo.mr;

import java.io.IOException;
import org.apache.commons.io.output.NullWriter;
import org.apache.commons.lang.ObjectUtils.Null;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class DemoMR {

    public static class MapperDemo extends
            Mapper<LongWritable, Text, NullWritable, Text> {

        // public NullWritable result = new NullWritable();

        @Override
        protected void map(LongWritable key, Text value,
                org.apache.hadoop.mapreduce.Mapper.Context context)
                throws IOException, InterruptedException {
            context.write(Null, value);
        }
    }

    public static void main(String args[]) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "Demo MR");
        job.setJarByClass(DemoMR.class);
        job.setMapperClass(MapperDemo.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path("/home/node1/WordCountInput.txt"));
        FileOutputFormat.setOutputPath(job, new Path("/home/node1/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
When I use NullWritable in the key position, it shows the error "NullWritable cannot be resolved to a variable". – 2014-09-05 09:52:27
You can try NullWritable.get(). – 2014-09-05 10:03:16
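Following that suggestion, a minimal sketch of the corrected mapper and the matching job setup might look like the code below. NullWritable.get() returns the singleton instance to use as the key, and the job's output key class is changed to NullWritable so it matches the mapper's output key type; class names and settings are taken from the question, the rest of main() stays the same.

    public static class MapperDemo extends
            Mapper<LongWritable, Text, NullWritable, Text> {

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // NullWritable has no public constructor; use the singleton from get()
            context.write(NullWritable.get(), value);
        }
    }

    // In main(), the output key class should match the mapper's key type:
    job.setOutputKeyClass(NullWritable.class);
    job.setOutputValueClass(Text.class);

The unused imports of org.apache.commons.io.output.NullWriter and org.apache.commons.lang.ObjectUtils.Null can also be removed, since only org.apache.hadoop.io.NullWritable is needed here.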