2012-11-04 30 views

I have the following mapper class. I want to write to HDFS from my map function, so I need access to the Configuration object, which I retrieve in the setup() method. But it comes back as null and I am getting an NPE. Can you let me know what I am doing wrong? The Configuration object is null in the Hadoop mapper.

Here is the stack trace:

[email protected]:/usr/local/hadoop/hadoop-1.0.4$ bin/hadoop jar GWASMapReduce.jar /user/hduser/tet.gpg /user/hduser/output3 
12/11/04 08:50:17 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
12/11/04 08:50:24 INFO mapred.FileInputFormat: Total input paths to process : 1 
12/11/04 08:50:28 INFO mapred.JobClient: Running job: job_201211031924_0008 
12/11/04 08:50:29 INFO mapred.JobClient: map 0% reduce 0% 
12/11/04 08:51:35 INFO mapred.JobClient: Task Id : attempt_201211031924_0008_m_000000_0, Status : FAILED 
java.lang.NullPointerException 
at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:131) 
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123) 
at com.test.GWASMapper.writeCsvFileSmry(GWASMapper.java:208) 
at com.test.GWASMapper.checkForNulls(GWASMapper.java:153) 
at com.test.GWASMapper.map(GWASMapper.java:51) 
at com.test.GWASMapper.map(GWASMapper.java:1) 
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50) 
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436) 
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372) 
at org.apache.hadoop.mapred.Child$4.run(Child.java:255) 
at java.security.AccessController.doPrivileged(Native Method) 
at javax.security.auth.Subject.doAs(Subject.java:415) 
at  org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121) 
at org.apache.hadoop.mapred.Child.main(Child.java:249) 

attempt_201211031924_0008_m_000000_0: ****************************************************************************************************************************************************************************************** 
attempt_201211031924_0008_m_000000_0: null 
attempt_201211031924_0008_m_000000_0: ****************************************************************************************************************************************************************************************** 
12/11/04 08:51:37 INFO mapred.JobClient: Task Id :  attempt_201211031924_0008_m_000001_0, Status : FAILED 

Here is my driver class:

import org.apache.hadoop.conf.Configuration; 
import org.apache.hadoop.conf.Configured; 
import org.apache.hadoop.fs.Path; 
import org.apache.hadoop.io.Text; 
import org.apache.hadoop.mapred.FileInputFormat; 
import org.apache.hadoop.mapred.FileOutputFormat; 
import org.apache.hadoop.mapred.JobClient; 
import org.apache.hadoop.mapred.JobConf; 
import org.apache.hadoop.util.Tool; 
import org.apache.hadoop.util.ToolRunner; 

public class GWASMapReduce extends Configured implements Tool{ 

/** 
* @param args 
*/ 
public static void main(String[] args) throws Exception { 
    Configuration configuration = new Configuration(); 
    ToolRunner.run(configuration, new GWASMapReduce(), args); 
} 

@Override 
public int run(String[] arg0) throws Exception { 
    JobConf conf = new JobConf(); 
    conf.setInputFormat(GWASInputFormat.class); 
    conf.setOutputKeyClass(Text.class); 
    conf.setOutputValueClass(Text.class); 
    conf.setJarByClass(GWASMapReduce.class); 
    conf.setMapperClass(GWASMapper.class); 
    conf.setNumReduceTasks(0); 
    FileInputFormat.addInputPath(conf, new Path(arg0[0])); 
    FileOutputFormat.setOutputPath(conf, new Path(arg0[1])); 
    JobClient.runJob(conf); 
    return 0; 
} 
} 

And the mapper class:

import java.io.IOException; 
import java.util.ArrayList; 
import java.util.Collections; 
import java.util.List; 
import java.util.Map; 
import java.util.TreeMap; 

import org.apache.hadoop.conf.Configuration; 
import org.apache.hadoop.fs.FSDataOutputStream; 
import org.apache.hadoop.fs.FileSystem; 
import org.apache.hadoop.fs.Path; 
import org.apache.hadoop.io.LongWritable; 
import org.apache.hadoop.io.Text; 
import org.apache.hadoop.mapred.FileSplit; 
import org.apache.hadoop.mapred.MapReduceBase; 
import org.apache.hadoop.mapred.Mapper; 
import org.apache.hadoop.mapred.OutputCollector; 
import org.apache.hadoop.mapred.Reporter; 

import com.google.common.base.Strings; 

public class GWASMapper extends MapReduceBase implements Mapper<LongWritable, GWASGenotypeBean, Text, Text> { 

private static Configuration conf; 


@SuppressWarnings("rawtypes") 
public void setup(org.apache.hadoop.mapreduce.Mapper.Context context) throws IOException { 

    conf = context.getConfiguration(); 
    // conf is null here 
} 


@Override 
public void map(LongWritable inputKey, GWASGenotypeBean inputValue, OutputCollector<Text, Text> output, Reporter reporter) throws IOException { 
    // mapper code 
} 


} 
+0

Could you please post the stack trace of the 'NullPointerException'? –

+0

Thanks. What do you have configured for 'fs.default.name'? I don't think your configuration is null there. –

+0

What are the input arguments (file names) being passed in? –

Answers

1

I think you are missing this:

JobClient jobClient = new JobClient(); 
jobClient.setConf(conf); 

JobClient.runJob(conf); 

The conf parameter is not being passed to the job client. Give it a try and see if it helps.

Also, I suggest using the new mapreduce library. Check the WordCount v2.0 example: http://hadoop.apache.org/docs/mapreduce/r0.22.0/mapred_tutorial.html#Example%3A+WordCount+v2.0

Also try this: JobConf job = new JobConf(new Configuration());

I think the Configuration object is not getting initialized here. Besides, there is nothing special in your Configuration object, so you could initialize a Configuration object inside the mapper instead. That is not good practice, but just give it a try.
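For what it's worth, the root cause may be an API mix-up: the mapper in the question implements the old 'mapred' API (MapReduceBase/Mapper), but its setup() takes an org.apache.hadoop.mapreduce.Mapper.Context, which belongs to the new API. The old-API framework never calls that method, so conf stays null. The old-API lifecycle hook is configure(JobConf), inherited from MapReduceBase. A minimal sketch against the Hadoop 1.x API (not compiled here; GWASGenotypeBean is the asker's own class):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class GWASMapper extends MapReduceBase
        implements Mapper<LongWritable, GWASGenotypeBean, Text, Text> {

    private Configuration conf;

    // Old-API hook: the framework calls this once per task with the
    // JobConf (which is itself a Configuration subclass).
    @Override
    public void configure(JobConf job) {
        this.conf = job;
    }

    @Override
    public void map(LongWritable inputKey, GWASGenotypeBean inputValue,
            OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        // conf is now non-null here, e.g. for FileSystem.get(conf)
    }
}
```

This would also explain why creating a new Configuration() directly in the mapper made the NPE go away: the setup(Context) method was simply never being invoked.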

+0

I tried that and it still throws the same NPE – user1707141

+0

Try this: JobConf job = new JobConf(new Configuration()); I think the Configuration object is not initialized here. And there is nothing special in your Configuration object, so you could also initialize the Configuration object in the mapper – javanx

+0

Following your suggestion I created a new Configuration object in the mapper class and got rid of the NPE. But I would like to know why I could not use context.getConfiguration() – user1707141

1

This is just a hint for anyone else facing a similar problem:
make sure you set the configuration values before declaring the job.

For example:

Configuration conf = new Configuration(); 
conf.set("a","2"); 
conf.set("inputpath",args[0]); 
//Must be set before the below line: 
Job myjob = new Job(conf); 

Hope this helps.
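Assuming the new 'mapreduce' API, a value set that way can then be read back inside the mapper, which is what the asker originally wanted. A hypothetical sketch (MyMapper and the key/value types are illustrative, not from the original post; not compiled here):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MyMapper extends Mapper<LongWritable, Text, Text, Text> {

    private String inputPath;

    // In the new API setup(Context) IS called by the framework,
    // so context.getConfiguration() returns the job's Configuration.
    @Override
    protected void setup(Context context)
            throws IOException, InterruptedException {
        inputPath = context.getConfiguration().get("inputpath");
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // use inputPath here
    }
}
```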