
hadoop + Writable interface + readFields throws an exception in the reducer

I have a simple map-reduce program in which my map and reduce primitives look like this:

map(K, V) = (Text, OutputAggregator)
reduce(Text, OutputAggregator) = (Text, Text)

The important point is that from my map function I emit an object of type OutputAggregator, which is my own class implementing the Writable interface. However, my reduce fails with the exception below; more specifically, the readFields() function throws it. Any clue why? I am using Hadoop 0.18.3.
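For reference, here is a minimal sketch of what those signatures correspond to in the old (0.18-era) org.apache.hadoop.mapred API. The class layout and the LongWritable/Text input types are assumptions inferred from the stack trace below, not the asker's actual code.

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Illustrative skeleton only; OutputAggregator is the asker's custom Writable.
public class xxxParallelizer {

    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, OutputAggregator> {
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, OutputAggregator> output,
                        Reporter reporter) throws IOException {
            // parse the input line and emit (Text, OutputAggregator) pairs
        }
    }

    public static class Reduce extends MapReduceBase
            implements Reducer<Text, OutputAggregator, Text, Text> {
        public void reduce(Text key, Iterator<OutputAggregator> values,
                           OutputCollector<Text, Text> output,
                           Reporter reporter) throws IOException {
            // aggregate the values and emit a (Text, Text) pair
        }
    }
}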

10/09/19 04:04:59 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId= 
10/09/19 04:04:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1 
10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1 
10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1 
10/09/19 04:04:59 INFO mapred.FileInputFormat: Total input paths to process : 1 
10/09/19 04:04:59 INFO mapred.JobClient: Running job: job_local_0001 
10/09/19 04:04:59 INFO mapred.MapTask: numReduceTasks: 1 
10/09/19 04:04:59 INFO mapred.MapTask: io.sort.mb = 100 
10/09/19 04:04:59 INFO mapred.MapTask: data buffer = 79691776/99614720 
10/09/19 04:04:59 INFO mapred.MapTask: record buffer = 262144/327680 
Length = 10 
10 
10/09/19 04:04:59 INFO mapred.MapTask: Starting flush of map output 
10/09/19 04:04:59 INFO mapred.MapTask: bufstart = 0; bufend = 231; bufvoid = 99614720 
10/09/19 04:04:59 INFO mapred.MapTask: kvstart = 0; kvend = 10; length = 327680 
gl_books 
10/09/19 04:04:59 WARN mapred.LocalJobRunner: job_local_0001 
java.lang.NullPointerException 
at org.myorg.OutputAggregator.readFields(OutputAggregator.java:46) 
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67) 
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40) 
at org.apache.hadoop.mapred.Task$ValuesIterator.readNextValue(Task.java:751) 
at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:691) 
at org.apache.hadoop.mapred.Task$CombineValuesIterator.next(Task.java:770) 
at org.myorg.xxxParallelizer$Reduce.reduce(xxxParallelizer.java:117) 
at org.myorg.xxxParallelizer$Reduce.reduce(xxxParallelizer.java:1) 
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.combineAndSpill(MapTask.java:904) 
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:785) 
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:698) 
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:228) 
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:157) 
java.io.IOException: Job failed! 
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1113) 
at org.myorg.xxxParallelizer.main(xxxParallelizer.java:145) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) 
at java.lang.reflect.Method.invoke(Unknown Source) 
at org.apache.hadoop.util.RunJar.main(RunJar.java:155) 
at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54) 
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) 
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79) 
at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68) 

Post the code of OutputAggregator.readFields(). What is on line 46? – bajafresh4life 2010-09-20 01:18:52

Answers


When posting a question about custom code, post the relevant code snippet: the contents of line 46 and the few lines before and after it would really help here... :)

Nevertheless, this may help:

A pitfall when writing your own Writable class is the fact that Hadoop reuses the actual instance of the class over and over again. Between calls to readFields you do not get a shiny new instance.

So at the start of the readFields method you must assume that the object you are in is still filled with "garbage" from the previous record, and it must be cleared before you continue.

My suggestion is to implement a clear() method that completely wipes the current instance and resets it to the state it was in right after construction finished. You then call that method as the very first thing inside readFields, for both the key and the value. A sketch of this pattern is shown below.
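Here is a minimal sketch of that pattern. The fields (count, labels) are made up for illustration; this is not the asker's actual OutputAggregator, only an example of calling clear() first in readFields():

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.Writable;

public class OutputAggregator implements Writable {

    private int count;
    private List<String> labels = new ArrayList<String>();

    // Reset this (reused) instance to its freshly constructed state.
    public void clear() {
        count = 0;
        labels.clear();
    }

    public void readFields(DataInput in) throws IOException {
        clear();                     // wipe whatever the previous record left behind
        count = in.readInt();
        int n = in.readInt();
        for (int i = 0; i < n; i++) {
            labels.add(in.readUTF());
        }
    }

    public void write(DataOutput out) throws IOException {
        out.writeInt(count);
        out.writeInt(labels.size());
        for (String label : labels) {
            out.writeUTF(label);
        }
    }
}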

HTH


In addition to Niels Basjes' answer: simply initialize the member variables inside the empty constructor (which you must provide anyway, because otherwise Hadoop cannot instantiate your object), for example:

public OutputAggregator() {
    this.member = new IntWritable(); // construct every Writable field up front
    ...
}

assuming this.member is of type IntWritable.
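A hypothetical illustration of why the missing initialization produces the NullPointerException from the stack trace (the asker's actual line 46 is unknown): Hadoop instantiates the Writable through its no-arg constructor and then calls readFields(), so any nested Writable field the constructor never created is still null at that point.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Writable;

public class OutputAggregator implements Writable {

    private IntWritable member;            // stays null unless the constructor assigns it

    public void readFields(DataInput in) throws IOException {
        this.member.readFields(in);        // NullPointerException here while 'member' is null
    }

    public void write(DataOutput out) throws IOException {
        this.member.write(out);
    }
}

Initializing the member in the empty constructor, as shown above, removes exactly this failure mode.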