
So far I haven't found a solution to my particular problem, or at least not one that works, and it's driving me crazy. This particular combination doesn't seem to turn up much on Google. As far as I can tell, the error occurs when the job hands off to the mapper. The input to this job is Avro output (written with an Avro schema), compressed with deflate, although I've also tried it uncompressed. The error: "Found interface org.apache.hadoop.mapreduce.TaskAttemptContext".

Avro: 1.7.7, Hadoop: 2.4.1

I'm getting this error and I can't figure out why. Below are my job, mapper, and reducer. The error happens as the mapper starts up.

Sample uncompressed Avro input file (this is what StockReport.SCHEMA defines):

{"day": 3, "month": 2, "year": 1986, "stocks": [{"symbol": "AAME", "timestamp": 507833213000, "dividend": 10.59}]} 

The job:

@Override 
public int run(String[] strings) throws Exception { 
    Job job = Job.getInstance(); 
    job.setJobName("GenerateGraphsJob"); 
    job.setJarByClass(GenerateGraphsJob.class); 

    configureJob(job); 

    int resultCode = job.waitForCompletion(true) ? 0 : 1; 

    return resultCode; 
} 

private void configureJob(Job job) throws IOException {
    try {
        Configuration config = getConf();
        Path inputPath = ConfigHelper.getChartInputPath(config);
        Path outputPath = ConfigHelper.getChartOutputPath(config);

        job.setInputFormatClass(AvroKeyInputFormat.class);
        AvroKeyInputFormat.addInputPath(job, inputPath);
        AvroJob.setInputKeySchema(job, StockReport.SCHEMA$);

        job.setMapperClass(StockAverageMapper.class);
        job.setCombinerClass(StockAverageCombiner.class);
        job.setReducerClass(StockAverageReducer.class);

        FileOutputFormat.setOutputPath(job, outputPath);

    } catch (IOException | ClassCastException e) {
        LOG.error("A job error has occurred.", e);
    }
}

The mapper:

public class StockAverageMapper extends
        Mapper<AvroKey<StockReport>, NullWritable, StockYearSymbolKey, StockReport> {
    private static Logger LOG = LoggerFactory.getLogger(StockAverageMapper.class);

    private final StockReport stockReport = new StockReport();
    private final StockYearSymbolKey stockKey = new StockYearSymbolKey();

    @Override
    protected void map(AvroKey<StockReport> inKey, NullWritable ignore, Context context)
            throws IOException, InterruptedException {
        try {
            StockReport inKeyDatum = inKey.datum();
            for (Stock stock : inKeyDatum.getStocks()) {
                updateKey(inKeyDatum, stock);
                updateValue(inKeyDatum, stock);
                context.write(stockKey, stockReport);
            }
        } catch (Exception ex) {
            LOG.debug(ex.toString());
        }
    }

Schema for the map output key:

{ 
    "namespace": "avro.model", 
    "type": "record", 
    "name": "StockYearSymbolKey", 
    "fields": [ 
    { 
     "name": "year", 
     "type": "int" 
    }, 
    { 
     "name": "symbol", 
     "type": "string" 
    } 
    ] 
} 

Stack trace:

java.lang.Exception: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected 
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) 
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) 
Caused by: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected 
    at org.apache.avro.mapreduce.AvroKeyInputFormat.createRecordReader(AvroKeyInputFormat.java:47) 
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:492) 
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:735) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) 
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 

Edit: Not that it's the problem, but the goal of this job is to reduce the data into output I can build JFreeCharts from. It isn't getting past the mapper, so that shouldn't be related.

Answers


The problem is that org.apache.hadoop.mapreduce.TaskAttemptContext was a class in Hadoop 1 but became an interface in Hadoop 2.

This is one of the reasons why libraries that depend on the Hadoop libraries need separately compiled jar files for Hadoop 1 and Hadoop 2. Based on your stack trace, it looks like you somehow ended up with a Hadoop1-compiled Avro jar, even though you are running Hadoop 2.4.1.
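
To see why this only blows up at runtime, it helps to remember that this is a binary-compatibility problem, not a source-level one. Below is a minimal sketch using a made-up TaskContext type (not Hadoop's real class), under the assumption that Caller is compiled against the class version and then run against the interface version: the compiled call site expects a class, and the JVM refuses to link it.

// Hypothetical stand-in types, for illustration only.
//
// Step 1 ("Hadoop 1" world): compile Caller against this definition:
//     public class TaskContext {                  // a CLASS in the old API
//         public String name() { return "task"; }
//     }
//
// Step 2 ("Hadoop 2" world): swap in this definition WITHOUT recompiling Caller:
//     public interface TaskContext {              // now an INTERFACE
//         String name();
//     }
public class Caller {

    // Compiled in step 1, the ctx.name() call below is emitted as an
    // `invokevirtual` instruction that assumes TaskContext is a class.
    static String describe(TaskContext ctx) {
        return ctx.name();
    }

    public static void main(String[] args) {
        // Run against the step-2 classpath, this throws
        //   java.lang.IncompatibleClassChangeError: Found interface TaskContext, but class was expected
        // rather than the NullPointerException you might expect, because the call
        // inside describe() cannot even be linked -- the same way
        // AvroKeyInputFormat.createRecordReader fails the moment it touches its
        // TaskAttemptContext argument.
        System.out.println(describe(null));
    }
}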

The download mirrors for Avro conveniently provide avro-mapred-1.7.7-hadoop1.jar and avro-mapred-1.7.7-hadoop2.jar as separate downloads.


I'll give it a try. These compiled Avro classes work fine with my other jobs; it's only this one job, which uses a shared library. My pom has avro-mapred, avro-tools, and avro at 1.7.7. I compile the Avro schemas manually with a jar named avro-tools-1.7.7.jar. – Rig 2015-04-05 02:20:24


You nailed it. Thanks. – Rig 2015-04-05 14:09:43


The problem is that Avro 1.7.7 supports two versions of Hadoop and therefore depends on both of them. By default, the Avro 1.7.7 jars depend on the old Hadoop version. To build with Avro 1.7.7 for Hadoop 2, just add an extra classifier line to the Maven dependency:

<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro-mapred</artifactId>
    <version>1.7.7</version>
    <classifier>hadoop2</classifier>
</dependency>

This tells Maven to look for avro-mapred-1.7.7-hadoop2.jar instead of avro-mapred-1.7.7.jar.
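
If you're not sure which artifact actually ended up on the classpath, running mvn dependency:tree on the module should show the classifier on the avro-mapred entry (something like org.apache.avro:avro-mapred:jar:hadoop2:1.7.7:compile once the hadoop2 variant is in effect).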

The same applies to Avro 1.7.4 and later.
