2014-10-17

I am trying to run my MapReduce program. The output after running it is shown below (I am only showing the last part of the output). The program fails with Exception in thread "main" java.io.IOException: Job failed!

File System Counters 
    FILE: Number of bytes read=3052 
    FILE: Number of bytes written=224295 
    FILE: Number of read operations=0 
    FILE: Number of large read operations=0 
    FILE: Number of write operations=0 
    HDFS: Number of bytes read=0 
    HDFS: Number of bytes written=0 
    HDFS: Number of read operations=5 
    HDFS: Number of large read operations=0 
    HDFS: Number of write operations=1 
Map-Reduce Framework 
    Map input records=4 
    Map output records=4 
    Map output bytes=120 
    Map output materialized bytes=0 
    Input split bytes=97 
    Combine input records=0 
    Combine output records=0 
    Spilled Records=0 
    Failed Shuffles=0 
    Merged Map outputs=0 
    GC time elapsed (ms)=40 
    CPU time spent (ms)=0 
    Physical memory (bytes) snapshot=0 
    Virtual memory (bytes) snapshot=0 
    Total committed heap usage (bytes)=117927936 
File Input Format Counters 
    Bytes Read=272 
Exception in thread "main" java.io.IOException: Job failed! 
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836) 
at mapreduceprogram.main(mapreduceprog.java:68) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:483) 
at org.apache.hadoop.util.RunJar.main(RunJar.java:212) 

I don't know where the error is occurring. Any help with this?

The contents of my main method:

public static void main(String[] args) throws Exception { 
    JobConf conf = new JobConf(mapreduceprog.class); 
    conf.setJobName("mapreduceprog"); 

    conf.setOutputKeyClass(Text.class); 
    conf.setOutputValueClass(IntWritable.class); 

    conf.setMapOutputKeyClass(Text.class); 
    conf.setMapOutputValueClass(Text.class); 

    conf.setMapperClass(Map.class); 
    conf.setCombinerClass(Reduce.class); 
    conf.setReducerClass(Reduce.class); 

    conf.setInputFormat(TextInputFormat.class); 
    conf.setOutputFormat(TextOutputFormat.class); 

    FileInputFormat.setInputPaths(conf, new Path(args[0])); 
    FileOutputFormat.setOutputPath(conf, new Path(args[1])); 

    JobClient.runJob(conf); 
} 

My line 68 is:

JobClient.runJob(conf); 
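[Editor's note] One thing worth checking, given the configuration above (a guess, since the Mapper and Reducer bodies are not shown): a combiner must consume and produce exactly the declared map output types. Here the map output value class is Text while the job output value class is IntWritable, so if Reduce emits IntWritable values, reusing it as a combiner produces values that do not match the declared map output and the job fails during the spill. A minimal sketch of the same old-API setup with the combiner dropped (Map, Reduce, and the (Text, Text) / (Text, IntWritable) signatures are assumptions):

```java
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class mapreduceprog {
    public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(mapreduceprog.class);
        conf.setJobName("mapreduceprog");

        conf.setMapOutputKeyClass(Text.class);       // what Map is assumed to emit
        conf.setMapOutputValueClass(Text.class);
        conf.setOutputKeyClass(Text.class);          // what Reduce is assumed to emit
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);
        // No setCombinerClass(Reduce.class): if Reduce emits IntWritable
        // values, using it as a combiner conflicts with the declared
        // (Text, Text) map output and can fail the job at spill time.

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
```

This is a job-configuration sketch only; it needs the Hadoop jars and the original Map/Reduce classes on the classpath to compile and run.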

You are using the older MapReduce API – Rakshith 2014-10-17 10:52:25

Answer


You are using the old API. I suggest you use the new API. The code would look something like this:

import java.io.IOException; 

import org.apache.hadoop.fs.Path; 
import org.apache.hadoop.io.NullWritable; 
import org.apache.hadoop.io.Text; 
import org.apache.hadoop.mapreduce.Job; 
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat; 
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat; 
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; 

public class MyDriver { 
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException { 
        if (args.length != 2) { 
            System.err.println("Usage: MyDriver <input path> <output path>"); 
            System.exit(-1); 
        } 
        Job job = new Job(); 
        job.setJarByClass(MyDriver.class); 
        job.setMapperClass(Map.class); 
        job.setReducerClass(Reduce.class); 
        job.setMapOutputKeyClass(Text.class); 
        job.setMapOutputValueClass(Text.class); 
        job.setOutputKeyClass(Text.class); /* reducer output key and value classes */ 
        job.setOutputValueClass(NullWritable.class); 
        job.setInputFormatClass(TextInputFormat.class); // matches the question's input format 
        FileInputFormat.setInputPaths(job, new Path(args[0])); 
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // args[2] would be out of bounds with two arguments 
        boolean success = job.waitForCompletion(true); 
        System.exit(success ? 0 : -1); 
    } 
} 
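[Editor's note] A sketch of how a driver like this is typically packaged and submitted; the jar name and HDFS paths below are placeholders, not from the original post:

```shell
# Submit the job to the cluster; input must exist and output must not.
hadoop jar mydriver.jar MyDriver /user/me/input /user/me/output
```

This is an invocation sketch only; it requires a running Hadoop installation.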

I tried the newer one, but got the same result – numanumu 2014-10-17 13:50:19
