Below is the driver code of my simple MapReduce program; the problem concerns the output directory in the JobConf.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
@SuppressWarnings("deprecation")
public class CsvParserDriver {
@SuppressWarnings("deprecation")
    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.out.println("usage: [input] [output]");
            System.exit(-1);
        }
        JobConf conf = new JobConf(CsvParserDriver.class);
        Job job = new Job(conf);
        conf.setJobName("CsvParserDriver");
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        conf.setMapperClass(CsvParserMapper.class);
        conf.setMapOutputKeyClass(IntWritable.class);
        conf.setMapOutputValueClass(Text.class);
        conf.setReducerClass(CsvParserReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);
        conf.set("splitNode", "NUM_AE");
        JobClient.runJob(conf);
    }
}
I run my code with the following command:
hadoop jar CsvParser.jar CsvParserDriver /user/sritamd/TestData /user/sritamd/output
(all the corresponding jars and the directories in the command above exist)
and I get the following error:
Exception in thread "main" org.apache.hadoop.mapred.InvalidJobConfException: Output directory not set in JobConf.
Suppose I want to write to some other database using a custom RecordWriter (not MySQL, since a RecordWriter for that already exists in Hadoop) — what should be configured to get rid of this exception? – iec2011007
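A likely cause of the exception: the driver mixes the old `org.apache.hadoop.mapred` API (`JobConf`, `JobClient.runJob`) with the new `org.apache.hadoop.mapreduce` API (`Job`, and the `lib.input`/`lib.output` versions of `FileInputFormat`/`FileOutputFormat`). The input and output paths are set on the `Job`'s copy of the configuration, but `JobClient.runJob(conf)` submits the original `JobConf`, which never sees the output path — hence "Output directory not set in JobConf". A minimal sketch of the same driver written consistently against the new API (assuming `CsvParserMapper` and `CsvParserReducer` extend the new-API `Mapper`/`Reducer` base classes; not tested against your cluster):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CsvParserDriver {
    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.out.println("usage: [input] [output]");
            System.exit(-1);
        }
        // One Job object carries all configuration and is the one submitted
        Job job = new Job();
        job.setJarByClass(CsvParserDriver.class);
        job.setJobName("CsvParserDriver");

        // Paths now live in the same configuration the job is submitted with
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(CsvParserMapper.class);
        job.setMapOutputKeyClass(IntWritable.class);
        job.setMapOutputValueClass(Text.class);
        job.setReducerClass(CsvParserReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        // Custom properties go on the job's configuration
        job.getConfiguration().set("splitNode", "NUM_AE");

        // Submit via the new API instead of JobClient.runJob(conf)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Alternatively, if the mapper and reducer use the old API, drop the `Job` object entirely and set the paths on the `JobConf` with the old-API `org.apache.hadoop.mapred.FileInputFormat`/`FileOutputFormat` before calling `JobClient.runJob(conf)`. For writing to another database, the same rule applies: a custom `OutputFormat`/`RecordWriter` must be registered on whichever object is actually submitted (e.g. `job.setOutputFormatClass(...)` in the new API), since such an output format typically does not require `FileOutputFormat.setOutputPath` at all.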