I am running a simple MapReduce program (WordCount) against Apache Hadoop 2.6.0. Hadoop runs distributed (several nodes). However, I cannot see any stderr or stdout in the YARN job history (though I can see the syslog).
The WordCount program is really simple, just for demo purposes:
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static final Log LOG = LogFactory.getLog(WordCount.class);

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            LOG.info("LOG - map function invoked");
            System.out.println("stdout - map function invoded");
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapreduce.job.jar", "/space/tmp/jar/wordCount.jar");
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("hdfs://localhost:9000/user/jsun/input"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:9000/user/jsun/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Note that I added two statements:
LOG.info("LOG - map function invoked");
System.out.println("stdout - map function invoded");
These two statements are there to test whether I can see the logging on the Hadoop server side. I can run the program successfully. However, if I go to localhost:8088 to see the application history and then "logs", I see nothing under "stdout", and under "stderr" only:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.ipc.Server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
I think some configuration is needed to get those outputs, but I am not sure which piece is missing. I searched the web as well as stackoverflow. Some people mentioned container-log4j.properties, but they did not say specifically how to configure that file and where to put it.
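For context, the "No appenders could be found" warning just means no appender is wired up for that logger. A minimal log4j 1.2 configuration with a console appender would look roughly like this; this is only a sketch of standard log4j 1.2 settings, and whether container-log4j.properties expects exactly this shape (and where it must live on the classpath) is precisely what I cannot find documented:

```properties
# Root logger at INFO, sending everything to a console appender on stderr
log4j.rootLogger=INFO, CLA
log4j.appender.CLA=org.apache.log4j.ConsoleAppender
log4j.appender.CLA.target=System.err
log4j.appender.CLA.layout=org.apache.log4j.PatternLayout
log4j.appender.CLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```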
One thing to note: I also tried the job with Hortonworks Data Platform 2.2 and Cloudera 5.4. The result is the same. I remember that when I worked with some earlier hadoop versions (hadoop 1.x), I could easily see the logging from the same place. So my guess is that this is something about hadoop 2.x.
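One thing I have been checking as part of this: whether log aggregation is enabled on the cluster, since without it the per-container stdout/stderr files may not be visible after the containers exit. The property name below is the standard YARN one; I am not claiming this is the fix, just noting what I have in yarn-site.xml:

```xml
<!-- yarn-site.xml: keep container logs (stdout/stderr/syslog) after
     containers finish, so the history UI can serve them -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
```

With that enabled, the aggregated logs for a finished job should also be retrievable with `yarn logs -applicationId <appId>` from the command line, but in my case the stdout/stderr sections are still empty.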
=======
Something new, as a comparison: if I run the program in local mode on Apache Hadoop (meaning with LocalJobRunner), I can see some logging in the console like this:
[2015-09-08 15:57:25,992]org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:998) INFO:kvstart = 26214396; length = 6553600
[2015-09-08 15:57:25,996]org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:402) INFO:Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
[2015-09-08 15:57:26,064]WordCount$TokenizerMapper.map(WordCount.java:28) INFO:LOG - map function invoked
stdout - map function invoded
[2015-09-08 15:57:26,075]org.apache.hadoop.mapred.LocalJobRunner$Job.statusUpdate(LocalJobRunner.java:591) INFO:
[2015-09-08 15:57:26,077]org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1457) INFO:Starting flush of map output
[2015-09-08 15:57:26,077]org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1475) INFO:Spilling map output
These kinds of log lines ("map function invoked") are what I was expecting in the Hadoop server-side logging.
This question already states clearly that the expected output is missing from "stdout" in the YARN job history. I was not trying to find the output in the "console". Also, since you are talking about the "job tracker", I think you mean MapReduce v1; but this question is about YARN. – mattsun