Heap error when using a custom RecordReader with large files

I wrote a custom whole-file reader because my inputs are large gzip files: I don't want them split, and I want my first mapper job to simply gunzip them. I followed the example in 'Hadoop: The Definitive Guide', but I get a heap error when trying to read into a BytesWritable. I believe this is because the byte array is 85713669 bytes long, but I don't know how to solve the problem.
Here is the code:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Reads an entire (unsplit) file as one record: the key is always
// NullWritable and the value is the raw file contents.
public class WholeFileRecordReader extends RecordReader<NullWritable, BytesWritable> {

    private FileSplit fileSplit;
    private Configuration conf;
    private BytesWritable value = new BytesWritable();
    private boolean processed = false;

    @Override
    public void close() throws IOException {
        // Nothing to close; the input stream is closed in nextKeyValue().
    }

    @Override
    public NullWritable getCurrentKey() throws IOException,
            InterruptedException {
        return NullWritable.get();
    }

    @Override
    public BytesWritable getCurrentValue() throws IOException,
            InterruptedException {
        return value;
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return processed ? 1.0f : 0.0f;
    }

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        this.fileSplit = (FileSplit) split;
        this.conf = context.getConfiguration();
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        if (!processed) {
            // Buffer the whole split in memory; for an ~85 MB file this
            // allocation (plus the copy made by value.set) is what exhausts
            // the default task heap.
            byte[] contents = new byte[(int) fileSplit.getLength()];
            Path file = fileSplit.getPath();
            FileSystem fs = file.getFileSystem(conf);
            FSDataInputStream in = null;
            try {
                in = fs.open(file);
                IOUtils.readFully(in, contents, 0, contents.length);
                value.set(contents, 0, contents.length);
            } finally {
                IOUtils.closeStream(in);
            }
            processed = true;
            return true;
        }
        return false;
    }
}
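
If the whole-file reader is kept, one workaround is simply to give the map task JVMs more heap: the reader allocates the whole ~85 MB file as a single array, and BytesWritable.set() copies it again, which together can exceed the default task heap. A minimal sketch, assuming the Hadoop 1.x-era property name (mapred.child.java.opts; newer releases use mapreduce.map.java.opts) and an illustrative, untuned 512 MB figure:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class HeapSizedDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Illustrative value: it just needs to cover the file buffer plus
        // the copy BytesWritable.set() makes, with room to spare.
        conf.set("mapred.child.java.opts", "-Xmx512m");
        Job job = Job.getInstance(conf, "whole-file read with larger heap");
        // ... remaining job setup as usual ...
    }
}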
I didn't know that Hadoop handles GZIP for you; this problem has been bugging me for days. Thanks for clearing it up. – Shane 2013-02-11 13:55:22
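
For reference, a minimal sketch of the built-in route mentioned in the comment above: Hadoop's TextInputFormat looks up a compression codec by file extension, so *.gz inputs are gunzipped transparently, and because gzip is not splittable each file is processed by a single mapper anyway, with no custom RecordReader needed. The driver below is an illustration under those assumptions (identity map/reduce, and the class name GzipPassThroughJob is invented for the example), not the asker's actual job:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class GzipPassThroughJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "gzip pass-through");
        job.setJarByClass(GzipPassThroughJob.class);

        // TextInputFormat consults the registered codecs by extension, so
        // *.gz inputs are decompressed on the fly; gzip is not splittable,
        // so each file goes to exactly one mapper.
        job.setInputFormatClass(TextInputFormat.class);

        // No mapper/reducer set: the identity Mapper and Reducer are used,
        // which is enough to show the transparent decompression.
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}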