I'm hitting an error when appending to a file on HDFS (Cloudera 2.0.0-cdh4.2.0): java.io.IOException: Failed to add a datanode. The use case that triggers it:
- Create a file on the file system (DistributedFileSystem). OK
- Append to the previously created file. ERROR
// append() is called on a FileSystem instance, not statically
OutputStream stream = fileSystem.append(filePath);
stream.write(fileContents);
The append then throws:
Exception in thread "main" java.io.IOException: Failed to add a datanode.
User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[host1:50010, host2:50010], original=[host1:50010, host2:50010])
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:792)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:852)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:958)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:469)
Some relevant HDFS configs:
dfs.replication
set to 2
dfs.client.block.write.replace-datanode-on-failure.policy
set to DEFAULT
dfs.client.block.write.replace-datanode-on-failure
set to true
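For reference, the error message itself points at the replace-datanode-on-failure feature. A possible workaround sketch (assuming the client picks up hdfs-site.xml; setting the policy to NEVER disables the pipeline-recovery datanode replacement, which is usually only reasonable on small clusters where replication is close to the cluster size):

```
<!-- hdfs-site.xml, client side - a workaround sketch, not a confirmed fix -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
```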
Any ideas? Thanks!