2017-04-13

When I run `sqoop import ...` into Hive, I get this error: the file could only be replicated to 0 nodes instead of minReplication (=1), even though 2 datanodes are running and no nodes are excluded in the operation.

namenode log 
java.io.IOException: File /input/xxxx/_temporary/1/_temporary/attempt_1492073551248_0012_m_000002_1/part-m-00002 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation. 
datanode logs 
slave1 :2017-04-13 19:58:59,444 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.149.141:50010, dest: /192.168.149.141:42764, bytes: 451, op: HDFS_READ, cliID: DFSClient_attempt_1492073551248_0012_m_000001_2_785964301_1, offset: 0, srvID: f274418e-04b6-4109-9521-e3c384c21ad0, blockid: BP-219683118-192.168.149.138-1491539013447:blk_1073742751_1927, duration: 160511 

datanode logs 
slave2: 2017-04-13 19:58:02,389 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.149.141:34576, dest: /192.168.149.142:50010, bytes: 127362723, op: HDFS_WRITE, cliID: DFSClient_attempt_1492073551248_0012_m_000000_0_-417808976_1, offset: 0, srvID: 7f9110ab-8a1d-4a32-8219-aff6e3cd29b2, blockid: BP-219683118-192.168.149.138-1491539013447:blk_1073742761_1937, duration: 64254909353 
2017-04-13 19:58:02,389 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-219683118-192.168.149.138-1491539013447:blk_1073742761_1937, type=LAST_IN_PIPELINE, downstreams=0:[] terminating 
2017-04-13 19:58:11,269 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.149.141:34588, dest: /192.168.149.142:50010, bytes: 134217728, op: HDFS_WRITE, cliID: DFSClient_attempt_1492073551248_0012_m_000002_1_-2031862368_1, offset: 0, srvID: 7f9110ab-8a1d-4a32-8219-aff6e3cd29b2, blockid: BP-219683118-192.168.149.138-1491539013447:blk_1073742762_1938, duration: 63824306914 
2017-04-13 19:58:11,270 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-219683118-192.168.149.138-1491539013447:blk_1073742762_1938, type=LAST_IN_PIPELINE, downstreams=0:[] terminating 
2017-04-13 19:58:15,441 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:349ms (threshold=300ms) 
2017-04-13 19:58:15,769 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:328ms (threshold=300ms) 
2017-04-13 19:58:28,675 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.149.142:51700, dest: /192.168.149.142:50010, bytes: 134217728, op: HDFS_WRITE, cliID: DFSClient_attempt_1492073551248_0012_m_000003_1_-395038848_1, offset: 0, srvID: 7f9110ab-8a1d-4a32-8219-aff6e3cd29b2, blockid: BP-219683118-192.168.149.138-1491539013447:blk_1073742763_1939, duration: 52247885321 
2017-04-13 19:58:28,675 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-219683118-192.168.149.138-1491539013447:blk_1073742763_1939, type=LAST_IN_PIPELINE, downstreams=0:[] terminating 
2017-04-13 19:58:28,689 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-219683118-192.168.149.138-1491539013447:blk_1073742764_1940 src: /192.168.149.142:51718 dest: /192.168.149.142:50010 

Any ideas on how to solve this? Thanks!


Is there enough space on the datanodes? – franklinsijo


DFS usage is 10.45% on one datanode and 4.12% on the other. Also, both datanodes are working normally. –


Can you share the datanode logs? –

Answer


Check https://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo. I hit the same error when running a large number of queries in a short time, so I increased the handler threads on the datanodes by setting "dfs.datanode.handler.count", which fixed the problem for me. This exception can be thrown for many different reasons, so go through the link to see which case applies to you.
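As a sketch, the property goes in `hdfs-site.xml` on each datanode, followed by a datanode restart. The value 32 below is only an illustrative bump over the default of 10; tune it to your workload:

```xml
<!-- hdfs-site.xml on each datanode -->
<property>
  <name>dfs.datanode.handler.count</name>
  <!-- default is 10; a higher value lets the datanode serve
       more concurrent block requests -->
  <value>32</value>
</property>
```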
