
UnregisteredNodeException prevents the DataNode on the slave from starting

I have a two-machine Hadoop 2.5 cluster. On the slave, the datanode fails to start with an UnregisteredNodeException. Here is what jps shows on the master:

master$ jps 
5036 Jps 
7145 DataNode 
918 ResourceManager 
7338 SecondaryNameNode 
6986 NameNode 
1105 NodeManager 

And on the slave:

slave$ jps 
15950 Jps 
26650 NodeManager 

Here is the full stack trace from hadoop-hadoop-datanode-slave.log:

2014-10-23 19:43:46,895 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-8947225-127.0.1.1-1409591980216 (Datanode Uuid 5c9f00ab-1d75-4706-8ed8-bfb449174c9a) service to hadoop-server/192.168.2.72:8020 is shutting down 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.UnregisteredNodeException): Data node DatanodeRegistration(192.168.2.73, datanodeUuid=5c9f00ab-1d75-4706-8ed8-bfb449174c9a, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-ab378c59-62ed-44ff-8814-03b5b733b6fa;nsid=1290295317;c=0) is attempting to report storage ID 5c9f00ab-1d75-4706-8ed8-bfb449174c9a. Node 192.168.2.72:50010 is expected to serve this storage. 
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanode(DatanodeManager.java:475) 
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1702) 
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1049) 
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:152) 
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28061) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) 
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) 
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) 

    at org.apache.hadoop.ipc.Client.call(Client.java:1411) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1364) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) 
    at com.sun.proxy.$Proxy11.blockReport(Unknown Source) 
    at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
    at com.sun.proxy.$Proxy11.blockReport(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:214) 
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:476) 
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:699) 
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:834) 
    at java.lang.Thread.run(Thread.java:745) 

When I issue hdfs dfsadmin -report on either machine, I see the following:

14/10/31 10:48:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
Configured Capacity: 12547440640 (11.69 GB) 
Present Capacity: 4710391808 (4.39 GB) 
DFS Remaining: 4504489984 (4.20 GB) 
DFS Used: 205901824 (196.36 MB) 
DFS Used%: 4.37% 
Under replicated blocks: 4 
Blocks with corrupt replicas: 0 
Missing blocks: 0 

------------------------------------------------- 
Live datanodes (1): 

Name: 192.168.2.72:50010 (hadoop-server) 
Hostname: hadoop-server 
Decommission Status : Normal 
Configured Capacity: 12547440640 (11.69 GB) 
DFS Used: 205901824 (196.36 MB) 
Non DFS Used: 7837048832 (7.30 GB) 
DFS Remaining: 4504489984 (4.20 GB) 
DFS Used%: 1.64% 
DFS Remaining%: 35.90% 
Configured Cache Capacity: 0 (0 B) 
Cache Used: 0 (0 B) 
Cache Remaining: 0 (0 B) 
Cache Used%: 100.00% 
Cache Remaining%: 0.00% 
Xceivers: 1 
Last contact: Fri Oct 31 10:48:35 CET 2014 

Also, I can see and monitor the slave machine from the ResourceManager web UI. So what caused this failure, and how do I fix it?


Can you SSH from the master to the slave without a password? Have you tried (re)formatting the namenode from the master? Is the slave configured correctly to connect to the master? Does HDFS work on the master? What about on the slave? – 2014-10-31 09:55:40


Thanks for the hints. I can SSH without a password from the master to the slave and vice versa. HDFS works on both machines; for example, when I issue 'hdfs dfs -ls' I get the same result on both. I haven't tried (re)formatting the namenode. – bachr 2014-10-31 10:02:32


I don't see how HDFS can be working on the slave, since no datanode is running there. If you put a file in from the master, I'd guess the slave doesn't see it? Try reformatting the namenode. – 2014-10-31 11:37:39
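
A quick way to check where block replicas actually live, and to confirm that only the master's datanode holds them, is hdfs fsck. This is a generic check rather than something from the thread above, and /tmp/test.txt is just an arbitrary example path:

hdfs dfs -put /etc/hosts /tmp/test.txt                 # write a small test file from the master
hdfs fsck /tmp/test.txt -files -blocks -locations      # ask the namenode where the block replicas live

If every reported location is 192.168.2.72:50010, then client commands like 'hdfs dfs -ls' merely show that the slave can talk to the master's namenode; the slave's own datanode has never registered.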

Answer


I had actually cloned a virtual machine to create the slave, which left all nodes with the same datanode UUID. So (as discussed here) I shut down the services on the slave:

hadoop-daemon.sh stop datanode 
yarn-daemon.sh stop nodemanager 
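
Before wiping anything, the duplicated-UUID diagnosis can be confirmed by comparing the datanode VERSION files on the two machines. The path below is a placeholder; substitute whatever dfs.datanode.data.dir points to in your hdfs-site.xml:

cat /path/to/dfs/data/current/VERSION    # run on both machines and compare

On cloned VMs the datanodeUuid lines will be identical; here both would show 5c9f00ab-1d75-4706-8ed8-bfb449174c9a, matching the UUID in the exception above.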

Then I deleted the datanode and namenode directories (the ones configured as dfs.datanode.data.dir and dfs.namenode.name.dir respectively) and restarted the datanode and the namenode (a sketch of these commands follows the jps output below). Now I can see the datanode up and running:

$ jps 
17135 NodeManager 
17290 DataNode 
18221 Jps 
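
The answer above doesn't spell out the delete-and-restart commands, so here is a minimal sketch of that sequence. The storage paths are placeholders for whatever your hdfs-site.xml configures, and reformatting destroys all HDFS metadata, so this is only reasonable on a cluster whose data is disposable:

rm -rf /path/to/dfs/data/*          # on each datanode machine: drop the old storage (a fresh UUID is generated on next start)
rm -rf /path/to/dfs/name/*          # on the master: drop the namenode metadata
hdfs namenode -format               # on the master: reinitialize the metadata directory
hadoop-daemon.sh start namenode     # on the master
hadoop-daemon.sh start datanode     # on each datanode machine
yarn-daemon.sh start nodemanager    # on the slave

Note that a format generates a new clusterID, so any datanode keeping its old storage directory would afterwards fail with incompatible clusterIDs; that is why the datanode directories are wiped on both machines. If you only need to fix the cloned slave and want to keep existing HDFS data, deleting just the slave's datanode directory and skipping the format is usually enough: the datanode registers with a freshly generated UUID on its next start.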