2017-03-28

I am fairly new to Hadoop. I asked this question earlier, but the comments section was too short to display my log file. This is my previous question. Can anyone help me find the error in this log file? I would greatly appreciate it. Thanks. datanode Hadoop 2.7.3 single-node error (pseudo-distributed mode)

STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z 
STARTUP_MSG: java = 1.8.0_121 
************************************************************/ 
2017-03-27 16:14:50,262 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT] 
2017-03-27 16:14:51,049 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 
2017-03-27 16:14:51,131 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 
2017-03-27 16:14:51,133 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 
2017-03-27 16:14:51,134 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started 
2017-03-27 16:14:51,139 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576 
2017-03-27 16:14:51,142 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is master.hadoop.lan 
2017-03-27 16:14:51,151 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0 
2017-03-27 16:14:51,179 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Shutdown complete. 
2017-03-27 16:14:51,180 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain 
java.net.BindException: Problem binding to [0.0.0.0:50010] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) 
    at org.apache.hadoop.ipc.Server.bind(Server.java:425) 
    at org.apache.hadoop.ipc.Server.bind(Server.java:397) 
    at org.apache.hadoop.hdfs.net.TcpPeerServer.<init>(TcpPeerServer.java:113) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:897) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1111) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:429) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2374) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2261) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2308) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2485) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2509) 
Caused by: java.net.BindException: Address already in use 
    at sun.nio.ch.Net.bind0(Native Method) 
    at sun.nio.ch.Net.bind(Net.java:433) 
    at sun.nio.ch.Net.bind(Net.java:425) 
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) 
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) 
    at org.apache.hadoop.ipc.Server.bind(Server.java:408) 
    ... 10 more 
2017-03-27 16:14:51,184 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 
2017-03-27 16:14:51,186 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************ 
Thanks @franklinsijo for your time, it works!

Answer


From the error log, it looks like a process is already using port 50010:

java.net.BindException: Problem binding to [0.0.0.0:50010] java.net.BindException: Address already in use

In most cases it is the datanode process itself that was not terminated properly.

Get the pid of the process that is using the port:

netstat -ntpl | grep 50010 

tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN <pid>/java 

Also verify the process:

ps -ef | grep <pid> 

Kill the process:

kill -9 <pid> 

Now that the port is free, try restarting the cluster.
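The lookup steps above can be combined into a short shell sketch. The `netstat` output line and pid 4731 below are illustrative samples, not taken from the original logs:

```shell
# Sample line in the format printed by `netstat -ntpl` for a listener on 50010
line='tcp        0      0 0.0.0.0:50010           0.0.0.0:*               LISTEN      4731/java'

# The last field is "<pid>/<program>"; strip everything after the slash
pid=$(printf '%s\n' "$line" | awk '{print $NF}' | cut -d/ -f1)

echo "$pid"   # prints 4731
```

With the pid in hand, `ps -ef | grep "$pid"` confirms it is the stale datanode before running `kill -9 "$pid"`.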

If the process cannot be killed, change the datanode's port by adding this property to hdfs-site.xml:

<property> 
    <name>dfs.datanode.address</name> 
    <value>hostname:different_port</value> 
</property> 
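For illustration, assuming the datanode should bind to all interfaces on an unused port such as 50011 (both values are hypothetical, not from the original question), the property could look like:

```xml
<property>
    <name>dfs.datanode.address</name>
    <!-- hypothetical example: bind to all interfaces on port 50011 -->
    <value>0.0.0.0:50011</value>
</property>
```

After restarting the datanode, `netstat -ntpl | grep 50011` should show it listening on the new port.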

Refer to Hadoop BindException for all possible causes of this error.
