
I am getting the following error when starting the DataNode of the single-node cluster set up on my machine: ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception

2013-02-18 20:21:32,300 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
/************************************************************ 
STARTUP_MSG: Starting DataNode 
STARTUP_MSG: host = somnath-laptop/127.0.1.1 
STARTUP_MSG: args = [] 
STARTUP_MSG: version = 1.0.4 
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012 
************************************************************/ 
2013-02-18 20:21:32,593 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 
2013-02-18 20:21:32,618 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered. 
2013-02-18 20:21:32,620 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 
2013-02-18 20:21:32,620 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started 
2013-02-18 20:21:33,052 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered. 
2013-02-18 20:21:33,056 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists! 
2013-02-18 20:21:37,890 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.IOException: Connection reset by peer 
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1075) 
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225) 
    at sun.proxy.$Proxy5.getProtocolVersion(Unknown Source) 
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396) 
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:370) 
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:429) 
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:331) 
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:296) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:356) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682) 
Caused by: java.io.IOException: Connection reset by peer 
    at sun.nio.ch.FileDispatcher.read0(Native Method) 
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251) 
    at sun.nio.ch.IOUtil.read(IOUtil.java:224) 

Any ideas on how to resolve this error?
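
For reference, in a typical Hadoop 1.x single-node setup the address in the error (localhost:54310) is the NameNode RPC address configured via fs.default.name in $HADOOP_HOME/conf/core-site.xml, along the lines of the illustrative snippet below (an example of the common tutorial value, not copied from the cluster above):

<!-- example only: typical single-node value; adjust to your own setup -->
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>

"Connection reset by peer" on this call means the DataNode could not complete its RPC handshake with the NameNode at that address.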

Answer


OK, I solved the problem.

Since I had been using my single-node cluster through a network proxy, I had earlier added the following property to $HADOOP_HOME/conf/mapred-site.xml so that communication between the Hadoop daemons would bypass the proxy server.

This time, however, I was on a direct internet connection, so I had to comment out the property I had added to mapred-site.xml.

Below is the property from mapred-site.xml that I commented out:

<!-- 
<property> 
<name>hadoop.rpc.socket.factory.class.default</name> 
<value>org.apache.hadoop.net.StandardSocketFactory</value> 
<final>true</final> 
<description> 
    Prevent proxy settings set up by clients in their job configs from affecting our connectivity. 
</description> 
</property> 
--> 
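
After changing the configuration, the daemons need to be restarted for it to take effect; with Hadoop 1.x this is typically done with stop-all.sh followed by start-all.sh, and jps can then be used to confirm that the DataNode process stays up.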

This seems to have been one of my Hadoop problems too, while trying to avoid the loopback on my single-node Hadoop setup. Thanks. – tremendows 2013-12-05 13:48:44
