I am having trouble running a MapReduce program from Eclipse on Windows against a Hadoop pseudo-distributed cluster on an Ubuntu VM hosted on the same machine. I am running Hadoop 1.0.3. I tried Eclipse Juno as well as Ganymede with the Eclipse plugin 1.0.3, but without success. Whenever I add my Linux VM as a Map/Reduce location, I get a connection error. I also tried setting the properties directly through the configuration below, with no more luck. I run this as a Java application.
Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://192.168.65.132:8020/");  // NameNode RPC address
conf.set("mapred.job.tracker", "192.168.65.132:8021");       // JobTracker address
conf.set("hadoop.job.ugi", "hadoop");                        // user to submit the job as
I get the following error message:
13/01/08 09:59:44 INFO ipc.Client: Retrying connect to server: 192.168.65.132/192.168.65.132:8020. Already tried 7 time(s).
13/01/08 09:59:46 INFO ipc.Client: Retrying connect to server: 192.168.65.132/192.168.65.132:8020. Already tried 8 time(s).
13/01/08 09:59:48 INFO ipc.Client: Retrying connect to server: 192.168.65.132/192.168.65.132:8020. Already tried 9 time(s).
Exception in thread "main" java.net.ConnectException: Call to 192.168.65.132/192.168.65.132:8020 failed on connection exception: java.net.ConnectException: Connection refused: no further information
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
at org.apache.hadoop.ipc.Client.call(Client.java:1075)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:238)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(FileInputFormat.java:372)
at inverika.training.examples.WordCount.main(WordCount.java:37)
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1206)
at org.apache.hadoop.ipc.Client.call(Client.java:1050)
... 17 more
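For context, "Connection refused" means nothing was listening on the exact address and port the client dialed. A minimal stand-alone sketch (not part of the Hadoop code above; `LoopbackBindDemo` is a hypothetical class name) reproduces the same exception with plain sockets:

```java
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class LoopbackBindDemo {
    public static void main(String[] args) throws Exception {
        // Bind a listener to the loopback interface only, the way a daemon
        // ends up bound when its configured address is localhost.
        ServerSocket ss = new ServerSocket();
        ss.bind(new InetSocketAddress("127.0.0.1", 0));
        int port = ss.getLocalPort();

        // A client on the same host can reach it via 127.0.0.1.
        try (Socket ok = new Socket("127.0.0.1", port)) {
            System.out.println("loopback connect: ok");
        }
        ss.close();

        // Once nothing listens on the address:port the client dials,
        // the connect fails immediately with "Connection refused".
        try (Socket bad = new Socket()) {
            bad.connect(new InetSocketAddress("127.0.0.1", port), 1000);
        } catch (ConnectException e) {
            System.out.println("connect failed: " + e.getMessage());
        }
    }
}
```

The same failure mode applies here: the Windows host dials 192.168.65.132:8020, and if no socket is bound on that address, the VM's kernel refuses the connection outright.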
Has anyone tried a similar setup? Please share your experience.
Netstat dump:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp6 0 0 127.0.0.1:8020 :::* LISTEN
tcp6 0 0 127.0.0.1:8021 :::* LISTEN
tcp6 0 0 127.0.0.1:58399 127.0.0.1:8020 ESTABLISHED
tcp6 0 0 127.0.0.1:8020 127.0.0.1:58399 ESTABLISHED
tcp6 0 0 127.0.0.1:45042 127.0.0.1:8021 ESTABLISHED
tcp6 0 0 127.0.0.1:8021 127.0.0.1:45042 ESTABLISHED
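The netstat dump shows both the NameNode (8020) and the JobTracker (8021) listening on 127.0.0.1 only, so connections from the Windows host are refused. A sketch of the likely fix, assuming a standard Hadoop 1.x pseudo-distributed setup where these addresses currently point at localhost, is to configure the daemons with the VM's routable address (192.168.65.132, from the question) and restart the cluster:

```xml
<!-- core-site.xml on the VM: use the VM's routable address (or a hostname
     that resolves to it) so the NameNode does not bind to loopback only -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.65.132:8020</value>
</property>

<!-- mapred-site.xml on the VM: same change for the JobTracker -->
<property>
  <name>mapred.job.tracker</name>
  <value>192.168.65.132:8021</value>
</property>
```

After restarting the daemons, netstat should show the listeners on 192.168.65.132:8020 and 192.168.65.132:8021 rather than 127.0.0.1.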
Can you run a `netstat -atn` in a terminal and confirm your JT and NN are listening on the ports specified (8020 and 8021)? –
Hi Chris, here is the output from the command:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp6 0 0 127.0.0.1:8020 :::* LISTEN
tcp6 0 0 127.0.0.1:8021 :::* LISTEN
tcp6 0 0 127.0.0.1:58399 127.0.0.1:8020 ESTABLISHED
tcp6 0 0 127.0.0.1:8020 127.0.0.1:58399 ESTABLISHED
tcp6 0 0 127.0.0.1:45042 127.0.0.1:8021 ESTABLISHED
tcp6 0 0 127.0.0.1:8021 127.0.0.1:45042 ESTABLISHED – Rags