7
[email protected]:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/input /user/haduser/input

11/12/14 14:21:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
11/12/14 14:21:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
11/12/14 14:21:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s).
11/12/14 14:21:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s).
11/12/14 14:21:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s).
11/12/14 14:21:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s).
11/12/14 14:21:06 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s).
11/12/14 14:21:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s).
11/12/14 14:21:08 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s).
11/12/14 14:21:09 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s).

Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused

I get the above error when I try to copy files from /tmp/input to /user/haduser/input, even though /etc/hosts contains the localhost entry. When I run the jps command, TaskTracker and NameNode are not listed while Hadoop is running.

What could be the problem? Please, could someone help me with this?

Answers

1

The NameNode (NN) maintains the HDFS namespace, and it must be running for file system operations on HDFS to work. Check the logs for why the NN has not started. The TaskTracker is not needed for HDFS operations; the NN and DN alone are sufficient. See the http://goo.gl/8ogSk and http://goo.gl/NIWoK tutorials on setting up Hadoop on a single node and on multiple nodes.
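
As a quick check, something along these lines should show whether the NN is up (a minimal sketch; the log path assumes the default Hadoop 0.20.x layout and may differ on your install):

$ jps
# NameNode and DataNode should both appear in the output

# if the NameNode is missing, look at its log for the startup failure
$ tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log

# then try bringing HDFS back up
$ bin/start-dfs.sh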

+0

I followed the same links for the installation, but I am still getting the error. – 2011-12-15 15:45:12

1

All the files in bin are executables. Just copy the command and paste it into the terminal. Make sure the address is correct, i.e. the user name must be replaced with yours. That should do it.
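
For example, with the user from the question substituted in (haduser is just the name used above; replace it with your own):

$ bin/hadoop dfs -copyFromLocal /tmp/input /user/haduser/input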

9

I had a similar problem - it turned out Hadoop was binding to IPv6. I then added "export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true" to $HADOOP_HOME/conf/hadoop-env.sh.

Hadoop kept binding to IPv6 even though I had disabled IPv6 on the system. Once I added that line to the env file, it started working fine.
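
For reference, the change looks like this (a sketch; the conf path assumes the 0.20.x layout, and the netstat flags are GNU/Linux-specific):

# appended to $HADOOP_HOME/conf/hadoop-env.sh to force IPv4
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true

# after restarting HDFS, the NameNode should listen on an IPv4 socket
# (54310 is the port from the logs above)
$ netstat -tlnp | grep 54310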

Hope this helps someone.

3

Try SSHing to your local system using the IP, in this case:

$ ssh 127.0.0.1

Once you are able to SSH successfully, run the command below to see the list of open ports:

~$ lsof -i

Look for a listening connector with the name localhost:<PORTNAME> (LISTEN).
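
The relevant line will look something like this (the PID, device numbers, and port here are illustrative, not taken from the question):

$ lsof -i
COMMAND   PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    12345 haduser  110u  IPv4  98765      0t0  TCP localhost:54310 (LISTEN)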

Copy that <PORTNAME> and replace the existing port number in the fs.default.name property tag of core-site.xml in the Hadoop conf folder.

Save core-site.xml, and this should resolve the problem.
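
A minimal sketch of the resulting file, assuming the NameNode turns out to be listening on 54310 as in the logs above (your port may differ):

$ cat $HADOOP_HOME/conf/core-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>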