2012-10-26
5

I am following this tutorial to install Hadoop: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/ and I am now stuck at the "Copy local example data to HDFS" step. Hadoop connection error with put/copyFromLocal

The connection error I get:

12/10/26 17:29:16 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s). 
12/10/26 17:29:17 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s). 
12/10/26 17:29:18 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s). 
12/10/26 17:29:19 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s). 
12/10/26 17:29:20 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s). 
12/10/26 17:29:21 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s). 
12/10/26 17:29:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s). 
12/10/26 17:29:23 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s). 
12/10/26 17:29:24 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s). 
12/10/26 17:29:25 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s). 
Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused 
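The "Connection refused" in the log above means nothing is accepting TCP connections on the NameNode's RPC port. As a minimal sketch (not part of the original question), you can reproduce that check with a plain socket connect; the host and port `localhost:54310` are taken from the log:

```python
# Minimal sketch: try a plain TCP connect to the NameNode RPC port,
# which is essentially what the Hadoop IPC client is failing to do above.
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if something is listening on host:port, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, timeouts, etc.
        return False

# If this prints False, no NameNode is listening on the port the client uses.
print(can_connect("localhost", 54310))
```

If this returns False, the problem is on the server side (the NameNode is not running or is bound to a different address/port), not with the `put`/`copyFromLocal` command itself.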

This is almost identical to this existing question: Errors while running hadoop

The thing is, I have already disabled IPv6 as described in the tutorial linked above, but it didn't help. Is there something I'm missing? Thanks for your help!

Edit:

I repeated the tutorial on a second machine with a freshly installed Ubuntu and compared it step by step. It turned out there were some errors in hduser's .bashrc configuration. After fixing that, it worked fine...

Answers

4

I got the exact same error message when I tried to run hadoop fs <anything> while the DataNode/NameNode were not running, so I would guess the same thing is happening here.

Type jps in your terminal. If everything is running correctly, the output should look something like:

16022 DataNode 
16524 Jps 
15434 TaskTracker 
15223 JobTracker 
15810 NameNode 
16229 SecondaryNameNode 

I would bet that your DataNode or NameNode is not running. If anything is missing from the jps printout, restart it.
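The check described above can be sketched in a few lines: parse the `jps` output and report which of the expected Hadoop 1.x daemons are absent. This is an illustrative helper, not part of the original answer:

```python
# Minimal sketch: given the text that `jps` prints (as in the answer above),
# report which of the expected Hadoop 1.x daemons are missing.
EXPECTED = {"NameNode", "DataNode", "SecondaryNameNode",
            "JobTracker", "TaskTracker"}

def missing_daemons(jps_output):
    """Return the set of expected daemon names absent from the jps output."""
    running = set()
    for line in jps_output.splitlines():
        parts = line.split()
        if len(parts) >= 2:      # each jps line is "PID Name"
            running.add(parts[1])
    return EXPECTED - running

sample = """16022 DataNode
16524 Jps
15434 TaskTracker
15223 JobTracker
15810 NameNode
16229 SecondaryNameNode"""
print(missing_daemons(sample))  # prints set() -- everything is running
```

Anything this returns is a daemon you would need to restart before `hadoop fs` commands can work.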

+0

The nodes were already running, so that wasn't the problem. But I have found the issue by now; thanks for the help anyway! – kumoyadori

+0

What was the problem? – parisa

0

After the whole configuration, run this command:

hadoop namenode -format

then start all the services with this command:

start-all.sh

This will solve your problem.

0
  1. Go to your etc/hadoop/core-site.xml and check the value of fs.default.name. It should look like this:

     <property>
       <name>fs.default.name</name>
       <value>hdfs://localhost:54310</value>
     </property>

  2. After the whole configuration, run this command:

hadoop namenode -format

  • Then start all the services with this command:
  • start-all.sh

    This will solve your problem.

    Your NameNode may also be running in safe mode. Run bin/hdfs dfsadmin -safemode leave or bin/hadoop dfsadmin -safemode leave, then follow step 2 and step 3 again.
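As a sanity check for step 1 above, the fs.default.name property can be read out of core-site.xml programmatically. This is a minimal illustrative sketch (the XML sample below is an assumption matching the answer's expected value, not the asker's actual file):

```python
# Minimal sketch: read fs.default.name out of a core-site.xml document
# to confirm the client really points at hdfs://localhost:54310.
import xml.etree.ElementTree as ET

def read_property(xml_text, wanted):
    """Return the <value> of the named Hadoop property, or None if absent."""
    root = ET.fromstring(xml_text)
    for prop in root.iter("property"):
        if prop.findtext("name") == wanted:
            return prop.findtext("value")
    return None

sample = """<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>"""
print(read_property(sample, "fs.default.name"))  # hdfs://localhost:54310
```

If the value printed here does not match the address in the client's "Retrying connect to server" log lines, the client and NameNode are configured for different endpoints.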