2013-07-03
3

I am trying to move a file from a temporary directory into Hadoop with the command below, but it fails with a Hadoop dfs -copyFromLocal error:

[Divya@localhost hadoop]$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg /home/Divya/gutenberg 

How do I resolve this error with the NameNode process?

13/07/03 14:42:28 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
13/07/03 14:42:29 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
13/07/03 14:42:30 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
13/07/03 14:42:31 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
13/07/03 14:42:32 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
13/07/03 14:42:33 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
13/07/03 14:42:34 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
13/07/03 14:42:35 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
13/07/03 14:42:36 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
13/07/03 14:42:37 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
copyFromLocal: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused 

Answers

2

Check whether the NameNode is running on your localhost and listening on port 54310.
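One quick way to check this (a sketch, assuming the default single-node setup where fs.default.name points at hdfs://localhost:54310) is to look both for a listener on the port and for a NameNode JVM:

```shell
# Is anything listening on the NameNode RPC port (54310 in this setup)?
# ss is the modern replacement for netstat; either tool works here.
(ss -tln 2>/dev/null || netstat -tln 2>/dev/null) | grep -q 54310 \
  && echo "port 54310 is up" \
  || echo "nothing listening on 54310 - the NameNode is probably down"

# Is a NameNode JVM running at all? (jps ships with the JDK.)
if command -v jps >/dev/null 2>&1; then
  jps | grep -qi namenode && echo "NameNode process found" || echo "no NameNode process"
fi
```

If nothing is listening, the retry loop in the question is exactly what you would expect: the HDFS client has nobody to connect to.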

+1

Could you please tell me how to check whether the NameNode is running? Do you want me to stop it and check the messages, or is there some way to check whether the service is down? – Surya

+1

No, the NameNode is not running on my local machine; could you help me with this? Here are the details:

[Divya@localhost hadoop]$ /home/Divya/hadoop/bin/stop-all.sh 
stopping jobtracker 
localhost: stopping tasktracker 
no namenode to stop 
localhost: stopping datanode 
localhost: stopping secondarynamenode 
[Divya@localhost hadoop]$ 

– Surya

+2

@user2499617 Please show me your NameNode logs. It looks like there is a problem with your configuration. – Tariq
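For reference, the log Tariq is asking about normally sits under the Hadoop logs directory. This is only a sketch: it assumes the Hadoop 1.x layout and the /home/Divya/hadoop install path from the question.

```shell
HADOOP_DIR="${HADOOP_HOME:-/home/Divya/hadoop}"   # assumed install path
# The NameNode log name embeds the user and hostname, hence the glob.
ls "$HADOOP_DIR"/logs/hadoop-*-namenode-*.log 2>/dev/null \
  || echo "no NameNode log under $HADOOP_DIR/logs - adjust HADOOP_DIR"
# Show the tail of each NameNode log found; startup exceptions land here.
for f in "$HADOOP_DIR"/logs/hadoop-*-namenode-*.log; do
  if [ -f "$f" ]; then tail -n 50 "$f"; fi
done
```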

1

Check whether the Hadoop daemon services are running properly. Use the jps command as root to verify that they are up, and make sure the cluster is set up correctly.
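As a sketch of the usual recovery on a Hadoop 1.x pseudo-distributed node (the path assumes the /home/Divya/hadoop install from the question; the commented-out format step wipes HDFS data and is only for a brand-new cluster):

```shell
HADOOP_DIR=/home/Divya/hadoop   # assumed install path from the question
if [ -x "$HADOOP_DIR/bin/start-all.sh" ]; then
  "$HADOOP_DIR/bin/stop-all.sh"
  # First-time setup only - this ERASES any existing HDFS data:
  #   "$HADOOP_DIR/bin/hadoop" namenode -format
  "$HADOOP_DIR/bin/start-all.sh"
  # A healthy pseudo-distributed 1.x node lists NameNode, DataNode,
  # SecondaryNameNode, JobTracker and TaskTracker here:
  jps
else
  echo "Hadoop not found under $HADOOP_DIR - adjust HADOOP_DIR first"
fi
```

If "no namenode to stop" keeps appearing even after start-all.sh, the NameNode is dying at startup, and its log (see the comment above asking for logs) will say why.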

+0

jps is not installed on my machine; I am working on it and will let you know once it is done. Is there any other way to check the logs instead of the jps command? – Surya

+1

In case it helps anyone in the future, jps can be found at $JAVA_HOME/bin/jps. This question discusses it in more detail: http://stackoverflow.com/questions/11738070/hadoop-cannot-use-jps-command – Aditya
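To expand on that comment, here is a hedged sketch for locating jps. It ships with the JDK rather than the JRE, which is why a JRE-only install (as on some Fedora setups) lacks it; ps is a fallback that works everywhere.

```shell
# Prefer jps from the PATH; fall back to $JAVA_HOME/bin/jps.
if command -v jps >/dev/null 2>&1; then
  jps
elif [ -n "${JAVA_HOME:-}" ] && [ -x "$JAVA_HOME/bin/jps" ]; then
  "$JAVA_HOME/bin/jps"
else
  # Without jps you can still spot the daemons with ps; the bracket in the
  # pattern stops grep from matching its own command line.
  ps aux | grep -i '[n]amenode' || echo "no NameNode java process visible"
fi
```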

+0

jps works on my Ubuntu machine but not on Fedora –