2014-03-19

When I type hadoop fs -ls, the command keeps retrying the connection to the server and I get the following error messages. Why?

deepak@deepak:~$ hadoop fs -ls 
14/03/19 12:18:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
14/03/19 12:18:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 

The output of formatting the Hadoop namenode is:

deepak@deepak:~/programs/hadoop-1.2.0/bin$ hadoop namenode -format 
14/03/19 14:11:22 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************ 
STARTUP_MSG: Starting NameNode 
STARTUP_MSG: host = deepak/127.0.1.1 
STARTUP_MSG: args = [-format] 
STARTUP_MSG: version = 1.2.0 
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May 6 06:59:37 UTC 2013 
STARTUP_MSG: java = 1.7.0_51 
************************************************************/ 
14/03/19 14:11:22 INFO util.GSet: Computing capacity for map BlocksMap 
14/03/19 14:11:22 INFO util.GSet: VM type  = 32-bit 
14/03/19 14:11:22 INFO util.GSet: 2.0% max memory = 932184064 
14/03/19 14:11:22 INFO util.GSet: capacity  = 2^22 = 4194304 entries 
14/03/19 14:11:22 INFO util.GSet: recommended=4194304, actual=4194304 
14/03/19 14:11:23 INFO namenode.FSNamesystem: fsOwner=deepak 
14/03/19 14:11:23 INFO namenode.FSNamesystem: supergroup=supergroup 
14/03/19 14:11:23 INFO namenode.FSNamesystem: isPermissionEnabled=true 
14/03/19 14:11:23 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100 
14/03/19 14:11:23 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s) 
14/03/19 14:11:23 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0 
14/03/19 14:11:23 INFO namenode.NameNode: Caching file names occuring more than 10 times 
14/03/19 14:11:23 INFO common.Storage: Image file of size 112 saved in 0 seconds. 
14/03/19 14:11:24 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-deepak/dfs/name/current/edits 
14/03/19 14:11:24 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-deepak/dfs/name/current/edits 
14/03/19 14:11:24 INFO common.Storage: Storage directory /tmp/hadoop-deepak/dfs/name has been successfully formatted. 
14/03/19 14:11:24 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at deepak/127.0.1.1 
************************************************************/ 

Answers


The best way to solve it is:

Check that the Hadoop daemons are running correctly using the jps command. Then format the namenode with:

bin/hadoop namenode -format

More information here:

http://www.77-thoughts.com/hadoop-info-ipc-client-retrying-connect-to-server-localhost127-0-0-19000/
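The steps above can be sketched as one recovery sequence. This is a minimal sketch, assuming a Hadoop 1.x installation whose stop-all.sh/start-all.sh scripts and hadoop binary are on the PATH; note that formatting the namenode erases existing HDFS metadata:

```shell
# Recovery sketch for "Retrying connect to server: localhost/127.0.0.1:9000".
# WARNING: "hadoop namenode -format" wipes the namenode's storage directory.
if command -v hadoop >/dev/null 2>&1; then
    stop-all.sh              # stop any half-started daemons first
    hadoop namenode -format  # re-initialize the namenode storage directory
    start-all.sh             # start the HDFS and MapReduce daemons
    jps                      # NameNode should now appear in this list
    status="restarted"
else
    echo "hadoop not found on PATH; skipping recovery sequence"
    status="skipped"
fi
```

If NameNode still does not appear in the jps output after this, the namenode logs (see the last comment below the second answer) are the next place to look.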

You can also set a different HDFS directory in core-site.xml (under $HADOOP_CONF_DIR).


The default directory for HDFS data is '/tmp/hadoop-$username', which disappears when the system restarts, so you will need to run '-format' after every reboot. You can avoid this by setting the HDFS default directory to some other location in 'core-site.xml'. – tuxdna
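A sketch of such a core-site.xml, assuming Hadoop 1.x: the property names hadoop.tmp.dir and fs.default.name are the standard ones for that version, but the path /home/deepak/hadoop-tmp is only an illustrative choice; any directory outside /tmp that survives reboots will do.

```xml
<!-- $HADOOP_CONF_DIR/core-site.xml -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <!-- illustrative path; pick any directory that persists across reboots -->
    <value>/home/deepak/hadoop-tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```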


You can check your namenode's status. Run 'jps' on the namenode machine and see what is listed. It may be that the namenode is down.


deepak@deepak:~/programs/hadoop-1.2.0/bin$ jps 2718 DataNode 3298 TaskTracker 3058 JobTracker 2962 SecondaryNameNode 4560 Jps – DeepakZephyr


I don't think my namenode is running. What should I do now? – DeepakZephyr


Check the namenode logs to see if there are any errors. They are probably in the '$HADOOP_HOME/logs/' directory on the namenode machine. – vefthym
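The jps output quoted in the comments can be checked mechanically. A minimal sketch, using that output verbatim: each jps line is "<pid> <class>", so a running namenode shows up as a line ending in exactly "NameNode", which SecondaryNameNode does not match.

```shell
# The jps output from the comment above; note there is no NameNode line,
# even though SecondaryNameNode is present.
jps_output='2718 DataNode
3298 TaskTracker
3058 JobTracker
2962 SecondaryNameNode
4560 Jps'

# "^[0-9]+ NameNode$" matches "<pid> NameNode" but not "<pid> SecondaryNameNode".
if echo "$jps_output" | grep -Eq '^[0-9]+ NameNode$'; then
    namenode_status="running"
else
    namenode_status="not running"
fi
echo "NameNode is $namenode_status"
```

Against the quoted output this prints "NameNode is not running", which is exactly the situation the error messages in the question describe.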