Hadoop command hadoop fs -ls keeps retrying to connect to the server — what causes this error?

When I type hadoop fs -ls, I get the following error message:
[email protected]:~$ hadoop fs -ls
14/03/19 12:18:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/03/19 12:18:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
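The log above means the client cannot reach a NameNode at localhost:9000, which usually indicates the HDFS daemons are not running. A quick diagnostic sketch (assuming the JDK's jps tool and netstat are available on this machine):

```shell
# Quick check: is a NameNode JVM up, and is anything listening on port 9000?
# Assumes 'jps' (ships with the JDK) and 'netstat' are installed -- an
# assumption about this environment, not something shown in the question.
if command -v jps >/dev/null 2>&1; then
  jps | grep NameNode || echo "NameNode is not running; try bin/start-dfs.sh"
else
  echo "jps not found; is the JDK on your PATH?"
fi
if command -v netstat >/dev/null 2>&1; then
  netstat -tln 2>/dev/null | grep -w 9000 || echo "nothing listening on port 9000"
fi
```

If NameNode is not listed, starting the daemons (bin/start-dfs.sh in Hadoop 1.x) before retrying hadoop fs -ls is the usual first step.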
The output of formatting the Hadoop NameNode is:
[email protected]:~/programs/hadoop-1.2.0/bin$ hadoop namenode -format
14/03/19 14:11:22 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = deepak/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.2.0
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May 6 06:59:37 UTC 2013
STARTUP_MSG: java = 1.7.0_51
************************************************************/
14/03/19 14:11:22 INFO util.GSet: Computing capacity for map BlocksMap
14/03/19 14:11:22 INFO util.GSet: VM type = 32-bit
14/03/19 14:11:22 INFO util.GSet: 2.0% max memory = 932184064
14/03/19 14:11:22 INFO util.GSet: capacity = 2^22 = 4194304 entries
14/03/19 14:11:22 INFO util.GSet: recommended=4194304, actual=4194304
14/03/19 14:11:23 INFO namenode.FSNamesystem: fsOwner=deepak
14/03/19 14:11:23 INFO namenode.FSNamesystem: supergroup=supergroup
14/03/19 14:11:23 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/03/19 14:11:23 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/03/19 14:11:23 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/03/19 14:11:23 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/03/19 14:11:23 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/03/19 14:11:23 INFO common.Storage: Image file of size 112 saved in 0 seconds.
14/03/19 14:11:24 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-deepak/dfs/name/current/edits
14/03/19 14:11:24 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-deepak/dfs/name/current/edits
14/03/19 14:11:24 INFO common.Storage: Storage directory /tmp/hadoop-deepak/dfs/name has been successfully formatted.
14/03/19 14:11:24 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at deepak/127.0.1.1
************************************************************/
The default directory for HDFS data is the '/tmp/hadoop-$username' folder, which is wiped when the system restarts, so many of you will need to run '-format' again after every reboot. You can avoid this by pointing the HDFS default directory at some other location in 'core-site.xml'. – tuxdna
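Following tuxdna's comment, one way to pin the HDFS data directory outside /tmp is to set hadoop.tmp.dir in core-site.xml. A sketch for Hadoop 1.x; the path /home/deepak/hdfs-data is a hypothetical example location, not something from the question:

```xml
<!-- conf/core-site.xml (Hadoop 1.x sketch) -->
<configuration>
  <property>
    <!-- base directory for HDFS storage; the default is under /tmp,
         which is cleared on reboot. The path below is a hypothetical
         example -- pick any persistent directory the user can write to. -->
    <name>hadoop.tmp.dir</name>
    <value>/home/deepak/hdfs-data</value>
  </property>
  <property>
    <!-- the NameNode address the client on port 9000 is trying to reach -->
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

After changing this, run hadoop namenode -format once more (it will now write to the new directory) and restart the daemons; subsequent reboots should no longer require reformatting.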