2013-04-23

SecondaryNameNode UI status page: unable to open the web page

I have set up a small Hadoop cluster with 3 machines:

  1. Machine 1 (Hadoop1) runs both the NameNode and the JobTracker
  2. Machine 2 (Hadoop2) runs the SecondaryNameNode
  3. Machine 3 (Hadoop3) runs the DataNode and the TaskTracker together

Everything works fine when I check the log files. However, when I try to check the SecondaryNameNode's status by typing localhost:50090 on machine Hadoop2, it shows:

Unable to connect ....can't establish a connection to the server at localhost:50090. 

Has anyone run into this problem before?

Contents of hdfs-site.xml on the SNN:

<configuration> 
<property> 
<name>dfs.replication</name> 
<value>2</value> 
</property> 

<property> 
<name>dfs.http.address</name> 
<value>Hadoop1:50070</value> 
</property> 

<property> 
<name>dfs.secondary.http.address</name> 
<value>Hadoop2:50090</value> 
</property> 
</configuration> 
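With dfs.secondary.http.address set to Hadoop2:50090, the SNN's embedded web server binds to whatever address the hostname Hadoop2 resolves to, so browsing to localhost:50090 will generally not reach it, even on Hadoop2 itself. As a minimal sketch, the configured host:port can be read back out of the file like this (the /tmp path is purely illustrative; on a real node the file lives under the Hadoop conf directory):

```shell
# Recreate the relevant fragment of the SNN's hdfs-site.xml from the
# question (illustrative path, not the real conf directory).
cat > /tmp/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>Hadoop2:50090</value>
  </property>
</configuration>
EOF

# Pull out the host:port the SNN web UI is configured to listen on:
# find the <name> line, step to the next line, and strip the <value> tags.
addr=$(sed -n '/dfs.secondary.http.address/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}' /tmp/hdfs-site.xml)
echo "Browse to http://$addr/ rather than localhost:50090"
```

In other words, the address to type into the browser is the one in the `<value>` element, not localhost.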

Here is part of the SNN's run log:

2013-04-23 19:47:00,820 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0 
2013-04-23 19:47:00,987 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Downloaded file fsimage size 654 bytes. 
2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Downloaded file edits size 4 bytes. 
2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.util.GSet: VM type  = 64-bit 
2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB 
2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.util.GSet: capacity  = 2^21 = 2097152 entries 
2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152 
2013-04-23 19:47:00,998 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop 
2013-04-23 19:47:00,998 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup 
2013-04-23 19:47:00,998 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true 
2013-04-23 19:47:00,998 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100 
2013-04-23 19:47:00,999 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s) 
2013-04-23 19:47:00,999 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2013-04-23 19:47:00,999 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 7 
2013-04-23 19:47:01,000 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0 
2013-04-23 19:47:01,000 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /app/hadoop/tmp/dfs/namesecondary/current/edits of size 4 edits # 0 loaded in 0 seconds. 
2013-04-23 19:47:01,001 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0 
2013-04-23 19:47:01,049 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 saved in 0 seconds. 
2013-04-23 19:47:01,334 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 saved in 0 seconds. 
2013-04-23 19:47:01,570 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Posted URL Hadoop1:50070putimage=1&port=50090&machine=Hadoop3&token=-32:145975115:0:1366717621000:1366714020860 
2013-04-23 19:47:01,771 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint done. New Image Size: 654 
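The final lines above show the checkpoint completing successfully, so the SNN daemon itself is healthy; only the web UI is unreachable via localhost. (It may also be worth double-checking the `machine=Hadoop3` field in the Posted URL line, since the question says the SNN runs on Hadoop2.) If you want the UI to answer on localhost as well as on the hostname, one option (my suggestion, not something from the question) is to bind the SNN's HTTP server to all interfaces in hdfs-site.xml:

```
<!-- Sketch: bind the SNN web UI to every interface so that both
     http://localhost:50090 and http://Hadoop2:50090 work.
     Keeps port 50090 as in the original configuration. -->
<property> 
<name>dfs.secondary.http.address</name> 
<value>0.0.0.0:50090</value> 
</property> 
```

After changing this, the SecondaryNameNode must be restarted for the new binding to take effect.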

Answer


First, set a value for dfs.secondary.http.address in your SNN's hdfs-site.xml. Also, I assume no firewall is enabled between your machines? It would help if you could show your logs; I have occasionally seen users enter the wrong port number for the SNN, which shows up differently in their logs and leads to a connection error.


dfs.secondary.http.address is already set in the SNN's hdfs-site.xml; I have just posted the contents of hdfs-site.xml above. I have also posted the SNN's log, which shows that it is running fine and that the checkpoint completed successfully. – behe 2013-04-23 12:19:46