2013-05-06

HBase master stops due to "Connection refused" error

This happens in both pseudo-distributed and fully distributed mode. When I try to start HBase, all three services initially start: the master, the regionserver, and the quorumpeer. Within a minute, however, the master stops. This is the trace in the logs:

2013-05-06 20:10:25,525 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 0 time(s). 
2013-05-06 20:10:26,528 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 1 time(s). 
2013-05-06 20:10:27,530 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 2 time(s). 
2013-05-06 20:10:28,533 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 3 time(s). 
2013-05-06 20:10:29,535 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 4 time(s). 
2013-05-06 20:10:30,538 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 5 time(s). 
2013-05-06 20:10:31,540 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 6 time(s). 
2013-05-06 20:10:32,543 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 7 time(s). 
2013-05-06 20:10:33,544 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 8 time(s). 
2013-05-06 20:10:34,547 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 9 time(s). 
2013-05-06 20:10:34,550 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown. 
java.net.ConnectException: Call to <master/master_ip>:9000 failed on connection exception: java.net.ConnectException: Connection refused 
     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1179) 
     at org.apache.hadoop.ipc.Client.call(Client.java:1155) 
     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226) 
     at $Proxy9.getProtocolVersion(Unknown Source) 
     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398) 
     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384) 
     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:132) 
     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:259) 
     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:220) 
     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89) 
     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1611) 
     at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:68) 
     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1645) 
     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1627) 
     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254) 
     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:183) 
     at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:363) 
     at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:86) 
     at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:368) 
     at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:301) 
Caused by: java.net.ConnectException: Connection refused 
     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592) 
     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:519) 
     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:484) 
     at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:468) 
     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:575) 
     at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:212) 
     at org.apache.hadoop.ipc.Client.getConnection(Client.java:1292) 
     at org.apache.hadoop.ipc.Client.call(Client.java:1121) 
     ... 18 more 

I have taken the following steps to resolve this, without any success:

- Downgraded from distributed mode to pseudo-distributed mode. Same issue.
- Tried standalone mode. No luck.
- Used the same user (hadoop) for both Hadoop and HBase, with passwordless ssh set up for hadoop. Same issue.
- Edited the /etc/hosts file and changed localhost/servername and 127.0.0.1 to the actual IP address, following suggestions from SO and other sources. Still the same issue.
- Rebooted the server.
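Since the stack trace shows the master dying while contacting HDFS at `<master>:9000`, one thing worth confirming before anything else is that the NameNode is actually up and listening on that port. A minimal sketch (the `nn_up` helper and the `master`/`9000` address are illustrative, not from the original post):

```shell
#!/usr/bin/env bash
# Probe a TCP endpoint using bash's /dev/tcp pseudo-device.
# Returns 0 if a connection can be opened, non-zero otherwise
# (connection refused, unknown host, etc.).
nn_up() {
  local host=$1 port=$2
  # Open (and implicitly close) fd 3 against the endpoint inside a
  # subshell; any failure makes the subshell exit non-zero.
  (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

# Example: check the NameNode RPC port before starting HBase.
if nn_up master 9000; then
  echo "NameNode RPC port is reachable"
else
  echo "Connection refused - start HDFS (bin/start-dfs.sh) first"
fi
```

Running `jps` on the master node should also list a `NameNode` process; if it is missing, HMaster will fail exactly as in the log above.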

Here are the conf files.

hbase-site.xml

<configuration> 
<property> 
    <name>hbase.rootdir</name> 
    <value>hdfs://<master>:9000/hbase</value> 
     <description>The directory shared by regionservers.</description> 
</property> 

<property> 
     <name>hbase.cluster.distributed</name> 
     <value>true</value> 
</property> 

<property> 
     <name>hbase.zookeeper.quorum</name> 
     <value><master></value> 
</property> 

<property> 
     <name>hbase.master</name> 
     <value><master>:60000</value> 
     <description>The host and port that the HBase master runs at.</description> 
</property> 

<property> 
     <name>dfs.replication</name> 
     <value>1</value> 
     <description>The replication count for HLog and HFile storage. Should not be greater than HDFS datanode count.</description> 
</property> 

</configuration> 
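For `hbase.rootdir` above to work, the scheme, host, and port must exactly match `fs.default.name` in Hadoop's core-site.xml. A sketch of the matching entry (the `<master>` hostname is a placeholder, as in the question):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://<master>:9000</value>
  </property>
</configuration>
```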

/etc/hosts file

127.0.0.1   localhost.localdomain localhost
::1         localhost6.localdomain6 localhost6

What am I doing wrong here?

Hadoop version - hadoop-0.20.2-cdh3u5. HBase version - 0.90.6-cdh3u5.

Answers

5

Looking at your configuration files, I assume you are using the actual hostname in them. If that is the case, add the hostname along with the machine's IP address to the /etc/hosts file. Also make sure it matches the hostname in Hadoop's core-site.xml. Proper name resolution is vital for HBase to function correctly.
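As a sketch of the kind of /etc/hosts entry being suggested here (the IP address and the hostname `master` are placeholders, not values from the original post):

```
192.168.1.10   master
```

The point is that the cluster hostname should resolve to the machine's real IP, not to 127.0.0.1.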

If you still face any problems, follow the steps mentioned here. I have tried to explain the process in detail, and hopefully, if you perform all the steps carefully, you will be able to get it running.

HTH

+0

Thanks Tariq. I think in my case there was one problem - I was using the FQDN in one place (Hive) and only the short hostname in another (Hadoop). I made the server names consistent everywhere, changing everything to the short hostname. I also checked the hosts file, but that looks fine.. – Sumod 2013-05-08 13:57:25

+0

You are welcome, Sumod. I see.. indeed, proper name resolution is vital for the cluster to function correctly.. is it working now? – Tariq 2013-05-08 14:00:50

-1

I believe you are trying to use pseudo-distributed mode. I got the same error until I fixed 3 things:

  1. Fixed my local /etc/hosts file:

$ cat /etc/hosts

127.0.0.1 localhost 
255.255.255.255 broadcasthost 
::1    localhost 
fe80::1%lo0 localhost 
172.20.x.x my.hostname.com 
  2. Pointed to localhost, not the hostname, in hbase-env.sh.

  3. Corrected my classpath.

A. Make sure Hadoop is on the classpath (via hbase-env.sh):

    export JAVA_HOME=path to your java home
    export HADOOP_HOME=path to hadoop home
    export HBASE_HOME=path to hbase home
    export HBASE_CLASSPATH=path to hbase home/conf:path to hadoop home/conf

B. When running my program, I edited the following bash script from "HBase: The Definitive Guide" (bin/run.sh):

$ grep -v '#' bin/run.sh

bin=`dirname "$0"` 
bin=`cd "$bin">/dev/null; pwd` 

if [ $# = 0 ]; then 
    echo "usage: $(basename $0) <example-name>" 
    exit 1; 
fi 

MVN="mvn" 
if [ "$MAVEN_HOME" != "" ]; then 
    MVN=${MAVEN_HOME}/bin/mvn 
fi 

CLASSPATH="${HBASE_CONF_DIR}" 

if [ -d "${bin}/../target/classes" ]; then 
    CLASSPATH=${CLASSPATH}:${bin}/../target/classes 
fi 

cpfile="${bin}/../target/cached_classpath.txt" 
if [ ! -f "${cpfile}" ]; then 
    ${MVN} -f "${bin}/../pom.xml" dependency:build-classpath -Dmdep.outputFile="${cpfile}" &> /dev/null 
fi 
CLASSPATH=`hbase classpath`:${CLASSPATH}:`cat "${cpfile}"` 

JAVA_HOME=your path to java home 
JAVA=$JAVA_HOME/bin/java 
JAVA_HEAP_MAX=-Xmx512m 

echo "Classpath is $CLASSPATH" 
"$JAVA" $JAVA_HEAP_MAX -classpath "$CLASSPATH" "[email protected]" 

It is worth mentioning that I am using a Mac. I believe these instructions should work on Linux as well.