2017-07-01 29 views
0

I followed this tutorial to install HBase and Hadoop, but I ran into a problem: HBase cannot create its directory in HDFS.

Everything went fine until the last step:

HBase creates its directory in HDFS. To see the created directory, browse to the Hadoop bin directory and type the following command.

$ ./bin/hadoop fs -ls /hbase

If everything goes well, it will give you the following output:

Found 7 items
drwxr-xr-x - hbase users 0 2014-06-25 18:58 /hbase/.tmp

...

But when I run this command, I get: /hbase: No such file or directory
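To narrow this down, it helps to first check that HDFS itself is reachable and to see what actually exists at its root (a diagnostic sketch; the daemon names printed by jps assume a standard single-node setup):

./bin/hadoop fs -ls /    # should list the HDFS root without errors
jps                      # NameNode, DataNode and HMaster should all appear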

Here is my configuration.

Hadoop configuration

core-site.xml

<configuration> 
    <property> 
     <name>fs.defaultFS</name> 
     <value>hdfs://localhost:9000</value> 
    </property> 
</configuration> 
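(For reference, the NameNode address that is actually in effect can be checked with the stock HDFS CLI; a small sketch:)

hdfs getconf -confKey fs.defaultFS    # should print hdfs://localhost:9000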

hdfs-site.xml

<configuration> 
    <property> 
     <name>dfs.replication</name> 
     <value>1</value> 
    </property> 

    <property> 
     <name>dfs.name.dir</name> 
     <value>file:///home/marc/hadoopinfra/hdfs/namenode</value> 
    </property> 

    <property> 
     <name>dfs.data.dir</name> 
     <value>file:///home/marc/hadoopinfra/hdfs/datanode</value> 
    </property> 
</configuration> 

mapred-site.xml

<configuration> 
    <property> 
     <name>mapreduce.framework.name</name> 
     <value>yarn</value> 
    </property> 
</configuration> 

yarn-site.xml

<configuration> 
    <property> 
     <name>yarn.nodemanager.aux-services</name> 
     <value>mapreduce_shuffle</value> 
    </property> 
    <property> 
     <name>yarn.nodemanager.env-whitelist</name> 
     <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value> 
    </property> 
</configuration> 

HBase configuration

hbase-site.xml

<configuration> 
    <property> 
    <name>hbase.rootdir</name> 
    <value>hdfs://localhost:8030/hbase</value> 
</property> 
    <property> 
     <name>hbase.zookeeper.property.dataDir</name> 
     <value>/home/marc/zookeeper</value> 
    </property> 
    <property> 
     <name>hbase.cluster.distributed</name> 
     <value>true</value> 
    </property> 
</configuration> 

I can browse to http://localhost:50070 and http://localhost:8088/cluster.

How can I fix this?

EDIT

Based on Saurabh's answer, I created the hbase folder, but it stays empty.
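Since the directory stays empty, one quick check (a sketch, assuming HBase was started with the standard scripts) is whether the HMaster process is still running:

jps | grep HMaster    # no output here means the master has already shut down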

In hbase-marc-master-marc-pc.log, I have the following exception. Is it related?

2017-07-01 20:31:59,349 FATAL [marc-pc:16000.activeMasterManager] master.HMaster: Failed to become active master 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled. Available:[TOKEN] 
    at org.apache.hadoop.ipc.Client.call(Client.java:1411) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1364) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) 
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279) 
    at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source) 
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970) 
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525) 
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128) 
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693) 
    at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189) 
    at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803) 
    at java.lang.Thread.run(Thread.java:748) 
2017-07-01 20:31:59,351 FATAL [marc-pc:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown. 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled. Available:[TOKEN] 
    at org.apache.hadoop.ipc.Client.call(Client.java:1411) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1364) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) 
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279) 
    at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source) 
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970) 
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525) 
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128) 
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693) 
    at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189) 
    at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803) 
    at java.lang.Thread.run(Thread.java:748) 
+0

Your HDFS seems to be running on port 9000, while your hbase-site.xml is trying to connect to port 8030. –
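That is, hbase.rootdir must point at the same host and port as fs.defaultFS in core-site.xml. Based on the values posted in the question, the entry would look like this (a sketch, not a tested fix):

    <property> 
     <name>hbase.rootdir</name> 
     <value>hdfs://localhost:9000/hbase</value> 
    </property> 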

Answers

2

The log indicates that HBase had a problem becoming the active master, so it started to shut down.

My assumption is that HBase never started properly and therefore never created its own /hbase directory. That would also explain why the /hbase directory stays empty.

I reproduced your error on my virtual machine and fixed it with the modified setup below.


OS: CentOS Linux release 7.2.1511

Virtualization software: Vagrant, VirtualBox

Java

java -version 
openjdk version "1.8.0_131" 
OpenJDK Runtime Environment (build 1.8.0_131-b12) 
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode) 

core-site.xml (HDFS)

<configuration> 
    <property> 
     <name>fs.default.name</name> 
     <value>hdfs://localhost:8020</value> 
    </property> 
</configuration> 

hbase-site.xml (HBase)

<configuration> 
    <property> 
     <name>hbase.rootdir</name> 
     <value>hdfs://localhost:8020/hbase</value> 
    </property> 

    <property> 
     <name>hbase.zookeeper.property.dataDir</name> 
     <value>/home/hadoop/zookeeper</value> 
    </property> 

    <property> 
     <name>hbase.cluster.distributed</name> 
     <value>true</value> 
    </property> 
</configuration> 

Adjust directory owners and permissions

sudo su # Become root user 
cd /usr/local/ 

chown -R hadoop:root hadoop 
chmod -R 755 hadoop 

chown -R hadoop:root Hbase 
chmod -R 755 Hbase 
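
After changing the files above, both stacks need a restart so the new settings take effect (a sketch assuming the standard Hadoop sbin and HBase bin scripts are on the PATH):

stop-hbase.sh 
stop-dfs.sh 
start-dfs.sh 
start-hbase.sh 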

Result

With this setup, after HBase starts it automatically creates the /hbase directory and fills it with content.

[[email protected] conf]$ hdfs dfs -ls /hbase 
Found 7 items 
drwxr-xr-x - hadoop supergroup   0 2017-07-03 14:26 /hbase/.tmp 
drwxr-xr-x - hadoop supergroup   0 2017-07-03 14:26 /hbase/MasterProcWALs 
drwxr-xr-x - hadoop supergroup   0 2017-07-03 14:26 /hbase/WALs 
drwxr-xr-x - hadoop supergroup   0 2017-07-03 14:26 /hbase/data 
-rw-r--r-- 1 hadoop supergroup   42 2017-07-03 14:26 /hbase/hbase.id 
-rw-r--r-- 1 hadoop supergroup   7 2017-07-03 14:26 /hbase/hbase.version 
drwxr-xr-x - hadoop supergroup   0 2017-07-03 14:26 /hbase/oldWALs 
+0

I didn't set anything for hbase.security.authentication in that file. Is that normal? – Marc

+0

When I read the log, I get the impression that Hadoop is not set up for simple authentication – Marc
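(For reference: simple authentication is Hadoop's default, so it usually needs no entry at all. It can be pinned explicitly in core-site.xml with the standard hadoop.security.authentication property; a sketch:)

    <property> 
     <name>hadoop.security.authentication</name> 
     <value>simple</value> 
    </property> 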

+0

@Marc, I updated my answer. Hope it helps! –

1

Only the things that cannot be created automatically need to be set up by hand, so you need to create the directory in HDFS manually, as shown below:

hdfs dfs -mkdir /hbase
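If HBase runs as a different user than the one creating the directory, its ownership may also need adjusting (a sketch; the marc user name is taken from the paths in the question):

hdfs dfs -chown marc /hbase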

+0

Thanks, but the folder is still empty. See my update – Marc