
I recently started looking into NoSQL and big data, and decided to pursue them further. I have been trying to install and configure Hadoop and HBase on my Windows Server 2008 R2 64-bit machine, but unfortunately I keep failing, with a different error at every stage of the installation. I followed the tutorials below for installing and configuring Hadoop and HBase as a single-node cluster on Windows.

For Hadoop: http://blog.sqltrainer.com/2012/01/installing-and-configuring-apache.html
For HBase: http://ics.upjs.sk/~novotnyr/blog/334/setting-up-hbase-on-windows

First, when I run the jps command in the /usr/local/hadoop directory, I do not see the DataNode listed; the only entries are:

$ jps
3984 NameNode
6864 Jps
5972 JobTracker

However, when I browse to 127.0.0.1:50070, it comes up fine. But when I try to run the WordCount example job, it gets stuck for a long time at the point shown below and I have to restart the Cygwin terminal:

11/06/13 13:43:01 INFO mapred.JobClient: Running job: job_201005081732_0001
11/06/13 13:43:02 INFO mapred.JobClient:  map 0% reduce 0%

For the time being I ignored that and moved on to installing and configuring HBase on top of Hadoop. The installation itself went smoothly, but now when I run various commands in the hbase shell I get different errors. For example, if I run the list command, I get:

ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times

and if I run the scan 'test' command, I get:

ERROR: org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for test,,99999999999999 after 7 tries

I really do not know what to do; I have been searching for days but still cannot find an exact solution to these errors.

I would really appreciate help from you experts in getting Hadoop and HBase configured successfully.
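The logs below were pulled with commands along these lines (the paths assume the default log directory under /usr/local/hadoop; the actual file names include the owning user and the host name):

$ tail -n 100 /usr/local/hadoop/logs/hadoop-*-datanode-*.log
$ tail -n 100 /usr/local/hadoop/logs/hadoop-*-tasktracker-*.log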

Here is my DataNode log:

2013-06-11 14:21:16,703 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_3811235227329042813_1246 src: /127.0.0.1:51511 dest: /127.0.0.1:50010 
2013-06-11 14:21:16,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51511, dest: /127.0.0.1:50010, bytes: 142452, op: HDFS_WRITE, cliID: DFSClient_1741700406, offset: 0, srvID: DS-2-192.168.168.63-50010-1370448134624, blockid: blk_3811235227329042813_1246, duration: 8188439 
2013-06-11 14:21:16,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_3811235227329042813_1246 terminating 
2013-06-11 14:21:17,024 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-7864325777801075696_1247 src: /127.0.0.1:51512 dest: /127.0.0.1:50010 
2013-06-11 14:21:17,034 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51512, dest: /127.0.0.1:50010, bytes: 368, op: HDFS_WRITE, cliID: DFSClient_1741700406, offset: 0, srvID: DS-2-192.168.168.63-50010-1370448134624, blockid: blk_-7864325777801075696_1247, duration: 1775491 
2013-06-11 14:21:17,035 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-7864325777801075696_1247 terminating 
2013-06-11 14:21:17,135 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_8363548489446884759_1248 src: /127.0.0.1:51513 dest: /127.0.0.1:50010 
2013-06-11 14:21:17,145 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51513, dest: /127.0.0.1:50010, bytes: 77, op: HDFS_WRITE, cliID: DFSClient_1741700406, offset: 0, srvID: DS-2-192.168.168.63-50010-1370448134624, blockid: blk_8363548489446884759_1248, duration: 1461072 
2013-06-11 14:21:17,146 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_8363548489446884759_1248 terminating 
2013-06-11 14:21:17,481 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_2254833662532666780_1249 src: /127.0.0.1:51514 dest: /127.0.0.1:50010 
2013-06-11 14:21:17,493 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51514, dest: /127.0.0.1:50010, bytes: 20596, op: HDFS_WRITE, cliID: DFSClient_1741700406, offset: 0, srvID: DS-2-192.168.168.63-50010-1370448134624, blockid: blk_2254833662532666780_1249, duration: 2206535 
2013-06-11 14:21:17,494 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_2254833662532666780_1249 terminating 
2013-06-11 14:21:17,861 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:51516, bytes: 20760, op: HDFS_READ, cliID: DFSClient_-1869746926, offset: 0, srvID: DS-2-192.168.168.63-50010-1370448134624, blockid: blk_2254833662532666780_1249, duration: 3906454 
2013-06-11 14:21:18,234 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-2949992568769351385_1250 src: /127.0.0.1:51518 dest: /127.0.0.1:50010 
2013-06-11 14:21:18,244 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51518, dest: /127.0.0.1:50010, bytes: 106, op: HDFS_WRITE, cliID: DFSClient_-163790033, offset: 0, srvID: DS-2-192.168.168.63-50010-1370448134624, blockid: blk_-2949992568769351385_1250, duration: 1404625 
2013-06-11 14:21:18,245 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-2949992568769351385_1250 terminating 
2013-06-11 14:21:18,290 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:51519, bytes: 81, op: HDFS_READ, cliID: DFSClient_-1869746926, offset: 0, srvID: DS-2-192.168.168.63-50010-1370448134624, blockid: blk_8363548489446884759_1248, duration: 694149 
2013-06-11 14:22:00,557 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_3811235227329042813_1246 

And here is my TaskTracker log:

2013-06-11 12:33:27,223 INFO org.apache.hadoop.mapred.TaskTracker: STARTUP_MSG: 
/************************************************************ 
STARTUP_MSG: Starting TaskTracker 
STARTUP_MSG: host = WIN-UHHLG0L1912/192.168.168.63 
STARTUP_MSG: args = [] 
STARTUP_MSG: version = 1.0.4 
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012 
************************************************************/ 
2013-06-11 12:33:27,676 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 
2013-06-11 12:33:27,812 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered. 
2013-06-11 12:33:27,815 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 
2013-06-11 12:33:27,815 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: TaskTracker metrics system started 
2013-06-11 12:33:28,402 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered. 
2013-06-11 12:33:28,411 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists! 
2013-06-11 12:33:28,697 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 
2013-06-11 12:33:28,852 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter) 
2013-06-11 12:33:28,954 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1 
2013-06-11 12:33:28,963 INFO org.apache.hadoop.mapred.TaskTracker: Starting tasktracker with owner as cyg_server 
2013-06-11 12:33:28,965 INFO org.apache.hadoop.mapred.TaskTracker: Good mapred local directories are: /tmp/hadoop-cyg_server/mapred/local 
2013-06-11 12:33:28,982 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
2013-06-11 12:33:28,984 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.io.IOException: Failed to set permissions of path: \tmp\hadoop-cyg_server\mapred\local\taskTracker to 0755 
    at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:689) 
    at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:670) 
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509) 
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344) 
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189) 
    at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:723) 
    at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1459) 
    at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3742) 

2013-06-11 12:33:28,986 INFO org.apache.hadoop.mapred.TaskTracker: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down TaskTracker at WIN-UHHLG0L1912/192.168.168.63 
************************************************************/ 
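From searching around, this "Failed to set permissions of path" error appears to be a known issue with Hadoop 1.0.x on Windows (HADOOP-7682): the java.io.File permission setters report failure under Cygwin even when the directory ends up usable. One workaround people circulate is to relax the check in org.apache.hadoop.fs.FileUtil and rebuild hadoop-core-1.0.4.jar. A sketch of that patch, assuming FileUtil's existing commons-logging LOG field:

// src/core/org/apache/hadoop/fs/FileUtil.java (Hadoop 1.0.x)
// Replace the throw in checkReturnValue with a warning, so that
// TaskTracker startup survives the spurious failure on Windows.
private static void checkReturnValue(boolean rv, File p,
                                     FsPermission permission
                                     ) throws IOException {
  if (!rv) {
    // Log instead of aborting: the permission call "fails" on
    // Windows/Cygwin even though the directory is still usable.
    LOG.warn("Failed to set permissions of path: " + p +
             " to " + String.format("%04o", permission.toShort()));
  }
}

The rebuilt jar then replaces hadoop-core-1.0.4.jar in the Hadoop install directory. This is a workaround sketch, not an official fix.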

In core-site.xml I have:

<property> 
    <name>fs.default.name</name> 
    <value>hdfs://localhost:9000</value> 
</property> 

In hdfs-site.xml I have:

<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>

<property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/workspace/name_dir</value>
</property>

<property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/workspace/data_dir</value>
</property>

And in mapred-site.xml I have:

<property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
</property>

Thanks in advance,

Regards,

Salman


Showing us the logs would be helpful. Also, why are you doing this on Windows? It is always messy. – Tariq


Thanks for the quick response. Which logs should I show, given that Hadoop has five logs and HBase has three? Well, I have never worked on any operating system other than Windows, and fortunately Hadoop and HBase run on Windows too. But I agree that using Hadoop and HBase on Windows seems to be a hassle. – user2304819


OK... since multiple things seem to be frozen, let's start with the Hadoop daemons first and then move on to HBase. To begin with, show me your latest DataNode and TaskTracker logs, along with your configuration files. – Tariq

Answer


Create a directory, say /home/hadoop/workspace/temp_dir, and add the hadoop.tmp.dir property to your core-site.xml file with that directory as its value. Then change the permissions of /home/hadoop/workspace/data_dir and /home/hadoop/workspace/temp_dir to 755 and restart Hadoop.
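A sketch of that change (the paths are just the example names used above; adjust them to your layout). In core-site.xml:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/workspace/temp_dir</value>
</property>

And from the Cygwin shell, in the Hadoop install directory:

$ chmod 755 /home/hadoop/workspace/data_dir /home/hadoop/workspace/temp_dir
$ bin/stop-all.sh && bin/start-all.sh

Once HDFS and the TaskTracker come up cleanly, the HBase MasterNotRunningException usually goes away as well, since the master was most likely failing because it could not reach HDFS.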
