
When I try to load data from HDFS into HBase through a Hive logical table, I run into the problem below. I am new to Hadoop and cannot trace the error. I am using the CDH4 VM. Hive-HBase integration - error.

HBase table created via Hive:

CREATE TABLE hive_hbasetable(key int, value string) 
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val") 
TBLPROPERTIES ("hbase.table.name" = "hivehbasek1"); 

HBase shell output:

hbase(main):002:0> list 
TABLE 
hivebasek1 
mysql_cityclimate 

2 row(s) in 0.2470 seconds 
The new HBase table managed by Hive is listed above.
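
To confirm that the Hive table really points at this HBase table and that the cf1:val column mapping took effect, the table definition can be checked from the Hive side (a quick sketch; output not shown here):

-- Sketch: show the storage handler, the hbase.table.name table property and the
-- hbase.columns.mapping serde property recorded for hive_hbasetable (run in the Hive CLI).
DESCRIBE FORMATTED hive_hbasetable;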

The logical table hive_logictable created in Hive:

CREATE TABLE hive_logictable (foo INT, bar STRING) row format delimited fields terminated by ','; 

Data to be inserted into the logical table hive_logictable in Hive (contents of the file):

cat TextFile.txt 
100,value1 
101,value2 
102,value3 
103,value4 
104,value5 
105,value6 

LOAD DATA LOCAL INPATH '/home/cloudera/TextFile.txt' OVERWRITE INTO TABLE hive_logictable; 
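
Before writing to HBase, the load into the logical table can be verified with a quick query (a sketch, not part of the original session):

-- Sanity check (sketch): confirm the six rows landed in hive_logictable.
SELECT COUNT(*) FROM hive_logictable;
SELECT * FROM hive_logictable LIMIT 3;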

Loading the data into the HBase table through Hive:

INSERT OVERWRITE TABLE hive_hbasetable SELECT * FROM hive_logictable; 

Below is the error message that is thrown:

Total MapReduce jobs = 1 
Launching Job 1 out of 1 
Number of reduce tasks is set to 0 since there's no reduce operator 
Starting Job = job_201501200937_0004, Tracking URL = http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201501200937_0004 
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_201501200937_0004 
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0 
2015-01-20 10:38:07,412 Stage-0 map = 0%, reduce = 0% 
2015-01-20 10:38:52,822 Stage-0 map = 100%, reduce = 100% 
Ended Job = job_201501200937_0004 with errors 
Error during job, obtaining debugging information... 
Job Tracking URL: http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201501200937_0004 
Examining task ID: task_201501200937_0004_m_000002 (and more) from job job_201501200937_0004 

Task with the most failures(4): 
----- 
Task ID: 
    task_201501200937_0004_m_000000 

URL: 
    http://localhost.localdomain:50030/taskdetails.jsp?jobid=job_201501200937_0004&tipid=task_201501200937_0004_m_000000 
----- 
Diagnostic Messages for this Task: 
java.lang.RuntimeException: Error in configuring object 
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109) 
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75) 
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133) 
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:413) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332) 
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:396) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438) 
    at org.apache.hadoop.mapred.Child.main(Child.java:262) 
Caused by: java.lang.reflect.InvocationTargetException 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.ja 

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask 
MapReduce Jobs Launched: 
Job 0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL 
Total MapReduce CPU Time Spent: 0 msec 

End of the error message.

Answer


Could you check whether an atomic insert into the Hive table works fine, and share the result?
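
For example, a minimal single-row test along these lines (just a sketch, reusing the table names from the question) would show whether any write through the HBase storage handler succeeds:

-- Sketch: try pushing a single row through the HBase storage handler.
INSERT OVERWRITE TABLE hive_hbasetable
SELECT * FROM hive_logictable LIMIT 1;

-- If the single-row insert succeeds, read the row back through Hive.
SELECT * FROM hive_hbasetable;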