Hive collect_set crashes query

I have the following table:

hive> describe tv_counter_stats; 
OK 
day  string 
event string 
query_id  string 
userid string 
headers  string 

And I want to run the following query:

hive -e 'SELECT 
    day, 
    event, 
    query_id, 
    COUNT(1) AS count, 
    COLLECT_SET(userid) 
FROM 
    tv_counter_stats 
GROUP BY 
    day, 
    event, 
    query_id;' > counter_stats_data.csv 

However, the query fails, while the following query works fine:

hive -e 'SELECT 
    day, 
    event, 
    query_id, 
    COUNT(1) AS count 
FROM 
    tv_counter_stats 
GROUP BY 
    day, 
    event, 
    query_id;' > counter_stats_data.csv 

in which I removed the collect_set call. So my question is: does anyone know why collect_set might fail in this case?

UPDATE: The error message says:

Diagnostic Messages for this Task: 

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask 
MapReduce Jobs Launched: 
Job 0: Map: 3 Reduce: 1 Cumulative CPU: 10.49 sec HDFS Read: 109136387 HDFS Write: 0 FAIL 
Total MapReduce CPU Time Spent: 10 seconds 490 msec 

java.lang.Throwable: Child Error 
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:250) 
Caused by: java.io.IOException: Task process exit with nonzero status of 1. 
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:237) 

Error: GC overhead limit exceeded 
java.lang.Throwable: Child Error 
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:250) 
Caused by: java.io.IOException: Task process exit with nonzero status of 1. 
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:237) 

Error: GC overhead limit exceeded 

UPDATE 2: I changed the query so that it now looks like this:

hive -e ' 
SET mapred.child.java.opts="-server -Xmx1g -XX:+UseConcMarkSweepGC"; 
SELECT 
    day, 
    event, 
    query_id, 
    COUNT(1) AS count, 
    COLLECT_SET(userid) 
FROM 
    tv_counter_stats 
GROUP BY 
    day, 
    event, 
    query_id;' > counter_stats_data.csv 

However, I then get the following error:

Diagnostic Messages for this Task: 
java.lang.Throwable: Child Error 
     at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:250) 
Caused by: java.io.IOException: Task process exit with nonzero status of 1. 
     at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:237) 


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask 
MapReduce Jobs Launched: 
Job 0: Map: 3 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL 
Total MapReduce CPU Time Spent: 0 msec 

Could you add the failure message? –


OK, I added the error message – toom

Answers


It is probably a memory problem, since collect_set aggregates data in memory.

Try increasing the heap size and enabling concurrent GC (by setting the Hadoop mapred.child.java.opts option to e.g. -Xmx1g -XX:+UseConcMarkSweepGC).

This answer has more information about the "GC overhead limit" error.
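
For a single run, the options can be set at the top of the script. A minimal sketch, assuming the plain Hive CLI (the heap size is illustrative; note that double quotes inside a SET value become part of the value, so the options are written unquoted here):

hive -e '
-- pass JVM options to the map/reduce child tasks for this session only;
-- written without quotes, since Hive keeps quote characters literally
SET mapred.child.java.opts=-Xmx1g -XX:+UseConcMarkSweepGC;
SELECT
    day,
    event,
    query_id,
    COUNT(1) AS count,
    COLLECT_SET(userid)
FROM
    tv_counter_stats
GROUP BY
    day,
    event,
    query_id;' > counter_stats_data.csv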


Thx for the answer. I updated my question (update 2) – toom


I had the exact same problem and came across this question, so I figured I'd share the solution I found.

The underlying problem is most likely that Hive tries to do the aggregation on the mapper side, and the heuristics it uses to manage the in-memory hash maps get thrown off by data that is "wide and shallow", i.e. in your case, when there are very few userid values per day/event/query_id group.

I found an article that explains various ways to deal with this, but most of them are just optimizations on top of the full nuclear option: disabling map-side aggregation entirely.
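
For reference, these are the usual knobs such optimizations touch. A sketch with purely illustrative values (the setting names are standard Hive properties for map-side aggregation; defaults vary by version):

-- cap the share of mapper heap that the in-memory aggregation
-- hash map may use (0.5 is a common default)
SET hive.map.aggr.hash.percentmemory = 0.25;
-- number of input rows to process before checking whether the
-- hash map is actually reducing the data
SET hive.groupby.mapaggr.checkinterval = 100000;
-- if the hash map holds more than this fraction of the rows seen,
-- Hive abandons map-side aggregation for that mapper
SET hive.map.aggr.hash.min.reduction = 0.5;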

Using set hive.map.aggr = false; should do the trick.
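
Applied to the query from the question, the fix would look roughly like this:

hive -e '
-- disable map-side aggregation; grouping then happens entirely in the reducers
SET hive.map.aggr = false;
SELECT
    day,
    event,
    query_id,
    COUNT(1) AS count,
    COLLECT_SET(userid)
FROM
    tv_counter_stats
GROUP BY
    day,
    event,
    query_id;' > counter_stats_data.csv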