I am trying to insert into a Hive table with dynamic partitions. The same query ran fine over the past few days, but it is now failing with the error below. The Hive setting involved is hive.optimize.sort.dynamic.partition.
Diagnostic Messages for this Task: java.lang.RuntimeException:
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error:
Unable to deserialize reduce input key from
x1x128x0x0x46x234x240x192x148x1x68x69x86x50x0x1x128x0x104x118x1x128x0x0x46x234x240x192x148x1x128x0x0x25x1x128x0x0x46x1x128x0x0x72x1x127x255x255x255x0x0x0x0x1x71x66x80x0x255
with properties
{columns=reducesinkkey0,reducesinkkey1,reducesinkkey2,reducesinkkey3,reducesinkkey4,reducesinkkey5,reducesinkkey6,reducesinkkey7,reducesinkkey8,reducesinkkey9,reducesinkkey10,reducesinkkey11,reducesinkkey12,
serialization.lib=org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe,
serialization.sort.order=+++++++++++++,
columns.types=bigint,string,int,bigint,int,int,int,string,int,string,string,string,string}
at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:283)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
at org.apache.hadoop.mapred.Child$4.run(Child.java
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 3.33 sec HDFS Read: 889 HDFS Write: 314 SUCCESS
Stage-Stage-2: Map: 1 Reduce: 1 Cumulative CPU: 1.42 sec HDFS Read: 675 HDFS Write: 0 FAIL
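For context, the query is a plain dynamic-partition insert along these lines (a minimal sketch; the table and column names here are hypothetical, not the real ones):

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

-- hypothetical names: source_seq is the SequenceFile source,
-- target_rc is the RCFile target partitioned by ds;
-- the dynamic partition column goes last in the SELECT
INSERT OVERWRITE TABLE target_rc PARTITION (ds)
SELECT id, name, amount, ds
FROM source_seq;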
The query runs fine when I use the setting below:
set hive.optimize.sort.dynamic.partition=false
When I set this value to true, it fails with the same error again.
The source table is stored as SequenceFile and the target table as RCFile (definitions sketched below). Can anyone explain what this setting changes internally?
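A minimal sketch of the two table definitions, again with hypothetical table and column names:

-- hypothetical source table, stored as SequenceFile
CREATE TABLE source_seq (
  id BIGINT,
  name STRING,
  amount INT,
  ds STRING
)
STORED AS SEQUENCEFILE;

-- hypothetical target table, stored as RCFile and partitioned by ds
CREATE TABLE target_rc (
  id BIGINT,
  name STRING,
  amount INT
)
PARTITIONED BY (ds STRING)
STORED AS RCFILE;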
Similar post: http://stackoverflow.com/questions/32236798/hive-runtime-error-unable-to-deserialize-reduce-input-key – madhu
Yes, I went through that post. But is there any explanation of this setting's behavior? –