
Pivot table with Apache Pig

I would like to know whether it is possible to pivot a table in a single pass in Apache Pig.

Input:

Id Column1 Column2 Column3 
1  Row11 Row12 Row13 
2  Row21 Row22 Row23 

Output:

Id Name  Value 
1  Column1 Row11 
1  Column2 Row12 
1  Column3 Row13 
2  Column1 Row21 
2  Column2 Row22 
2  Column3 Row23 

The actual data has several dozen columns.

I can do this in a single pass with awk and run it via Hadoop Streaming, but most of my code is Apache Pig, so I would like to know whether it can be done efficiently in Pig.
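For reference (this is not part of the original question), a way to keep such an awk script while staying inside a Pig job would be the stream operator; the script name pivot.awk, the input path, and the output schema below are assumptions, not something stated above:

-- Sketch only: 'pivot.awk' is a hypothetical name for the existing awk script, 
-- 'pivot.txt' is an illustrative path, and the output schema is an assumption. 
define pivot_cmd `pivot.awk` ship('pivot.awk'); 
inpt    = load 'pivot.txt' as (Id:int, Column1:chararray, Column2:chararray, Column3:chararray); 
pivoted = stream inpt through pivot_cmd as (Id:int, Name:chararray, Value:chararray); 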

Answers


You can do it in two ways: 1. Write a UDF that returns a bag of tuples. This is the most flexible solution, but it requires Java code (a usage sketch follows after the results below); 2. Write a rigid script like this:

-- load the wide table 
inpt = load '/pig_fun/input/pivot.txt' as (Id, Column1, Column2, Column3); 
-- pack each (column name, value) pair into a tuple and collect the tuples in a bag per row 
bagged = foreach inpt generate Id, TOBAG(TOTUPLE('Column1', Column1), TOTUPLE('Column2', Column2), TOTUPLE('Column3', Column3)) as toPivot; 
-- the first FLATTEN produces one output row per (column name, value) tuple 
pivoted_1 = foreach bagged generate Id, FLATTEN(toPivot) as t_value; 
-- the second FLATTEN unwraps the inner tuple into separate Name and Value fields 
pivoted = foreach pivoted_1 generate Id, FLATTEN(t_value); 
dump pivoted; 

Running this script gave me the following result:

(1,Column1,11) 
(1,Column2,12) 
(1,Column3,13) 
(2,Column1,21) 
(2,Column2,22) 
(2,Column3,23) 
(3,Column1,31) 
(3,Column2,32) 
(3,Column3,33) 
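For option 1, the answer does not include the Java code, but the Pig side might look like the sketch below; myudfs.jar and myudfs.ToRows are hypothetical placeholders for a Java EvalFunc that returns a bag of (name, value) tuples, one per input column:

-- Sketch only: 'myudfs.jar' and 'myudfs.ToRows' are hypothetical names for a 
-- Java UDF returning a bag of (name, value) tuples, one per input column. 
register myudfs.jar; 
inpt    = load '/pig_fun/input/pivot.txt' as (Id, Column1, Column2, Column3); 
pivoted = foreach inpt generate Id, FLATTEN(myudfs.ToRows(Column1, Column2, Column3)) as (Name, Value); 
dump pivoted; 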

This answer goes in the other direction and turns the Id/Name/Value rows back into columns. In the sample input I removed Column3 for Id 1 to show how to handle optional (NULL) data.

Id Name  Value 
1  Column1 Row11 
1  Column2 Row12 
2  Column1 Row21 
2  Column2 Row22 
2  Column3 Row23 

--pigscript.pig

data1 = load 'data.txt' using PigStorage() as (id:int, key:chararray, value:chararray); 
grped = group data1 by id; 
pvt   = foreach grped { 
    -- within each id group, pick out the record for each column name 
    col1 = filter data1 by key == 'Column1'; 
    col2 = filter data1 by key == 'Column2'; 
    col3 = filter data1 by key == 'Column3'; 
    generate flatten(group) as id, 
        flatten(col1.value) as col1, 
        flatten(col2.value) as col2, 
        -- Column3 may be missing for an id, so substitute 'NULL' for an empty bag 
        flatten((IsEmpty(col3.value) ? {('NULL')} : col3.value)) as col3; 
}; 
dump pvt; 

Result:

(1,Row11,Row12,NULL)

(2,Row21,Row22,Row23)
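Not part of either original answer, but assuming the first answer's script is used to produce the long format, its output relation could be written out in the tab-separated id/key/value layout that data.txt above is expected to have:

-- Sketch only: write the first script's long-format rows so the reverse-pivot 
-- script above can load them; 'data.txt' is illustrative (Pig actually creates 
-- a directory of part files under that path). 
store pivoted into 'data.txt' using PigStorage('\t'); 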