2016-08-01 55 views
0

I have the sample data below. How can I split the data in an HDFS file into multiple directories?

One HDFS file containing the data with some key:

id name timestamp
1 Lorem 2013-01-01
2 Ipsum 2013-02-01
3 Ipsum 2013-03-01

Now I want to split the data into multiple directories in the format /data/YYYY/MM/DD, e.g. record 1 should go into the directory /data/2013/01/01.

Pig has the MultiStorage UDF, which can split the data into separate directories by a single field such as year, month, or date. Is there any way to split it into multiple directory levels?
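For reference, the single-level split I can already do looks roughly like this (the paths, jar location, and field index are just illustrative):

# Single-level MultiStorage split: every distinct value of field index 2
# (the timestamp) becomes one output subdirectory, e.g. /data/out/2013-01-01/
cat > single_split.pig <<'EOF'
REGISTER /usr/lib/pig/piggybank.jar; -- jar location varies per install
data = LOAD '/data/input' USING PigStorage(' ')
       AS (id:int, name:chararray, ts:chararray);
STORE data INTO '/data/out'
    USING org.apache.pig.piggybank.storage.MultiStorage('/data/out', '2', 'none', ' ');
EOF
pig single_split.pig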

Answers

0

You can create a Hive table partitioned on the timestamp column and store the data from Pig into it using HCatStorer.

That way you may not get exactly the directory names of your choice, but you will get the data split across multiple directories as you require.
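A rough sketch of that approach (untested; the table name, column layout, and delimiter are assumptions to adapt to your data; the -useHCatalog flag is available on distributions like CDH/HDP):

# Sketch: create a partitioned Hive table, then load it from Pig via
# HCatStorer with dynamic partitions (partition values taken from the data).
hive <<'EOF'
CREATE TABLE events (id INT, name STRING)
PARTITIONED BY (yr STRING, mon STRING, dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
STORED AS TEXTFILE;
EOF

cat > load_events.pig <<'EOF'
-- derive the partition columns from the timestamp field;
-- HCatStorer treats the trailing columns as dynamic partitions
data  = LOAD '/user/cloudera/test_dir' USING PigStorage(' ')
        AS (id:int, name:chararray, ts:chararray);
parts = FOREACH data GENERATE id, name,
        SUBSTRING(ts, 0, 4) AS yr,
        SUBSTRING(ts, 5, 7) AS mon,
        SUBSTRING(ts, 8, 10) AS dt;
STORE parts INTO 'default.events'
    USING org.apache.hive.hcatalog.pig.HCatStorer();
EOF
pig -useHCatalog load_events.pig

The data then lands under the table location with one directory per (yr, mon, dt) combination.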

2

You can choose from these three methods:

  1. You can write a shell script to do the job
  2. You can write a MapReduce job with a custom Partitioner class
  3. You can create a Hive partitioned table and apply partitions by year, month, and day, but then the directory names will carry a "partition_column_name=" prefix: /data/year=2016/month=01/date=07 (see the sketch just after this list)
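For option 3, a minimal sketch of such a table (the table and column names are assumptions; note that STORED AS TEXTFILE keeps the files in plain text):

# Sketch for option 3: Hive lays partitioned data out as
# /data/year=2016/month=01/dt=07 under the table location
# ("date" is a reserved word in newer Hive, hence "dt" here).
hive <<'EOF'
CREATE EXTERNAL TABLE my_data (id INT, name STRING)
PARTITIONED BY (year STRING, month STRING, dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
STORED AS TEXTFILE
LOCATION '/data';
EOF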

Let me know which approach you prefer, and I will update the answer with an example based on it.

Update with the shell script solution:

Given two input/source files with the same content in HDFS:

[[email protected] ~]$ hadoop fs -ls /user/cloudera/test_dir 
Found 2 items 
-rw-r--r-- 1 cloudera cloudera   79 2016-08-02 04:43 /user/cloudera/test_dir/test.file1 
-rw-r--r-- 1 cloudera cloudera   79 2016-08-02 04:43 /user/cloudera/test_dir/test.file2 

The shell script:

#!/bin/bash
# Assumes the source files are in HDFS; for local source files,
# change the paths and commands accordingly.
# If you do NOT want a header written to each target file,
# comment out the header-writing part of the script below.

src_file_path='/user/cloudera/test_dir'
trg_file_path='/user/cloudera/trgt_dir'

# list the source files: keep only the path column, drop the "Found N items" line
src_files=$(hadoop fs -ls ${src_file_path} | awk '{print $NF}' | grep -v items)

for src_file in $src_files
do
    echo "processing ${src_file} file..."

    while IFS= read -r line
    do
        # ignore the header line - it contains *id*
        if [[ $line != *"id"* ]]; then

            # the timestamp is the last field; split it into year/month/day
            DATE=$(echo $line | awk '{print $NF}')
            YEAR=$(echo $DATE | awk -F "-" '{print $1}')
            MONTH=$(echo $DATE | awk -F "-" '{print $2}')
            DAY=$(echo $DATE | awk -F "-" '{print $3}')
            file_name="file_${DATE}"

            # create the target directory if it does not exist yet
            hadoop fs -test -d ${trg_file_path}/$YEAR/$MONTH/$DAY
            if [ $? -ne 0 ]; then
                echo "dir not exist creating... ${trg_file_path}/$YEAR/$MONTH/$DAY"
                hadoop fs -mkdir -p ${trg_file_path}/$YEAR/$MONTH/$DAY
            fi

            # write the header once, when the target file does not exist yet
            hadoop fs -test -f ${trg_file_path}/$YEAR/$MONTH/$DAY/$file_name
            if [ $? -ne 0 ]; then
                echo "file not exist: creating header... ${trg_file_path}/$YEAR/$MONTH/$DAY/$file_name"
                echo "id name timestamp" | hadoop fs -appendToFile - ${trg_file_path}/$YEAR/$MONTH/$DAY/$file_name
            fi

            echo "writing line: '$line' to file: ${trg_file_path}/$YEAR/$MONTH/$DAY/$file_name"
            echo $line | hadoop fs -appendToFile - ${trg_file_path}/$YEAR/$MONTH/$DAY/$file_name
        fi
    done < <(hadoop fs -cat $src_file)
done

The manageFiles.sh script, run as:

[[email protected] ~]$ ./manageFiles.sh 
processing /user/cloudera/test_dir/test.file1 file... 
dir not exist creating... /user/cloudera/trgt_dir/2013/01/01 
file not exist: creating header... /user/cloudera/trgt_dir/2013/01/01/file_2013-01-01 
writing line: '1 Lorem 2013-01-01' to file: /user/cloudera/trgt_dir/2013/01/01/file_2013-01-01 
dir not exist creating... /user/cloudera/trgt_dir/2013/02/01 
file not exist: creating header... /user/cloudera/trgt_dir/2013/02/01/file_2013-02-01 
writing line: '2 Ipsum 2013-02-01' to file: /user/cloudera/trgt_dir/2013/02/01/file_2013-02-01 
dir not exist creating... /user/cloudera/trgt_dir/2013/03/01 
file not exist: creating header... /user/cloudera/trgt_dir/2013/03/01/file_2013-03-01 
writing line: '3 Ipsum 2013-03-01' to file: /user/cloudera/trgt_dir/2013/03/01/file_2013-03-01 
processing /user/cloudera/test_dir/test.file2 file... 
writing line: '1 Lorem 2013-01-01' to file: /user/cloudera/trgt_dir/2013/01/01/file_2013-01-01 
writing line: '2 Ipsum 2013-02-01' to file: /user/cloudera/trgt_dir/2013/02/01/file_2013-02-01 
writing line: '3 Ipsum 2013-03-01' to file: /user/cloudera/trgt_dir/2013/03/01/file_2013-03-01 

[[email protected] ~]$ hadoop fs -cat /user/cloudera/trgt_dir/2013/03/01/file_2013-03-01 
id name timestamp 
3 Ipsum 2013-03-01 
3 Ipsum 2013-03-01 
[[email protected] ~]$ 
+0

Thanks. The Hive approach looks fine; the only problem with using a Hive partitioned table is that it will store the files in ORC format, and I want to use text format. Could you also show me a shell script that can get the job done? –

+1

See the updated answer –
