
I need to parse the log file below. The script should treat everything from a timestamp such as 150324-21:06:32:937378 up to the start of the next timestamp as one record. I tried to parse the log file in Pig with a regular expression, using the library

org.apache.pig.piggybank.storage.MyRegExLoader 

to load records in this custom format.

150324-21:06:32:937378 [mod=STB, lvl=INFO ] 
    top - 21:06:33 up 3:41, 0 users, load average: 0.75, 0.95, 0.72 
    Tasks: 120 total, 3 running, 117 sleeping, 0 stopped, 0 zombie 
    Cpu(s): 21.8%us, 12.9%sy, 2.9%ni, 60.7%id, 0.0%wa, 0.0%hi, 1.7%si, 0.0%st 
    Mem: 317108k total, 232588k used, 84520k free, 25960k buffers 
    Swap:  0k total,  0k used,  0k free, 110820k cached 
     PID USER  PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND    
    19122 root  20 0 456m 72m 37m R 72 23.5 85:50.22 Receiver   
    5859 root  20 0 349m 9128 6948 S 15 2.9 22:42.88 rmfStreamer 
    150324-21:06:32:937378 [mod=STB, lvl=INFO ] 
    top - 21:06:33 up 3:41, 0 users, load average: 0.75, 0.95, 0.72 
    Tasks: 120 total, 3 running, 117 sleeping, 0 stopped, 0 zombie 
    Cpu(s): 21.8%us, 12.9%sy, 2.9%ni, 60.7%id, 0.0%wa, 0.0%hi, 1.7%si, 0.0%st 
    Mem: 317108k total, 232588k used, 84520k free, 25960k buffers 
    Swap:  0k total,  0k used,  0k free, 110820k cached 
     PID USER  PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND    
    19122 root  20 0 456m 72m 37m R 72 23.5 85:50.22 Receiver   
    5859 root  20 0 349m 9128 6948 S 15 2.9 22:42.88 rmfStreamer 

Here is the relevant snippet of the code I am using:

raw_logs = LOAD './main*/*top_log*' USING org.apache.pig.piggybank.storage.MyRegExLoader('(?m)(?s)\\d*-\\d{2}:\\d{2}:\\d{2}\\:\\d*.*') AS line:chararray;
DUMP raw_logs;

Here is my output:

(150325-05:47:26:253050 [mod=STB, lvl=INFO ]) 
(150325-05:57:27:294069 [mod=STB, lvl=INFO ]) 
(150325-06:07:28:235302 [mod=STB, lvl=INFO ]) 
(150325-06:17:29:124282 [mod=STB, lvl=INFO ]) 
(150325-06:27:30:036264 [mod=STB, lvl=INFO ]) 
(150325-06:37:30:941804 [mod=STB, lvl=INFO ]) 
(150325-06:47:31:909712 [mod=STB, lvl=INFO ]) 

It should instead be these two tuples:

(150324-21:06:32:937378 [mod=STB, lvl=INFO ] 
top - 21:06:33 up 3:41, 0 users, load average: 0.75, 0.95, 0.72 
Tasks: 120 total, 3 running, 117 sleeping, 0 stopped, 0 zombie 
Cpu(s): 21.8%us, 12.9%sy, 2.9%ni, 60.7%id, 0.0%wa, 0.0%hi, 1.7%si, 0.0%st 
Mem: 317108k total, 232588k used, 84520k free, 25960k buffers 
Swap:  0k total,  0k used,  0k free, 110820k cached 
    PID USER  PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND    
19122 root  20 0 456m 72m 37m R 72 23.5 85:50.22 Receiver   
5859 root  20 0 349m 9128 6948 S 15 2.9 22:42.88 rmfStreamer) 
(150324-21:06:32:937378 [mod=STB, lvl=INFO ] 
top - 21:06:33 up 3:41, 0 users, load average: 0.75, 0.95, 0.72 
Tasks: 120 total, 3 running, 117 sleeping, 0 stopped, 0 zombie 
Cpu(s): 21.8%us, 12.9%sy, 2.9%ni, 60.7%id, 0.0%wa, 0.0%hi, 1.7%si, 0.0%st 
Mem: 317108k total, 232588k used, 84520k free, 25960k buffers 
Swap:  0k total,  0k used,  0k free, 110820k cached 
    PID USER  PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND    
19122 root  20 0 456m 72m 37m R 72 23.5 85:50.22 Receiver   
5859 root  20 0 349m 9128 6948 S 15 2.9 22:42.88 rmfStreamer) 

Please let me know what regular expression I can use so that my script treats everything from one timestamp up to the start of the next timestamp as a single record.

Answers


Try the following regular expression with a capture group:

([0-9]{6}-[0-9]{2}:[0-9]{2}:[0-9]{2}:[0-9]+ \[mod=[\s\S]*)[0-9]{6}-[0-9]{2}:[0-9]{2}:[0-9]{2}:[0-9]+ \[mod= 
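As an illustrative sketch (not part of the original answer), here is how that pattern behaves with java.util.regex on a trimmed copy of the log from the question; group 1 ends up holding the first record:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RecordSplitDemo {
    public static void main(String[] args) {
        // A trimmed two-record sample taken from the log in the question.
        String log =
              "150324-21:06:32:937378 [mod=STB, lvl=INFO ]\n"
            + "top - 21:06:33 up 3:41, 0 users, load average: 0.75, 0.95, 0.72\n"
            + "Tasks: 120 total, 3 running, 117 sleeping, 0 stopped, 0 zombie\n"
            + "150324-21:06:32:937378 [mod=STB, lvl=INFO ]\n"
            + "top - 21:06:33 up 3:41, 0 users, load average: 0.75, 0.95, 0.72\n";

        // The suggested pattern: group 1 captures a timestamp line plus
        // everything after it, up to the start of the next timestamp line.
        Pattern p = Pattern.compile(
              "([0-9]{6}-[0-9]{2}:[0-9]{2}:[0-9]{2}:[0-9]+ \\[mod=[\\s\\S]*)"
            + "[0-9]{6}-[0-9]{2}:[0-9]{2}:[0-9]{2}:[0-9]+ \\[mod=");

        Matcher m = p.matcher(log);
        if (m.find()) {
            System.out.println(m.group(1)); // prints the first record only
        }
    }
}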

Thank you. I tried it, but it does not work; I do not get any matches with the above expression. – evoluguy


Please see this link: https://regex101.com/r/mD7mE9/1 –


I don't think this is possible with Pig alone. You will need a custom record reader that uses a regular expression to split the file on the timestamp lines that begin each record.

I hope the link below will help you write one: https://hadoopi.wordpress.com/2013/05/31/custom-recordreader-processing-string-pattern-delimited-records/

You may need to adjust some of its logic to pick up the timestamp on each line:

if (m.matches()) {
    // Record delimiter: the current line is a timestamp, so stop here
    delimiterString = tmp;
    break;
} else {
    // Append the current line (plus the pending delimiter) to the record
    text.append(EOL.getBytes(), 0, EOL.getLength());
    text.append(tmp.getBytes(), 0, tmp.getLength());
    text.append(delimiterString.getBytes(), 0, delimiterString.getLength());
}

As a result, the output will look like the following:

top - 02:10:39 up 0 min, 0 users, load average: 2.26, 0.54, 0.18150323-02:10:37:619962 [mod=STB, lvl=INFO ]
Tasks: 133 total, 6 running, 127 sleeping, 0 stopped, 0 zombie150323-02:10:37:619962 [mod=STB, lvl=INFO ]


The regex would be this.delimiterRegex = "^[0-9]{6}-[0-9]{2}:[0-9]{2}:[0-9]{2}:[0-9]{6}.*$"; –
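As a quick sanity check, here is a small standalone sketch (not from the original thread) showing that this delimiter regex matches only the timestamp header lines of the log in the question, which is what the record reader needs in order to split records:

import java.util.regex.Pattern;

public class DelimiterCheck {
    public static void main(String[] args) {
        // The delimiter regex suggested in the comment above.
        Pattern delimiter = Pattern.compile(
            "^[0-9]{6}-[0-9]{2}:[0-9]{2}:[0-9]{2}:[0-9]{6}.*$");

        // Matches a timestamp header line, which starts a new record ...
        System.out.println(delimiter.matcher(
            "150324-21:06:32:937378 [mod=STB, lvl=INFO ]").matches());   // true

        // ... but not an ordinary top output line inside a record.
        System.out.println(delimiter.matcher(
            "Tasks: 120 total, 3 running, 117 sleeping, 0 stopped, 0 zombie").matches()); // false
    }
}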