I want to parse multi-line data from a log file. I've tried both the multiline codec and the multiline filter, but neither works for me. Logstash 1.4.1 multiline codec not working.
The log data:
INFO 2014-06-26 12:34:42,881 [4] [HandleScheduleRequests] Request Entity:
User Name : user
DLR : 04
Text : string
Interface Type : 1
Sender : sdr
DEBUG 2014-06-26 12:34:43,381 [4] [HandleScheduleRequests] Entitis is : 1 System.Exception
And this is the config file:
input {
  file {
    type => "cs-bulk"
    path => [
      "/logs/bulk/*.*"
    ]
    start_position => "beginning"
    sincedb_path => "/logstash-1.4.1/bulk.sincedb"
    codec => multiline {
      pattern => "^%{LEVEL4NET}"
      what => "previous"
      negate => true
    }
  }
}
output {
  stdout { codec => rubydebug }
  if [type] == "cs-bulk" {
    elasticsearch {
      host => "localhost"
      index => "cs-bulk"
    }
  }
}
filter {
  if [type] == "cs-bulk" {
    grok {
      match => { "message" => "%{LEVEL4NET:level} %{TIMESTAMP_ISO8601:time} %{THREAD:thread} %{LOGGER:method} %{MESSAGE:message}" }
      overwrite => ["message"]
    }
  }
}
And this is what I get when Logstash parses the multi-line event. It only captures the first line and tags the event as multiline; the remaining lines are not parsed!
{
"@timestamp" => "2014-06-27T16:27:21.678Z",
"message" => "Request Entity:",
"@version" => "1",
"tags" => [
[0] "multiline"
],
"type" => "cs-bulk",
"host" => "lab",
"path" => "/logs/bulk/22.log",
"level" => "INFO",
"time" => "2014-06-26 12:34:42,881",
"thread" => "[4]",
"method" => "[HandleScheduleRequests]"
}
I tried the multiline filter, but it doesn't work. When I remove the grok filter, it works! I'm trying to figure out how to make the two filters work together. –
I believe the problem you're facing now is that the multiline input no longer matches grok, because grok doesn't handle newlines well. I saw a suggestion on another Q/A recommending that you use `gsub` to replace `\n` with a space before running grok. – Alcanzar
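The `gsub` idea above could be sketched roughly like this (a minimal sketch, assuming the custom patterns `LEVEL4NET`, `THREAD`, `LOGGER`, and `MESSAGE` are defined in your patterns directory, as in the original config): a `mutate` filter folds the newlines in the assembled multiline message into spaces before grok runs, so grok sees a single line.

```
filter {
  if [type] == "cs-bulk" {
    # Fold the newlines produced by the multiline codec into spaces,
    # since grok patterns do not match across line breaks by default.
    mutate {
      gsub => ["message", "\n", " "]
    }
    grok {
      match => { "message" => "%{LEVEL4NET:level} %{TIMESTAMP_ISO8601:time} %{THREAD:thread} %{LOGGER:method} %{MESSAGE:message}" }
      overwrite => ["message"]
    }
  }
}
```

Alternatively, prefixing the grok pattern with `(?m)` makes `.` match newlines as well, which can let a trailing catch-all pattern span the extra lines without rewriting the message.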