I have an ELK stack with Redis. The scheme is: logstash -> Redis -> logstash (indexer) -> elasticsearch -> kibana.
The Logstash indexer fetches data from Redis and pushes it to Elasticsearch:
input {
  redis {
    host      => "redis"
    type      => "redis-input"
    data_type => "list"
    key       => "logstash"
  }
}

filter {
  geoip {
    source    => "ipaddr"
    target    => "geoip"
    database  => "/GeoLiteCity.dat"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
  mutate {
    remove_field => [ "message", "@version", "timestamp" ]
    convert      => { "[geoip][coordinates]" => "float" }
  }
}

output {
  elasticsearch {
    template           => "/typing-template.json"
    template_overwrite => true
    hosts              => [ "elasticsearch:9200" ]
  }
}
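For context, adding the same field twice appends, so [geoip][coordinates] ends up as a [longitude, latitude] array, which is the order a geo_point mapping expects; the mutate then casts both members to float. A walkthrough on invented values (and on the assumption, since the file is not shown, that /typing-template.json maps the field as geo_point):

# hypothetical event after the geoip and mutate filters, values invented:
#   ipaddr               => "93.184.216.34"
#   [geoip][longitude]   => -71.2
#   [geoip][latitude]    => 42.7
#   [geoip][coordinates] => [ -71.2, 42.7 ]   # [lon, lat], both floats
# assumption: /typing-template.json maps [geoip][coordinates] as geo_point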
There are 4 servers whose logs I want to collect. Here is their logstash conf:
input {
  file {
    path => [ "C:/Program Files (x86)/*/logs/*.log",
              "C:/Program Files (x86)/**/logs/*.log",
              "C:/Program Files/***/logs/*.log",
              "C:/Program Files/****/logs/*.log" ]
    start_position => "beginning"
    type           => "mtdclog"
    ignore_older   => 0
    sincedb_path   => "NUL"
  }
}

filter {
  grok { match => { "path" => "%{GREEDYDATA}/(?<logdate>[0-9]{8})\.log" } }
  grok {
    match => [ "message", "%{NONNEGINT:log_stream}\t%{TIME:logtime}\s{1,2}%{IPV4:ipaddr}\t'%{NUMBER:account}': (?<event>login) \[ver: (?<client_build>[0-9\.]+)",
               "message", "%{NONNEGINT:log_stream}\t%{TIME:logtime}\s{1,2}%{IPV4:ipaddr}\t'%{NUMBER:account}': (?<event>liveupdate) '%{GREEDYDATA:data}'",
               "message", "%{NONNEGINT:log_stream}\t%{TIME:logtime}\s{1,2}%{IPV4:ipaddr}\t'%{NUMBER:account}': (?<event>check version)%{GREEDYDATA:data}",
               "message", "%{NONNEGINT:log_stream}\t%{TIME:logtime}\s{1,2}%{IPV4:ipaddr}\t'%{NUMBER:account}': %{GREEDYDATA:data}",
               "message", "%{NONNEGINT:log_stream}\t%{TIME:logtime}\s{1,2}(?<event>News):%{GREEDYDATA:data}",
               "message", "%{NONNEGINT:log_stream}\t%{TIME:logtime}\s{1,2}%{IPV4:ipaddr}\t(?<event>unknown command) (?<command_code>[A-Z0-9]+)",
               "message", "%{NONNEGINT:log_stream}\t%{TIME:logtime}\s{1,2}(?<event>History):%{GREEDYDATA:data}",
               "message", "%{NONNEGINT:log_stream}\t%{TIME:logtime}\s{1,2}%{GREEDYDATA:log_line}",
               "message", "%{GREEDYDATA:log_line}"
             ]
  }
  mutate {
    add_field    => { "ts" => "%{logdate} %{logtime}" }
    remove_field => [ "logdate", "logtime" ]
  }
  date {
    match  => [ "ts", "YYYYMMdd HH:mm:ss.SSS" ]
    target => "@timestamp"
  }
  if [path] =~ "Pattern1" { mutate { add_field => { "dc_type" => "Pattern1" } } }
  if [path] =~ "Pattern2" { mutate { add_field => { "dc_type" => "Pattern2" } } }
  mutate {
    remove_field => [ "message", "@version", "ts", "path", "host" ]
    add_field    => { "location" => "somecity" }
    convert      => { "log_stream"   => "integer"
                      "client_build" => "integer"
                      "account"      => "integer" }
  }
}
output {
  redis {
    host      => "xxx.yyy.zzz.aaa"
    port      => "6381"
    data_type => "list"
    key       => "logstash"
  }
}
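To make the filter chain above concrete, here is how one hypothetical file and line would be parsed (path and message invented for illustration; real entries may differ):

# hypothetical input, for illustration only:
#   path    => "C:/Program Files/SomeApp/logs/20170115.log"
#   message => "3\t12:34:56.789  10.1.2.3\t'12345': login [ver: 1.0.52]"
#
# path grok:    logdate = "20170115"
# message grok: log_stream = "3", logtime = "12:34:56.789",
#               ipaddr = "10.1.2.3", account = "12345",
#               event = "login", client_build = "1.0.52"
# mutate:       ts = "20170115 12:34:56.789"
# date:         ts parsed with "YYYYMMdd HH:mm:ss.SSS" into @timestamp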
Task: I want to process 1 month's worth of old logs. That is about 35 MB of log files per server, so around 140 MB in total for the 4 servers, which is not that much.

Problem: I start the logstash service and everything is fine; it works for 4-5 hours. I can see the parsed data in Kibana and can work with it. But then Elasticsearch goes down. The message is "Request Timeout after 30000ms".

I am using the same ELK stack with other servers and logstash configs, and there it works and processes far more log lines. But I cannot understand what the trouble is in this case.
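My only guess so far (an assumption, not something I have confirmed) is that the indexer hammers Elasticsearch with large bulk requests while the 140 MB backlog drains, and the node stops answering within Kibana's 30-second limit. In the Logstash 2.x-era plugins this config suggests (GeoLiteCity.dat, flush_size still supported), the knobs that would bound that pressure look like this:

input {
  redis {
    host        => "redis"
    data_type   => "list"
    key         => "logstash"
    batch_count => 10          # assumption: take smaller batches off the Redis list
  }
}
output {
  elasticsearch {
    hosts           => [ "elasticsearch:9200" ]
    flush_size      => 500     # assumption: smaller bulk requests to Elasticsearch
    idle_flush_time => 5       # flush at least every 5 seconds
  }
}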