
Duplicated log events when deleting the registry

I am currently running a PoC ELK installation and, for testing purposes, want to resend every log line of a file that Filebeat has already registered.

Here is what I did (a rough shell equivalent is sketched after the list):

  1. I stopped Filebeat
  2. I deleted the Logstash index via Kibana
  3. I deleted the Filebeat registry file
  4. I started Filebeat
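
For reference, a minimal shell sketch of those steps, assuming Filebeat runs as a systemd service, Elasticsearch listens on localhost:9200, and the Logstash output writes to a logstash-* index (adjust the names to your setup):

    # stop Filebeat (assuming a systemd service)
    sudo systemctl stop filebeat

    # delete the index; the steps above did this through Kibana,
    # this curl call is the rough equivalent (index pattern is an assumption)
    curl -XDELETE 'http://localhost:9200/logstash-*'

    # delete the Filebeat registry file (path as reported in the Filebeat log below)
    sudo rm /var/lib/filebeat/registry

    # start Filebeat again
    sudo systemctl start filebeat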

In Kibana I can see that there are twice as many events as log lines, and that every event is duplicated exactly once.

Why is that?

Filebeat log:

2017-05-05T14:25:16+02:00 INFO Setup Beat: filebeat; Version: 5.2.2 
2017-05-05T14:25:16+02:00 INFO Max Retries set to: 3 
2017-05-05T14:25:16+02:00 INFO Activated logstash as output plugin. 
2017-05-05T14:25:16+02:00 INFO Publisher name: anonymized 
2017-05-05T14:25:16+02:00 INFO Flush Interval set to: 1s 
2017-05-05T14:25:16+02:00 INFO Max Bulk Size set to: 2048 
2017-05-05T14:25:16+02:00 INFO filebeat start running. 
2017-05-05T14:25:16+02:00 INFO No registry file found under: /var/lib/filebeat/registry. Creating a new registry file. 
2017-05-05T14:25:16+02:00 INFO Loading registrar data from /var/lib/filebeat/registry 
2017-05-05T14:25:16+02:00 INFO States Loaded from registrar: 0 
2017-05-05T14:25:16+02:00 INFO Loading Prospectors: 1 
2017-05-05T14:25:16+02:00 INFO Prospector with previous states loaded: 0 
2017-05-05T14:25:16+02:00 INFO Loading Prospectors completed. Number of prospectors: 1 
2017-05-05T14:25:16+02:00 INFO All prospectors are initialised and running with 0 states to persist 
2017-05-05T14:25:16+02:00 INFO Starting Registrar 
2017-05-05T14:25:16+02:00 INFO Start sending events to output 
2017-05-05T14:25:16+02:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s 
2017-05-05T14:25:16+02:00 INFO Starting prospector of type: log 
2017-05-05T14:25:16+02:00 INFO Harvester started for file: /some/where/anonymized.log 
2017-05-05T14:25:46+02:00 INFO Non-zero metrics in the last 30s: registrar.writes=2 libbeat.logstash.publish.read_bytes=54 libbeat.logstash.publish.write_bytes=32390 libbeat.logstash.published_and_acked_events=578 filebeat.harvester.running=1 registar.states.current=1 libbeat.logstash.call_count.PublishEvents=1 libbeat.publisher.published_events=578 publish.events=579 filebeat.harvester.started=1 registrar.states.update=579 filebeat.harvester.open_files=1 
2017-05-05T14:26:16+02:00 INFO No non-zero metrics in the last 30s 

Answer


The problem is caused by deleting the registry file.

Filebeat keeps track of the state of each file and of the acknowledgement of each event, both in the prospector (in memory) and in the registry file (persisted to disk).
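
For illustration only, the registry is a plain JSON file on disk; on a 5.x installation an entry looks roughly like the following (all values below are made up):

    $ cat /var/lib/filebeat/registry
    [{"source":"/some/where/anonymized.log","offset":12345,"timestamp":"2017-05-05T14:25:46+02:00","ttl":-1,"FileStateOS":{"inode":123456,"device":2049}}]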

Please read the documentation here.

You can manage the _id field of each event yourself, so that any event that gets duplicated (for whatever reason, even in production) will not be stored twice in Elasticsearch but will update the existing event instead.

Create the following configuration in your Logstash pipeline configuration file.

# if your logs don't have a unique ID, use the fingerprint filter to generate one
filter {
  fingerprint {
    # hash the message field, or choose other field(s) that give you a unique ID
    source => ["message"]
    target => "LogID"
    key => "something"
    method => "MD5"
    concatenate_sources => true
  }
}

# in your output section, use that fingerprint as the Elasticsearch document ID
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    document_id => "%{LogID}"
    index => "yourindex"
  }
}
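
To verify that this works, resend the same lines and check that the document count of the index does not grow; a quick check, assuming the yourindex name from the snippet above:

    curl 'http://localhost:9200/yourindex/_count?pretty'

Because the document_id is deterministic for identical source fields, a resent line overwrites the existing document instead of creating a duplicate.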