
I noticed that the @timestamp field, which is correctly set by filebeat, is automatically changed by logstash, its value being replaced with the log's own timestamp value (field name a_timeStamp). Here is the part of the logstash debug log showing Logstash changing the original @timestamp value received from filebeat:

[2017-07-18T11:55:03,598][DEBUG][logstash.pipeline] filter received {"event"=>{"@timestamp"=>2017-07-18T09:54:53.507Z, "offset"=>498, "@version"=>"1", "input_type"=>"log", "beat"=>{"hostname"=>"centos-ea", "name"=>"filebeat_shipper_kp", "version"=>"5.5.0"}, "host"=>"centos-ea", "source"=>"/home/elastic/ELASTIC_NEW/log_bw/test.log", "message"=>"2017-06-05 19:02:46 INFO [bwEngThread:In-Memory Process Worker-4] psg.logger - a_applicationName=\"PieceProxy\", a_processName=\"piece.PieceProxy\", a_jobId=\"bw0a10ao\", a_processInstanceId=\"bw0a10ao\", a_level=\"Info\", a_phase=\"ProcessStart\", a_activityName=\"SetAndLog\", a_timeStamp=\"2017-06-05T19:02:46.779\", a_sessionId=\"\", a_sender=\"PCS\", a_cruid=\"37d7e225-bbe5-425b-8abc-f4b44a5a1560\", a_MachineCode=\"CFDM7757\", a_correlationId=\"fa10f\", a_trackingId=\"9d3b8\", a_message=\"START=piece.PieceProxy\"", "type"=>"log", "tags"=>["beats_input_codec_plain_applied"]}}

[2017-07-18T11:55:03,629][DEBUG][logstash.pipeline] output received {"event"=>{"a_message"=>"START=piece.PieceProxy", "log"=>"INFO", "bwthread"=>"[bwEngThread:In-Memory Process Worker-4]", "logger"=>"psg.logger", "a_correlationId"=>"fa10f", "source"=>"/home/elastic/ELASTIC_NEW/log_bw/test.log", "a_trackingId"=>"9d3b8", "type"=>"log", "a_sessionId"=>"\"\"", "a_sender"=>"PCS", "@version"=>"1", "beat"=>{"hostname"=>"centos-ea", "name"=>"filebeat_shipper_kp", "version"=>"5.5.0"}, "host"=>"centos-ea", "a_level"=>"Info", "a_processName"=>"piece.PieceProxy", "a_cruid"=>"37d7e225-bbe5-425b-8abc-f4b44a5a1560", "a_activityName"=>"SetAndLog", "offset"=>498, "a_MachineCode"=>"CFDM7757", "input_type"=>"log", "message"=>"INFO [bwEngThread:In-Memory Process Worker-4] psg.logger - a_applicationName=\"PieceProxy\", a_processName=\"piece.PieceProxy\", a_jobId=\"bw0a10ao\", a_processInstanceId=\"bw0a10ao\", a_level=\"Info\", a_phase=\"ProcessStart\", a_activityName=\"SetAndLog\", a_timeStamp=\"2017-06-05T19:02:46.779\", a_sessionId=\"\", a_sender=\"PCS\", a_cruid=\"37d7e225-bbe5-425b-8abc-f4b44a5a1560\", a_MachineCode=\"CFDM7757\", a_correlationId=\"fa10f\", a_trackingId=\"9d3b8\", a_message=\"START=piece.PieceProxy\"", "a_phase"=>"ProcessStart", "tags"=>["beats_input_codec_plain_applied", "_dateparsefailure", "kv_ok", "taskStarted"], "a_processInstanceId"=>"bw0a10ao", "@timestamp"=>2017-06-05T17:02:46.779Z, "my_index"=>"bw_logs", "a_timeStamp"=>"2017-06-05T19:02:46.779", "a_jobId"=>"bw0a10ao", "a_applicationName"=>"PieceProxy", "TMS"=>"2017-06-05 19:02:46.779"}}

NB:

  1. I noticed that this does not happen with a simple pipeline (without grok, kv, and the other plugins I use in my custom pipeline).
  2. I changed filebeat's json.overwrite_keys property to TRUE, but with no success.

Can you explain why and how this change to @timestamp happens? I did not expect it to happen automatically (I have seen many people asking how to do it), since @timestamp is a system field. What is going wrong here?

Here is my pipeline:

input { 
    beats { 
     port => "5043" 
     type => json 
    } 
} 
filter {  
     #date { 
     # match => [ "@timestamp", "ISO8601" ] 
     # target => "@timestamp" 
     #} 

    if "log_bw" in [source] { 
       grok { 
        patterns_dir => ["/home/elastic/ELASTIC_NEW/logstash-5.5.0/config/patterns/extrapatterns"] 
        match => { "message" => "%{CUSTOM_TMS:TMS}\s*%{CUSTOM_LOGLEVEL:log}\s*%{CUSTOM_THREAD:bwthread}\s*%{CUSTOM_LOGGER:logger}-%{CUSTOM_TEXT:text}" }  
        tag_on_failure => ["no_match"] 
       } 

       if "no_match" not in [tags] { 

        if "Payload for Request is" in [text] { 

         mutate { 
          add_field => { "my_index" => "json_request" } 
         }          

         grok { 
          patterns_dir => ["/home/elastic/ELASTIC_NEW/logstash-5.5.0/config/patterns/extrapatterns"] 
          match => { "text" => "%{CUSTOM_JSON:json_message}" } 
         } 

         json { 
          source => "json_message" 
          tag_on_failure => ["errore_parser_json"] 
          target => "json_request" 
         } 

         mutate { 
          remove_field => [ "json_message", "text" ] 
         } 
        } 
        else { 

         mutate { 
          add_field => { "my_index" => "bw_logs" } 
         } 

         kv { 
          source => "text" 
          trim_key => "\s" 
          field_split => "," 
          add_tag => [ "kv_ok" ] 
         } 

         if "kv_ok" not in [tags] { 
          drop { } 
         } 

         else { 

          mutate { 
           remove_field => [ "text" ] 
          } 

          if "ProcessStart" in [a_phase] { 
           mutate { 
            add_tag => [ "taskStarted" ] 
           } 
          } 

          if "ProcessEnd" in [a_phase] { 
           mutate { 
            add_tag => [ "taskTerminated" ] 
           } 
          } 

          date { 
           match => [ "a_timeStamp", "yyyy'-'MM'-'dd'T'HH:mm:ss.SSS" ] 
          } 

          elapsed { 
           start_tag => "taskStarted" 
           end_tag => "taskTerminated" 
           unique_id_field => "a_cruid" 
          } 
         } 
        }  
       } 
    } 
    else { 

     mutate { 
      add_field => { "my_index" => "other_products" } 
     } 
    } 
} 
output { 

     elasticsearch { 
      index => "%{my_index}" 
      hosts => ["localhost:9200"] 
     } 

     stdout { codec => rubydebug } 

     file { 
      path => "/tmp/loggata.tx" 
      codec => json 
     } 
} 

Thank you very much,

Andrea

Answer


This was the error (a typo left over from an earlier test):

date { 
    match => [ "a_timeStamp", "yyyy'-'MM'-'dd'T'HH:mm:ss.SSS" ] 
} 
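
For context: when no target is set, the date filter writes the parsed time to @timestamp, which is why this filter was silently replacing the Beats-supplied @timestamp with the parsed a_timeStamp value. A minimal sketch of how to keep the parsed time without touching @timestamp (the target field name a_timeStamp_parsed is only an illustration):

date { 
    match  => [ "a_timeStamp", "yyyy'-'MM'-'dd'T'HH:mm:ss.SSS" ] 
    # the date filter defaults to target => "@timestamp"; an explicit 
    # target stores the parsed time in a separate field instead 
    target => "a_timeStamp_parsed" 
} 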

Thanks anyway, guys!
