2017-03-06
I'm new to the ELK stack. I have Logstash sending data from MySQL to Elasticsearch, and in the terminal it looks like all 40,000 records have been sent, but when I look in Kibana I find only 200 records were indexed.
Here is the Logstash config file I am using. Not all of the data from Logstash is being indexed in Elasticsearch.

# file: simple-out.conf 
input { 
    jdbc { 
        # MySQL JDBC connection string to our database, tweets_articles 
        jdbc_connection_string => "jdbc:mysql://localhost:3306/tweets_articles" 
        # The path to our downloaded JDBC driver 
        jdbc_driver_library => "/etc/elasticsearch/elasticsearch-jdbc-2.3.3.1/lib/mysql-connector-java-5.1.38.jar" 
        # The name of the driver class for MySQL 
        jdbc_driver_class => "com.mysql.jdbc.Driver" 
        # The user we wish to execute our statement as 
        jdbc_user => "**" 
        jdbc_password => "***" 
        # our query 
        statement => "SELECT * from tweets" 
    } 
} 
output { 
    elasticsearch { hosts => ["localhost:9200"] } 
    stdout { codec => rubydebug } 
} 
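As an aside, with 40,000 rows it can help to let the jdbc input page through the result set instead of pulling everything in one query. A minimal sketch of the two extra settings inside the jdbc block (these option names come from the logstash-input-jdbc plugin; the page size of 10,000 is an arbitrary choice, not something from the original config):

```
input { 
    jdbc { 
        jdbc_connection_string => "jdbc:mysql://localhost:3306/tweets_articles" 
        # ... driver and credential settings as above ... 
        # Fetch the result set in pages rather than one large query 
        jdbc_paging_enabled => true 
        jdbc_page_size => 10000 
        statement => "SELECT * from tweets" 
    } 
} 
```

With paging enabled, the plugin wraps the statement with LIMIT/OFFSET clauses, so the statement itself should not already contain its own LIMIT.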

Is this a date problem? This is the format I see when I print the records in MySQL:

+---------------------+ 
| PUBLISHED_AT        | 
+---------------------+ 
| 2017-03-06 03:43:51 | 
| 2017-03-06 03:43:45 | 
| 2017-03-06 03:43:42 | 
| 2017-03-06 03:43:30 | 
| 2017-03-06 03:43:00 | 
+---------------------+ 
5 rows in set (0.00 sec) 

But when I look at the config output in the terminal, it looks like this:

             "id" => 41298, 
         "author" => "b'Terk'", 
  "retweet_count" => "0", 
 "favorite_count" => "0", 
"followers_count" => "49", 
  "friends_count" => "23", 
           "body" => "create an ad", 
   "published_at" => "2017-03-06T07:30:47.000Z", 
       "@version" => "1", 
     "@timestamp" => "2017-03-06T06:44:04.756Z" 

Can anyone see why I'm not getting all 40,000 records?
Thanks.

+0

Have you configured your Kibana to always search across all log entries? – pandaadb

+0

I used DigitalOcean's ELK stack droplet, so I'm not quite sure how it's configured. How would I do that? – Definity

+1

In the top right corner (I believe), you can define the time range you are searching – pandaadb
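Building on the time-range point: the rubydebug output above shows @timestamp set to the ingest time, not the tweet's published_at, so events whose tweet times fall outside Kibana's selected window won't appear. A minimal sketch, assuming you want Kibana's time filter to follow the tweets' own timestamps, is a date filter that copies published_at into @timestamp (the field name matches the output shown in the question):

```
filter { 
    date { 
        # Parse the ISO8601 published_at value and use it as @timestamp 
        match => ["published_at", "ISO8601"] 
        target => "@timestamp" 
    } 
} 
```

This only changes which records the time filter matches; if the count in Kibana is still low after widening the time range, the mismatch lies elsewhere.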

Answers
