2016-10-03

I am running a 2-node Elasticsearch cluster, with every index configured for 2 primary shards and 1 replica. I initially assumed each node would hold one primary and one replica of each shard, but that is not what is happening: Elasticsearch is not allocating the shards and replicas correctly.

curl -XGET http://localhost:9200/_cat/shards 
.kibana     0 p STARTED  1 3.1kb 10.151.6.98 Eleggua 
.kibana     0 r UNASSIGNED 
logstash-sflow-2016.10.03 1 p STARTED  738 644.4kb 10.151.6.98 Eleggua 
logstash-sflow-2016.10.03 1 r UNASSIGNED 
logstash-sflow-2016.10.03 0 p STARTED  783 618.4kb 10.151.6.98 Eleggua 
logstash-sflow-2016.10.03 0 r UNASSIGNED 
logstash-ipf-2016.10.03 1 p STARTED 8480 3.9mb 10.151.6.98 Eleggua 
logstash-ipf-2016.10.03 1 r UNASSIGNED 
logstash-ipf-2016.10.03 0 p STARTED 8656 6.3mb 10.151.6.98 Eleggua 
logstash-ipf-2016.10.03 0 r UNASSIGNED 
logstash-raw-2016.10.03 1 p STARTED  254 177.9kb 10.151.6.98 Eleggua 
logstash-raw-2016.10.03 1 r UNASSIGNED 
logstash-raw-2016.10.03 0 p STARTED  274 180kb 10.151.6.98 Eleggua 
logstash-raw-2016.10.03 0 r UNASSIGNED 
logstash-pf-2016.10.03 1 p STARTED 4340 2.9mb 10.151.6.98 Eleggua 
logstash-pf-2016.10.03 1 r UNASSIGNED 
logstash-pf-2016.10.03 0 p STARTED 4234 5.7mb 10.151.6.98 Eleggua 
logstash-pf-2016.10.03 0 r UNASSIGNED 

As shown above, every primary shard is hosted on a single node and none of the replicas have been assigned.
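The problem can be quantified straight from the `_cat/shards` listing; a minimal sketch that counts the UNASSIGNED rows from a pasted excerpt (on a live cluster, replace the heredoc with the `curl` call itself):

```shell
# Count unassigned shard copies in a `_cat/shards` listing.
# The heredoc holds an excerpt of the output above; on a live cluster use:
#   shards=$(curl -s 'http://localhost:9200/_cat/shards')
shards=$(cat <<'EOF'
.kibana                   0 p STARTED    1   3.1kb 10.151.6.98 Eleggua
.kibana                   0 r UNASSIGNED
logstash-sflow-2016.10.03 1 p STARTED  738 644.4kb 10.151.6.98 Eleggua
logstash-sflow-2016.10.03 1 r UNASSIGNED
EOF
)
unassigned_count=$(echo "$shards" | grep -c 'UNASSIGNED')
echo "unassigned shards: $unassigned_count"
```

In the full listing above, this count would be 9 — one replica per primary, all unassigned.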

curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty=true' 
{ 
    "cluster_name" : "es_gts_seginfo", 
    "status" : "yellow", 
    "timed_out" : false, 
    "number_of_nodes" : 2, 
    "number_of_data_nodes" : 2, 
    "active_primary_shards" : 9, 
    "active_shards" : 9, 
    "relocating_shards" : 0, 
    "initializing_shards" : 0, 
    "unassigned_shards" : 9, 
    "delayed_unassigned_shards" : 0, 
    "number_of_pending_tasks" : 0, 
    "number_of_in_flight_fetch" : 0, 
    "task_max_waiting_in_queue_millis" : 0, 
    "active_shards_percent_as_number" : 50.0 
} 
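The health output is consistent with the shard listing: 9 primaries are active and all 9 replicas are unassigned, which is exactly the reported `active_shards_percent_as_number` of 50. A quick sanity check of that arithmetic:

```shell
# 9 active primary shards plus 9 unassigned replicas = 18 expected shard copies.
active=9
unassigned=9
total=$((active + unassigned))
pct=$((100 * active / total))
echo "active shards percent: $pct"
```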

What am I doing wrong?


Can you post your cluster settings? Do you see anything in the Elasticsearch logs? What is the output of `curl -XPOST 'http://localhost:9200/_cluster/reroute?explain'`? –


Have you tried the steps mentioned here: http://stackoverflow.com/a/23816954/689625? Is there a limit set on the number of shards per node? https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-total-shards.html – jay


Can you show the network configuration of your nodes? Do they "see" each other, i.e. have they discovered one another? – Val

Answer


Thanks everyone, I was able to figure out the problem: one of my nodes was running 2.4.0 and the other 2.4.1. Because of that version mismatch, this reroute did not work:

curl -XPOST -d '{ "commands" : [ { 
  "allocate" : { 
   "index" : ".kibana", 
   "shard" : 0, 
   "node" : "proc-gts-elk01", 
   "allow_primary":true 
   } 
 } ] }' http://localhost:9200/_cluster/reroute?pretty 
{ 
    "error" : { 
    "root_cause" : [ { 
     "type" : "illegal_argument_exception", 
     "reason" : "[allocate] allocation of [.kibana][0] on node {proc-gts-elk01}{dhLrHPqTR0y9IkU_kFS5Cw}{10.151.6.19}{10.151.6.19:9300}{max_local_storage_nodes=1, hostname=proc-gts-elk01, data=yes, master=yes} is not allowed, reason: [YES(below shard recovery limit of [2])][YES(node passes include/exclude/require filters)][YES(primary is already active)][YES(enough disk for shard on node, free: [81.4gb])][YES(shard not primary or relocation disabled)][YES(shard is not allocated to same node or host)][YES(allocation disabling is ignored)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(node meets awareness requirements)][YES(allocation disabling is ignored)][NO(target node version [2.4.0] is older than source node version [2.4.1])]" 
    } ], 
    "type" : "illegal_argument_exception", 
    "reason" : "[allocate] allocation of [.kibana][0] on node {proc-gts-elk01}{dhLrHPqTR0y9IkU_kFS5Cw}{10.151.6.19}{10.151.6.19:9300}{max_local_storage_nodes=1, hostname=proc-gts-elk01, data=yes, master=yes} is not allowed, reason: [YES(below shard recovery limit of [2])][YES(node passes include/exclude/require filters)][YES(primary is already active)][YES(enough disk for shard on node, free: [81.4gb])][YES(shard not primary or relocation disabled)][YES(shard is not allocated to same node or host)][YES(allocation disabling is ignored)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(node meets awareness requirements)][YES(allocation disabling is ignored)][NO(target node version [2.4.0] is older than source node version [2.4.1])]" 
    }, 
    "status" : 400 
}
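The final `NO(target node version [2.4.0] is older than source node version [2.4.1])` decider is the key: a shard cannot be recovered onto a node running an older Elasticsearch version than the node holding the source copy, so upgrading the 2.4.0 node to 2.4.1 lets the replicas allocate. A mismatch like this can be spotted from `_cat/nodes`; a minimal sketch over a pasted sample (node names taken from the output above; on a live cluster, replace the heredoc with `curl -s 'http://localhost:9200/_cat/nodes?h=name,version'`):

```shell
# Detect mixed Elasticsearch versions across cluster nodes.
nodes=$(cat <<'EOF'
proc-gts-elk01 2.4.0
Eleggua        2.4.1
EOF
)
distinct=$(echo "$nodes" | awk '{print $2}' | sort -u | wc -l | tr -d ' ')
if [ "$distinct" -gt 1 ]; then
  echo "version mismatch: replicas may stay unassigned"
fi
```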