I have a running ES cluster. At one point I had all primary and replica shards correctly allocated across 4 of the 5 nodes, but in trying to get some of them onto the 5th node I lost my replica shards again. Now my primary shards exist on only 3 nodes, and the cluster has unassigned shards.
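For reference, the unassigned shards can be listed with the cat shards API; a minimal sketch (host and port are placeholders):

curl -XGET 'http://localhost:9200/_cat/shards?v' | grep UNASSIGNED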
I am trying to get to the bottom of this. Among other things, I have tried forcing allocation:
{
  "commands": [
    {
      "allocate": {
        "index": "group7to11poc",
        "shard": 7,
        "node": "SPOCNODE1"
      }
    }
  ]
}
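That command body is what gets posted to the cluster reroute endpoint; a minimal sketch of the full request, with host and port as placeholders:

curl -XPOST 'http://localhost:9200/_cluster/reroute?pretty' -d '{
  "commands": [
    {
      "allocate": {
        "index": "group7to11poc",
        "shard": 7,
        "node": "SPOCNODE1"
      }
    }
  ]
}'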
I get the following response, and I cannot pinpoint the exact problem!
{
  "explanations": [
    {
      "command": "allocate",
      "parameters": {
        "index": "group7to11poc",
        "shard": 7,
        "node": "SPOCNODE5",
        "allow_primary": true
      },
      "decisions": [
        {
          "decider": "same_shard",
          "decision": "YES",
          "explanation": "shard is not allocated to same node or host"
        },
        {
          "decider": "filter",
          "decision": "NO",
          "explanation": "node does not match index include filters [_id:\"4rZYPBOGRMK4y9YG6p7E2w\"]"
        },
        {
          "decider": "replica_after_primary_active",
          "decision": "YES",
          "explanation": "primary is already active"
        },
        {
          "decider": "throttling",
          "decision": "YES",
          "explanation": "below shard recovery limit of [2]"
        },
        {
          "decider": "enable",
          "decision": "YES",
          "explanation": "allocation disabling is ignored"
        },
        {
          "decider": "disable",
          "decision": "YES",
          "explanation": "allocation disabling is ignored"
        },
        {
          "decider": "awareness",
          "decision": "YES",
          "explanation": "no allocation awareness enabled"
        },
        {
          "decider": "shards_limit",
          "decision": "YES",
          "explanation": "total shard limit disabled: [-1] <= 0"
        },
        {
          "decider": "node_version",
          "decision": "YES",
          "explanation": "target node version [1.3.2] is same or newer than source node version [1.3.2]"
        },
        {
          "decider": "disk_threshold",
          "decision": "YES",
          "explanation": "disk usages unavailable"
        },
        {
          "decider": "snapshot_in_progress",
          "decision": "YES",
          "explanation": "shard not primary or relocation disabled"
        }
      ]
    }
  ]
}
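The only NO in the response comes from the filter decider: the index carries an allocation include filter pinned to a single node ID (4rZYPBOGRMK4y9YG6p7E2w), so its shards may only be placed on that node. A sketch of how one might inspect and clear such a filter (host and port are placeholders; the setting name index.routing.allocation.include._id is inferred from the decider message, so verify it against the actual index settings first):

# Show the index settings; look for index.routing.allocation.* filters
curl -XGET 'http://localhost:9200/group7to11poc/_settings?pretty'

# Clear the include filter (an empty value removes the restriction)
curl -XPUT 'http://localhost:9200/group7to11poc/_settings' -d '{
  "index.routing.allocation.include._id": ""
}'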
Thanks for your support. That just solved my problem. – David 2014-11-25 14:24:37
Sir, when I try to force-allocate shards I hit the same problem/error as above. My question is: how did you figure out the filters on the index? I don't know anything about them. Secondly, how does a filter block shards from being moved or allocated, and what is the main reason? Many thanks. – 2016-11-10 13:18:37