I am implementing an HA solution for the Kubernetes master nodes in a CentOS 7 environment.
My environment looks like this:
K8S_Master1 : 172.16.16.5
K8S_Master2 : 172.16.16.51
HAProxy : 172.16.16.100
K8S_Minion1 : 172.16.16.50
etcd Version: 3.1.7
Kubernetes v1.5.2
CentOS Linux release 7.3.1611 (Core)
My etcd cluster is installed correctly and is in working order:
[root@master1 ~]# etcdctl cluster-health
member 282a4a2998aa4eb0 is healthy: got healthy result from http://172.16.16.51:2379
member dd3979c28abe306f is healthy: got healthy result from http://172.16.16.5:2379
member df7b762ad1c40191 is healthy: got healthy result from http://172.16.16.50:2379
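For completeness, the cluster membership can also be listed with etcdctl's standard member subcommand (nothing specific to my setup; output omitted):
[root@master1 ~]# etcdctl member list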
My Kubernetes configuration for Master1 is:
[root@master1 ~]# cat /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.100.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
[root@master1 ~]# cat /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://127.0.0.1:8080"
[root@master1 ~]# cat /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--leader-elect"
[root@master1 ~]# cat /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--leader-elect"
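Whenever these files change, I restart the master components so the new flags take effect (a plain systemd sequence; the unit names are those of the stock CentOS kubernetes packages):
[root@master1 ~]# systemctl restart kube-apiserver kube-controller-manager kube-scheduler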
As for Master2, I have configured it as:
[root@master2 kubernetes]# cat apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.100.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
[root@master2 kubernetes]# cat config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://127.0.0.1:8080"
[root@master2 kubernetes]# cat scheduler
KUBE_SCHEDULER_ARGS=""
[root@master2 kubernetes]# cat controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""
Note that --leader-elect is configured only on Master1, since I want Master1 to be the leader.
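As an aside: when --leader-elect is enabled, the scheduler and controller-manager take a lock on an endpoints object in the kube-system namespace, so if both masters ran with the flag, the current lock holder could be inspected like this (a diagnostic sketch; in Kubernetes 1.5 the lock is recorded in the control-plane.alpha.kubernetes.io/leader annotation):
[root@master1 ~]# kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep leader
[root@master1 ~]# kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep leader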
My HAProxy configuration is simple:
frontend K8S-Master
    bind 172.16.16.100:8080
    default_backend K8S-Master-Nodes

backend K8S-Master-Nodes
    mode http
    balance roundrobin
    server master1 172.16.16.5:8080 check
    server master2 172.16.16.51:8080 check
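One optional refinement I have not applied yet: since the backend is in mode http, the health check can probe the API server's /healthz endpoint rather than doing a bare TCP connect, e.g.:
backend K8S-Master-Nodes
    mode http
    balance roundrobin
    option httpchk GET /healthz
    server master1 172.16.16.5:8080 check
    server master2 172.16.16.51:8080 check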
Now I have pointed the minions at the load balancer's IP instead of directly at a master's IP. The configuration on the minion is:
[root@minion1 kubernetes]# cat /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://172.16.16.100:8080"
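From the minion I can sanity-check that an API server is reachable through the load balancer (kube-apiserver answers /healthz with "ok" on its insecure 8080 port):
[root@minion1 kubernetes]# curl http://172.16.16.100:8080/healthz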
On both master nodes I can see the minion/node status as Ready:
[root@master1 ~]# kubectl get nodes
NAME STATUS AGE
172.16.16.50 Ready 2h
[root@master2 ~]# kubectl get nodes
NAME STATUS AGE
172.16.16.50 Ready 2h
I set up an example pod using nginx:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
I created the replication controller using:
[root@master1 ~]# kubectl create -f nginx.yaml
And on both master nodes I can see the pods that were created:
[root@master1 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-jwpxd 1/1 Running 0 29m
nginx-q613j 1/1 Running 0 29m
[root@master2 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-jwpxd 1/1 Running 0 29m
nginx-q613j 1/1 Running 0 29m
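The replication controller itself can be queried on either master as well (standard kubectl; output omitted):
[root@master1 ~]# kubectl get rc nginx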
Now, thinking logically, if I take down the Master1 node and delete the pods from Master2, then Master2 should recreate the pods. So that is what I did.
On Master1:
[root@master1 ~]# systemctl stop kube-scheduler ; systemctl stop kube-apiserver ; systemctl stop kube-controller-manager
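To confirm all three components are really down, the unit states can be queried (plain systemd, nothing Kubernetes-specific):
[root@master1 ~]# systemctl is-active kube-scheduler kube-apiserver kube-controller-manager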
On Master2:
[root@master2 kubernetes]# kubectl delete po --all
pod "nginx-l7mvc" deleted
pod "nginx-r3m58" deleted
Now Master2 should create the pods, since the replication controller is still up. But the new pods are stuck in:
[root@master2 kubernetes]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-l7mvc 1/1 Terminating 0 13m
nginx-qv6z9 0/1 Pending 0 13m
nginx-r3m58 1/1 Terminating 0 13m
nginx-rplcz 0/1 Pending 0 13m
I waited a long time, but the pods remain stuck in this state.
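The usual way to dig into why they are stuck would be to describe one of the Pending pods and check the cluster events, which is where scheduling errors normally show up (pod name taken from the listing above):
[root@master2 kubernetes]# kubectl describe po nginx-qv6z9
[root@master2 kubernetes]# kubectl get events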
But when I restart the Master1 services:
[root@master1 ~]# systemctl start kube-scheduler ; systemctl start kube-apiserver ; systemctl start kube-controller-manager
then I see the pods make progress:
NAME READY STATUS RESTARTS AGE
nginx-qv6z9 0/1 ContainerCreating 0 14m
nginx-rplcz 0/1 ContainerCreating 0 14m
[root@master2 kubernetes]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-qv6z9 1/1 Running 0 15m
nginx-rplcz 1/1 Running 0 15m
Why doesn't Master2 recreate the pods? That is the confusion I am trying to sort out. I have come a long way toward a fully functional HA setup, but it seems it will only be complete once I can figure out this puzzle.