DNS Addon on a Kubernetes CentOS 7 Cluster

I have been struggling to get the DNS add-on working on a CentOS 7.2 cluster. I installed the cluster using the directions here: http://severalnines.com/blog/installing-kubernetes-cluster-minions-centos7-manage-pods-services
In this configuration, the master runs etcd, kube-scheduler, kube-apiserver, and kube-controller-manager. The nodes run docker, kubelet, kube-proxy, and flanneld. The cluster works fine in this configuration: pods and services are all working. The next step was to try to enable DNS.
Note: this cluster does not use certificates for authentication.
There are several "guides" for how to do this, but none of them seem to apply to this type of cluster.
First, can you help clear up some confusion: where do the DNS add-on containers run?
- Do they have to run on the master?
- Can they be deployed like any other pod on the cluster?
Here is what I have tried so far:
Kubernetes version: vanilla install from yum.
# kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"a4463d9a1accc9c61ae90ce5d314e248f16b9f05", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"a4463d9a1accc9c61ae90ce5d314e248f16b9f05", GitTreeState:"clean"}
In the skydns-rc.yaml file below, I have replaced the template variables, set 1 replica, and set dns_domain to "cluster.local". Per some suggestions on StackOverflow, I added the command-line flag `--kube-master-url=http://10.2.1.245:8080` to the `kube-dns` container.
skydns-rc.yaml (pointing at v18 of kube-dns):
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v18
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v18
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v18
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v18
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.6
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube-dns"
        - --domain=cluster.local
        - --dns-port=10053
        - --kube-master-url=http://10.2.1.245:8080
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
        - -port=8080
        - -quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
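One thing worth checking: the replication controller above only creates the DNS pods. The stock skydns add-on also ships a companion Service that pins DNS to a fixed cluster IP, and that IP has to match the DNS_SERVER_IP handed to the kubelets. A minimal sketch, assuming the 10.254.100.1 address used elsewhere in this post (the name `skydns-svc.yaml` is just my label for it):

```yaml
# skydns-svc.yaml -- companion Service for the RC above (a sketch;
# clusterIP must match the kubelets' --cluster-dns and fall inside
# the service CIDR, here assumed to be 10.254.0.0/16)
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.100.1
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
```

Without this Service, pods have nothing at the configured DNS IP to send queries to, even if the kube-dns pod itself is healthy.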
On each node (the master and the 3 minions) I updated /etc/kubernetes/config, adding the DNS section at the end (the complete file is posted below).
If I am using the replication controller above, do I still need to add these?
/etc/kubernetes/config
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
# DNS Add-on
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="10.254.100.1"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
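As far as I can tell, the ENABLE_CLUSTER_DNS / DNS_SERVER_IP / DNS_DOMAIN variables above are only read by the cluster turn-up scripts, not by the daemons themselves; what the running cluster actually needs is for each kubelet to be started with matching flags. A sketch of what I would expect in /etc/kubernetes/kubelet (that file name follows the severalnines layout and is an assumption; adjust to your install, then restart the kubelet):

```shell
# /etc/kubernetes/kubelet (sketch) -- hand every kubelet the cluster
# DNS Service IP and domain, matching DNS_SERVER_IP / DNS_DOMAIN above.
# After editing: systemctl restart kubelet
KUBELET_ARGS="--cluster-dns=10.254.100.1 --cluster-domain=cluster.local"
```

With these flags set, new pods get the DNS Service IP written into their /etc/resolv.conf automatically.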
Here is what I see when deploying KubeDNS:
[[email protected] dcook]# kubectl create -f kube-fun/skydns-rc.yaml
replicationcontroller "kube-dns-v18" created
[[email protected] dcook]# kubectl get rc kube-dns-v18 --namespace kube-system
NAME           DESIRED   CURRENT   AGE
kube-dns-v18   1         1         34s
[[email protected] dcook]# kubectl get pods --namespace kube-system
NAME                 READY     STATUS    RESTARTS   AGE
kube-dns-v18-cx4ir   3/3       Running   0          46s
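Once the pod reports 3/3 Running, a quick end-to-end check is to resolve a service name from a throwaway pod. A sketch using the stock busybox image (nothing here is specific to this cluster):

```yaml
# busybox.yaml -- throwaway pod for exercising cluster DNS
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
```

Then `kubectl create -f busybox.yaml` followed by `kubectl exec busybox -- nslookup kubernetes.default` should return the 10.254.0.1 service IP if DNS is wired up.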
Logs:
[[email protected] dcook]# kubectl logs --namespace="kube-system" kube-dns-v18-cx4ir kubedns
I0726 20:17:52.675064 1 server.go:91] Using http://10.2.1.245:8080 for kubernetes master
I0726 20:17:52.676138 1 server.go:92] Using kubernetes API v1
I0726 20:17:52.676498 1 server.go:132] Starting SkyDNS server. Listening on port:10053
I0726 20:17:52.676815 1 server.go:139] skydns: metrics enabled on :/metrics
I0726 20:17:52.676836 1 dns.go:166] Waiting for service: default/kubernetes
I0726 20:17:52.677584 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0726 20:17:52.677604 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0726 20:17:52.867455 1 server.go:101] Setting up Healthz Handler(/readiness, /cache) on port :8081
I0726 20:17:52.867843 1 dns.go:660] DNS Record:&{10.254.0.1 0 10 10 false 30 0 }, hash:63b49cf0
I0726 20:17:52.867898 1 dns.go:660] DNS Record:&{kubernetes.default.svc.cluster.local. 443 10 10 false 30 0 }, hash:c3f6ae26
I0726 20:17:52.868048 1 dns.go:660] DNS Record:&{kubernetes.default.svc.cluster.local. 0 10 10 false 30 0 }, hash:b9b7d845
I0726 20:17:52.868103 1 dns.go:660] DNS Record:&{10.254.91.7 0 10 10 false 30 0 }, hash:9b59fd9c
I0726 20:17:52.868137 1 dns.go:660] DNS Record:&{my-nginx.default.svc.cluster.local. 0 10 10 false 30 0 }, hash:b0f41a92
[[email protected] dcook]# kubectl logs --namespace="kube-system" kube-dns-v18-cx4ir healthz
2016/07/26 20:17:11 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-07-26 20:17:10.667247682 +0000 UTC, error exit status 1
2016/07/26 20:17:21 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-07-26 20:17:20.667213321 +0000 UTC, error exit status 1
2016/07/26 20:17:31 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-07-26 20:17:30.667225804 +0000 UTC, error exit status 1
2016/07/26 20:17:41 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-07-26 20:17:40.667218056 +0000 UTC, error exit status 1
2016/07/26 20:17:51 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-07-26 20:17:50.667724036 +0000 UTC, error exit status 1
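One observation on the healthz output above: every error is timestamped between 20:17:11 and 20:17:51, while the kubedns log shows SkyDNS only became ready for queries at 20:17:52, so these may just be startup noise rather than an ongoing failure. To check whether resolution works now, SkyDNS can be queried directly on its local port from inside the pod (pod name taken from the `kubectl get pods` output above):

```shell
# Ask the kubedns container to resolve the master service via the
# SkyDNS listener on 10053 -- a sketch; requires exec access to the pod
kubectl exec --namespace=kube-system kube-dns-v18-cx4ir -c kubedns -- \
  nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053
```

If that succeeds but pods elsewhere still cannot resolve names, the problem is more likely the missing/mismatched DNS Service IP or kubelet flags than the DNS pod itself.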