
Unable to access the Kubernetes Dashboard

I have created a Kubernetes v1.3.3 cluster on CoreOS, based on the contrib repo. My cluster looks healthy and I would like to use the Dashboard, but I cannot access the UI even with all authentication disabled. Below are the details of the kubernetes-dashboard components, along with some API server configuration/output. What am I missing here?

Dashboard components

[email protected] ~ $ kubectl get ep kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: 2016-07-28T23:40:57Z
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "345970"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kubernetes-dashboard
  uid: bb49360f-551c-11e6-be8c-02b43b6aa639
subsets:
- addresses:
  - ip: 172.16.100.9
    targetRef:
      kind: Pod
      name: kubernetes-dashboard-v1.1.0-nog8g
      namespace: kube-system
      resourceVersion: "345969"
      uid: d4791722-5908-11e6-9697-02b43b6aa639
  ports:
  - port: 9090
    protocol: TCP

[email protected] ~ $ kubectl get svc kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-07-28T23:40:57Z
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "109199"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: bb4804bd-551c-11e6-be8c-02b43b6aa639
spec:
  clusterIP: 172.20.164.194
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
[email protected] ~ $ kubectl describe svc/kubernetes-dashboard --namespace=kube-system
Name:   kubernetes-dashboard 
Namespace:  kube-system 
Labels:   k8s-app=kubernetes-dashboard 
      kubernetes.io/cluster-service=true 
Selector:  k8s-app=kubernetes-dashboard 
Type:   ClusterIP 
IP:   172.20.164.194 
Port:   <unset> 80/TCP 
Endpoints:  172.16.100.9:9090 
Session Affinity: None 
No events. 

[email protected] ~ $ kubectl get po kubernetes-dashboard-v1.1.0-nog8g --namespace=kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kubernetes-dashboard-v1.1.0","uid":"3a282a06-58c9-11e6-9ce6-02b43b6aa639","apiVersion":"v1","resourceVersion":"338823"}}
  creationTimestamp: 2016-08-02T23:28:34Z
  generateName: kubernetes-dashboard-v1.1.0-
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    version: v1.1.0
  name: kubernetes-dashboard-v1.1.0-nog8g
  namespace: kube-system
  resourceVersion: "345969"
  selfLink: /api/v1/namespaces/kube-system/pods/kubernetes-dashboard-v1.1.0-nog8g
  uid: d4791722-5908-11e6-9697-02b43b6aa639
spec:
  containers:
  - image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /
        port: 9090
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 30
    name: kubernetes-dashboard
    ports:
    - containerPort: 9090
      protocol: TCP
    resources:
      limits:
        cpu: 100m
        memory: 50Mi
      requests:
        cpu: 100m
        memory: 50Mi
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-lvmnw
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: ip-10-178-153-57.us-west-2.compute.internal
  restartPolicy: Always
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: default-token-lvmnw
    secret:
      secretName: default-token-lvmnw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:34Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:35Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2016-08-02T23:28:34Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://1bf65bbec830e32e85e1cd9e22a5db7a2b623c6d9d7da17c747d256a9838676f
    image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    imageID: docker://sha256:d023c050c0651bd96508b874ca1cd628fd0077f8327e1aeec92d22070b331c53
    lastState: {}
    name: kubernetes-dashboard
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-08-02T23:28:34Z
  hostIP: 10.178.153.57
  phase: Running
  podIP: 172.16.100.9
  startTime: 2016-08-02T23:28:34Z

API server configuration

/opt/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 --insecure-bind-address=0.0.0.0 --secure-port=443 --allow-privileged=true --service-cluster-ip-range=172.20.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,ResourceQuota --bind-address=0.0.0.0 --cloud-provider=aws 

The API server is reachable from a remote host (laptop):

$ curl http://10.178.153.240:8080/
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/apps",
    "/apis/apps/v1alpha1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v2alpha1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/policy",
    "/apis/policy/v1alpha1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1alpha1",
    "/healthz",
    "/healthz/ping",
    "/logs/",
    "/metrics",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}

The UI is not reachable remotely:

$ curl -L http://10.178.153.240:8080/ui 
Error: 'dial tcp 172.16.100.9:9090: i/o timeout' 
Trying to reach: 'http://172.16.100.9:9090/' 

The UI is reachable from the minion node:

[email protected] ~$ curl -L 172.16.100.9:9090 
<!doctype html> <html ng-app="kubernetesDashboard">... 

API server routing table

[email protected] ~ $ ip route show 
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.240 metric 1024 
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.240 
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.240 metric 1024 
172.16.0.0/12 dev flannel.1 proto kernel scope link src 172.16.6.0 
172.16.6.0/24 dev docker0 proto kernel scope link src 172.16.6.1 

Minion (where the pod lives) routing table

[email protected] ~ $ ip route show 
default via 10.178.153.1 dev eth0 proto dhcp src 10.178.153.57 metric 1024 
10.178.153.0/24 dev eth0 proto kernel scope link src 10.178.153.57 
10.178.153.1 dev eth0 proto dhcp scope link src 10.178.153.57 metric 1024 
172.16.0.0/12 dev flannel.1 
172.16.100.0/24 dev docker0 proto kernel scope link src 172.16.100.1 

Flannel logs

It appears that this route is the one misbehaving with flannel. I see these errors in the logs, but restarting the daemon does not seem to resolve it.

...Watch subnets: client: etcd cluster is unavailable or misconfigured 

... L3 miss: 172.16.100.9 

... calling NeighSet: 172.16.100.9 
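A quick way to check what flannel actually sees in etcd is to query the subnet leases directly. This is only a diagnostic sketch, assuming flannel's default etcd prefix of /coreos.com/network and the etcd endpoint from the kube-apiserver flags above; adjust both if your flannel daemon is configured differently.

# List the subnet leases flannel has registered (etcd v2 API)
etcdctl --endpoints http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 ls /coreos.com/network/subnets

# Show the overlay network configuration flannel was started with
etcdctl --endpoints http://internal-etcd-elb-236896596.us-west-2.elb.amazonaws.com:80 get /coreos.com/network/config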
+0

Could this be an issue of the service not being defined or created? Could you try pasting the output of kubectl describe svc? –

+0

It definitely is, @SantanuDey. I have added the describe call to the OP, and it is reaching the 172.16.100.9 endpoint that serves the page. – smugcloud

+0

If you get a response from the service's :9090, that means it is correct. You may need to define an additional service of type NodePort to be able to access it using a node IP, or from outside. –

Answers

1

For anyone who finds their way to this question, I wanted to post the final resolution, because it was not a flannel, Kubernetes, or SkyDNS issue: it was an inadvertent firewall. As soon as I opened up the firewall on the API server, my flannel routes worked perfectly and I could access the Dashboard (assuming basic auth was enabled on the API server).

So in the end, user error :)

+0

What traffic does Flannel need in order to behave properly? Thanks! –

+0

I had TCP traffic open in the firewall, but not UDP. Opening up that restriction resolved it. – smugcloud
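For reference, a sketch of the kind of rules that matter here: flannel's vxlan backend encapsulates pod traffic in UDP on port 8472 by default (the older udp backend uses 8285), so that traffic has to be allowed between the cluster nodes in iptables and/or the AWS security groups. Ports below are the flannel defaults; check your own backend configuration.

# Allow flannel overlay traffic between cluster nodes
iptables -A INPUT -p udp --dport 8472 -j ACCEPT   # vxlan backend
iptables -A INPUT -p udp --dport 8285 -j ACCEPT   # udp backend, if used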

0

If you try adding an additional service like the definition below, then I think you should be able to access the Dashboard using any node's IP and the nodePort, which in this example is 30100:

kind: Service
apiVersion: v1
metadata:
  name: kube-expose-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    nodePort: 30100
    targetPort: 9090
  selector:
    # must match the dashboard pod's labels shown in the question
    k8s-app: kubernetes-dashboard
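With a service like that in place (and assuming the node firewall/security group allows the port), the Dashboard should answer on any node's IP, for example the minion from the question:

curl http://10.178.153.57:30100/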
+2

Correct. There is no need to create another service for this; a NodePort can be added by simply patching the existing service, e.g. with 'kubectl edit svc kubernetes-dashboard'. –
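A sketch of that in-place approach using 'kubectl patch' instead of an interactive edit (the apiserver allocates a nodePort automatically unless one is specified):

# Switch the existing dashboard service to type NodePort
kubectl --namespace=kube-system patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'

# See which nodePort was allocated
kubectl --namespace=kube-system get svc kubernetes-dashboard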

1

Either you have to expose the service outside the cluster using a service of type NodePort, as mentioned in the previous answer, or, if you enabled basic auth on your API server, you can reach your service with the following URL:

http://kubernetes_master_address/api/v1/proxy/namespaces/namespace_name/services/service_name

See http://kubernetes.io/docs/user-guide/accessing-the-cluster/#manually-constructing-apiserver-proxy-urls
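For example, with basic auth enabled on the API server (via its --basic-auth-file flag), something like the following, where admin:secret is a placeholder for whatever credentials are in that file:

# Reach the dashboard through the API server's HTTPS proxy endpoint
# (-k skips certificate verification if the apiserver cert is not trusted locally)
curl -k -u admin:secret https://10.178.153.240/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/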

+0

Thanks Antoine. I have added a basic auth file and restarted the api-server, and I am still seeing the 'Error: 'dial tcp 172.16.100.9:9090: i/o timeout' Trying to reach: 'http://172.16.100.9:9090/'' issue. If I try to curl with a base64 user:pw it results in Unauthorized. I did try the NodePort as well, and was unable to access the underlying container externally. Could there be a misconfiguration in the proxy? @antoine-cotten – smugcloud

+0

That is because you are trying to reach your service IP, which most likely is not routable from the network your workstation is on! Try the URL I posted (substituting 'namespace_name' and 'service_name'), and make sure you are using the *master* IP/address. –

+0

Yes, that is what I am doing. Here is the full URL I am trying to hit: https://10.178.153.240/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/ – smugcloud