I have deployed a Kubernetes master and one worker node on two private OpenStack cloud instances, following the instructions at http://kubernetes.io/docs/getting-started-guides/docker-multinode/. The problem is that when I send a request to a service's cluster IP, the service only forwards it successfully to pods deployed on the local node. The remote pods, however, are reachable directly via their respective pod IPs.
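For reference, both nodes were brought up with the scripts that ship with that guide; the invocation was roughly as follows (a sketch only - the master.sh/worker.sh script names and the MASTER_IP variable come from the linked docker-multinode guide and may differ between versions):

# on the master instance
./master.sh

# on the worker instance, pointing at the master's eth0 address
export MASTER_IP=172.17.13.43
./worker.sh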
For example, the mongo service consists of a single pod deployed on the worker node.
$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes   10.0.0.1     <none>        443/TCP     18h
mongo        10.0.0.208   nodes         27017/TCP   17h
nginx        10.0.0.85    <none>        80/TCP      17h
As shown below, the pod IP is reachable, but the pod cannot be reached through the service IP:
$ kubectl describe pods -l name=mongo | grep IP
IP: 10.1.78.2
$ curl 10.1.78.2:27017
It looks like you are trying to access MongoDB over HTTP on the native driver port.
$ curl 10.0.0.208:27017 --verbose
* Rebuilt URL to: 10.0.0.208:27017/
* Hostname was NOT found in DNS cache
* Trying 10.0.0.208...
I tried to fix this by starting kube-proxy with the '--proxy-mode=iptables' option, but this does not make any sense to me, because the service IPs do not match any netmask from the kernel routing table:
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default host-172-17-13- 0.0.0.0 UG 0 0 0 eth0
10.1.0.0 * 255.255.0.0 U 0 0 0 flannel.1
10.1.58.0 * 255.255.255.0 U 0 0 0 docker0
169.254.169.254 host-172-17-13- 255.255.255.255 UGH 0 0 0 eth0
172.17.13.0 * 255.255.255.0 U 0 0 0 eth0
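Concretely, restarting kube-proxy with the iptables proxier looked roughly like this (a hypothetical invocation on the master, hence the local API address; the hyperkube image tag matches the K8S_VERSION below, and other flags may differ in your setup):

# re-run kube-proxy with the iptables proxier (sketch; adjust to your setup)
docker run -d --privileged --net=host \
  gcr.io/google_containers/hyperkube-amd64:v1.2.3 \
  /hyperkube proxy --master=http://127.0.0.1:8080 --proxy-mode=iptables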
I also started the master and the worker with the following environment variables set:
K8S_VERSION is set to: 1.2.3
ETCD_VERSION is set to: 2.2.1
FLANNEL_VERSION is set to: 0.5.5
FLANNEL_IFACE is set to: eth0
FLANNEL_IPMASQ is set to: false
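In other words, before running the start scripts the environment was prepared like this (the same values as printed above; FLANNEL_IFACE pins flannel to eth0, and FLANNEL_IPMASQ=false turns off flannel's own masquerading):

export K8S_VERSION=1.2.3
export ETCD_VERSION=2.2.1
export FLANNEL_VERSION=0.5.5
export FLANNEL_IFACE=eth0
export FLANNEL_IPMASQ=false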
The logs from kube-proxy are as follows:
Flag --resource-container has been deprecated, This feature will be removed in a later release.
I0511 15:21:07.898497 1 iptables.go:177] Could not connect to D-Bus system bus: dial unix /var/run/dbus/system_bus_socket: connect: no such file or directory
I0511 15:21:07.901068 1 server.go:163] Running in resource-only container "\"\""
E0511 15:21:07.905001 1 server.go:341] Can't get Node "kube-master", assuming iptables proxy, err: Get http://127.0.0.1:8080/api/v1/nodes/kube-master: dial tcp 127.0.0.1:8080: getsockopt: connection refused
I0511 15:21:07.907082 1 server.go:201] Using iptables Proxier.
I0511 15:21:07.907168 1 proxier.go:208] missing br-netfilter module or unset br-nf-call-iptables; proxy may not work as intended
I0511 15:21:07.907207 1 server.go:214] Tearing down userspace rules.
I0511 15:21:07.928371 1 conntrack.go:36] Setting nf_conntrack_max to 262144
I0511 15:21:07.928436 1 conntrack.go:41] Setting conntrack hashsize to 65536
E0511 15:21:07.928693 1 reflector.go:205] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0511 15:21:07.928754 1 reflector.go:205] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
I0511 15:21:07.933435 1 conntrack.go:46] Setting nf_conntrack_tcp_timeout_established to 86400
E0511 15:21:07.934083 1 event.go:207] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8080: getsockopt: connection refused' (may retry after sleeping)
E0511 15:21:08.929506 1 reflector.go:205] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0511 15:21:08.929517 1 reflector.go:205] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0511 15:21:09.930126 1 reflector.go:205] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0511 15:21:09.930421 1 reflector.go:205] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0511 15:21:10.930876 1 reflector.go:205] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
E0511 15:21:10.931430 1 reflector.go:205] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
I0511 15:21:11.937568 1 proxier.go:501] Setting endpoints for "default/kubernetes:https" to [172.17.13.43:6443]
I0511 15:21:11.938752 1 proxier.go:501] Setting endpoints for "default/mongo:" to [10.1.78.2:27017]
I0511 15:21:11.939017 1 proxier.go:501] Setting endpoints for "default/nginx:" to [10.1.58.2:80 10.1.58.3:80 10.1.78.3:80]
I0511 15:21:11.939396 1 proxier.go:646] Not syncing iptables until Services and Endpoints have been received from master
I0511 15:21:11.940217 1 proxier.go:426] Adding new service "default/nginx:" at 10.0.0.85:80/TCP
I0511 15:21:11.940376 1 proxier.go:426] Adding new service "default/kubernetes:https" at 10.0.0.1:443/TCP
I0511 15:21:11.940433 1 proxier.go:426] Adding new service "default/mongo:" at 10.0.0.208:27017/TCP
I0511 15:21:11.956264 1 proxier.go:1197] Opened local port "nodePort for default/mongo:" (:31278/tcp)
I0511 15:21:12.773826 1 proxier.go:501] Setting endpoints for "default/nginx:" to [10.1.78.3:80]
I0511 15:21:19.590803 1 proxier.go:501] Setting endpoints for "default/nginx:" to [10.1.58.2:80 10.1.78.3:80]
I0511 15:21:21.798255 1 proxier.go:501] Setting endpoints for "default/nginx:" to [10.1.58.2:80 10.1.58.4:80 10.1.78.3:80]
I0511 15:21:21.833021 1 proxier.go:501] Setting endpoints for "default/nginx:" to [10.1.58.4:80 10.1.78.3:80]
I0511 15:21:25.457793 1 proxier.go:501] Setting endpoints for "default/nginx:" to [10.1.58.4:80 10.1.58.5:80 10.1.78.3:80]
I0511 15:21:25.515718 1 proxier.go:501] Setting endpoints for "default/nginx:" to [10.1.58.4:80 10.1.58.5:80]
I0511 15:21:27.410673 1 proxier.go:501] Setting endpoints for "default/nginx:" to [10.1.58.4:80 10.1.58.5:80 10.1.78.4:80]
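One line that stands out above is the warning about the missing br-netfilter module / unset br-nf-call-iptables: with the iptables proxier, traffic crossing the docker0 bridge bypasses the DNAT rules unless that sysctl is enabled. A quick check on both nodes (standard kernel tooling; on older kernels this functionality is built into the bridge module rather than a separate br_netfilter module):

# is the module loaded, and is the bridge sysctl enabled?
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables

# if the value is 0, load the module and enable the sysctl
modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1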
What could be the problem?
Eddy
P.S.
The iptables-save command shows the following output:
# iptables-save
# Generated by iptables-save v1.4.21 on Wed May 11 17:06:57 2016
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-55NY7Y2VS7MU5SHC - [0:0]
:KUBE-SEP-INQ4JU67KGX5TI3V - [0:0]
:KUBE-SEP-J3MBDOP5WNYLP73O - [0:0]
:KUBE-SEP-MMF6BX4SIRXFC7EI - [0:0]
:KUBE-SEP-THKIU3KIDKH63VHE - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-4N57TFCL4MD7ZTDA - [0:0]
:KUBE-SVC-G2OJTDIWIJ7HQ7MY - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 10.1.58.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 10.1.78.0/24 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mongo:" -m tcp --dport 31278 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mongo:" -m tcp --dport 31278 -j KUBE-SVC-G2OJTDIWIJ7HQ7MY
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-55NY7Y2VS7MU5SHC -s 10.1.58.4/32 -m comment --comment "default/nginx:" -j KUBE-MARK-MASQ
-A KUBE-SEP-55NY7Y2VS7MU5SHC -p tcp -m comment --comment "default/nginx:" -m tcp -j DNAT --to-destination 10.1.58.4:80
-A KUBE-SEP-INQ4JU67KGX5TI3V -s 10.1.78.4/32 -m comment --comment "default/nginx:" -j KUBE-MARK-MASQ
-A KUBE-SEP-INQ4JU67KGX5TI3V -p tcp -m comment --comment "default/nginx:" -m tcp -j DNAT --to-destination 10.1.78.4:80
-A KUBE-SEP-J3MBDOP5WNYLP73O -s 10.1.78.2/32 -m comment --comment "default/mongo:" -j KUBE-MARK-MASQ
-A KUBE-SEP-J3MBDOP5WNYLP73O -p tcp -m comment --comment "default/mongo:" -m tcp -j DNAT --to-destination 10.1.78.2:27017
-A KUBE-SEP-MMF6BX4SIRXFC7EI -s 10.1.58.5/32 -m comment --comment "default/nginx:" -j KUBE-MARK-MASQ
-A KUBE-SEP-MMF6BX4SIRXFC7EI -p tcp -m comment --comment "default/nginx:" -m tcp -j DNAT --to-destination 10.1.58.5:80
-A KUBE-SEP-THKIU3KIDKH63VHE -s 172.17.13.43/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-THKIU3KIDKH63VHE -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 172.17.13.43:6443
-A KUBE-SERVICES -d 10.0.0.85/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-4N57TFCL4MD7ZTDA
-A KUBE-SERVICES -d 10.0.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.0.0.208/32 -p tcp -m comment --comment "default/mongo: cluster IP" -m tcp --dport 27017 -j KUBE-SVC-G2OJTDIWIJ7HQ7MY
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-4N57TFCL4MD7ZTDA -m comment --comment "default/nginx:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-55NY7Y2VS7MU5SHC
-A KUBE-SVC-4N57TFCL4MD7ZTDA -m comment --comment "default/nginx:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-MMF6BX4SIRXFC7EI
-A KUBE-SVC-4N57TFCL4MD7ZTDA -m comment --comment "default/nginx:" -j KUBE-SEP-INQ4JU67KGX5TI3V
-A KUBE-SVC-G2OJTDIWIJ7HQ7MY -m comment --comment "default/mongo:" -j KUBE-SEP-J3MBDOP5WNYLP73O
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-THKIU3KIDKH63VHE
COMMIT
# Completed on Wed May 11 17:06:57 2016
# Generated by iptables-save v1.4.21 on Wed May 11 17:06:57 2016
*filter
:INPUT ACCEPT [419:187638]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [403:197496]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-SERVICES - [0:0]
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A DOCKER-ISOLATION -j RETURN
COMMIT
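The dump above does contain the expected DNAT entry for the remote mongo endpoint (10.1.78.2:27017), so one way to narrow this down would be to watch the packet counters on the relevant chains while curling the service IP; if the KUBE-SEP counter increases, the DNAT fires and the problem is on the return path (chain names taken from the dump above):

# packet counters before the test
iptables -t nat -L KUBE-SVC-G2OJTDIWIJ7HQ7MY -n -v

# send one request to the mongo cluster IP, then re-check the counters
curl --max-time 5 10.0.0.208:27017
iptables -t nat -L KUBE-SEP-J3MBDOP5WNYLP73O -n -v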
Hi - did you ever find a solution for this problem? – pagid
The problem disappeared when using a later version of docker-multinode together with the default environment variables. –
I can see that you solved the problem. Consider posting your own answer - it would help other users viewing this thread. – Marilu