I'm playing around with Kubernetes and have set up my environment with four deployments, but my load balancer service will not redirect to the desired pod:

  • hello: a basic "Hello World" service
  • auth: provides authentication and encryption
  • frontend: an nginx reverse proxy that represents a single point of entry from the outside and routes to the appropriate pods internally
  • nodehello: a basic "Hello World" service written in Node.js (this one is my contribution)

For the hello, auth and nodehello deployments I have set up an internal service each.

For the frontend deployment I have set up a load balancer service that is exposed to the outside world. It uses a ConfigMap nginx-frontend-conf to redirect to the appropriate pods and has the following contents:

upstream hello {
    server hello.default.svc.cluster.local;
}
upstream auth {
    server auth.default.svc.cluster.local;
}
upstream nodehello {
    server nodehello.default.svc.cluster.local;
}
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/tls/cert.pem;
    ssl_certificate_key /etc/tls/key.pem;
    location / {
        proxy_pass http://hello;
    }
    location /login {
        proxy_pass http://auth;
    }
    location /nodehello {
        proxy_pass http://nodehello;
    }
}

When calling curl -k https://<frontend-external-ip> I get routed to an available hello pod, which is the expected behaviour for the frontend endpoint. When calling https://<frontend-external-ip>/nodehello, however, I am not routed to a nodehello pod but to a hello pod again.
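As a side note (not part of the original question), the <frontend-external-ip> placeholder can be read off the frontend service once the cloud provider has provisioned the load balancer, for example:

# EXTERNAL-IP column is populated once the load balancer is ready
kubectl get service frontend
# Or extract just the address (assuming the provider reports an IP rather than a hostname)
kubectl get service frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}'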

I suspect the upstream nodehello configuration is the failing part. I am not sure how service discovery works here, i.e. how the DNS name nodehello.default.svc.cluster.local gets exposed. I would appreciate an explanation of how it works and of what I am doing wrong.
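One way to sanity-check the service-discovery side (a sketch, not from the original post; the pod name below is a placeholder) is to resolve the service name from inside the frontend pod, where kube-dns answers the query. getent is assumed to be available in the Debian-based nginx image:

# Find the frontend pod, then resolve the nodehello service name from inside it
kubectl get pods -l app=frontend
kubectl exec frontend-1234567890-abcde -- getent hosts nodehello.default.svc.cluster.local

If the name resolves to the ClusterIP of the nodehello service, DNS is not the problem, and the nginx configuration actually loaded in the pod becomes the prime suspect.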

YAML files used

deployments/hello.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        track: stable
    spec:
      containers:
        - name: hello
          image: "udacity/example-hello:1.0.0"
          ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
          resources:
            limits:
              cpu: 0.2
              memory: "10Mi"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 15
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /readiness
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 1

deployments/auth.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
        track: stable
    spec:
      containers:
        - name: auth
          image: "udacity/example-auth:1.0.0"
          ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
          resources:
            limits:
              cpu: 0.2
              memory: "10Mi"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 15
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /readiness
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 1

deployments/frontend.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        track: stable
    spec:
      containers:
        - name: nginx
          image: "nginx:1.9.14"
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx","-s","quit"]
          volumeMounts:
            - name: "nginx-frontend-conf"
              mountPath: "/etc/nginx/conf.d"
            - name: "tls-certs"
              mountPath: "/etc/tls"
      volumes:
        - name: "tls-certs"
          secret:
            secretName: "tls-certs"
        - name: "nginx-frontend-conf"
          configMap:
            name: "nginx-frontend-conf"
            items:
              - key: "frontend.conf"
                path: "frontend.conf"

deployments/nodehello.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodehello
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nodehello
        track: stable
    spec:
      containers:
        - name: nodehello
          image: "thezebra/nodehello:0.0.2"
          ports:
            - name: http
              containerPort: 80
          resources:
            limits:
              cpu: 0.2
              memory: "10Mi"

services/hello.yaml

kind: Service
apiVersion: v1
metadata:
  name: "hello"
spec:
  selector:
    app: "hello"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80

services/auth.yaml

kind: Service
apiVersion: v1
metadata:
  name: "auth"
spec:
  selector:
    app: "auth"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80

services/frontend.yaml

kind: Service
apiVersion: v1
metadata:
  name: "frontend"
spec:
  selector:
    app: "frontend"
  ports:
    - protocol: "TCP"
      port: 443
      targetPort: 443
  type: LoadBalancer

services/nodehello.yaml

kind: Service
apiVersion: v1
metadata:
  name: "nodehello"
spec:
  selector:
    app: "nodehello"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
Please provide the yaml files used. –

@FarhadFarahi yaml files added – Ronin

Answer


This works perfectly :-)

$ curl -s http://frontend/ 
{"message":"Hello"} 
$ curl -s http://frontend/login 
authorization failed 
$ curl -s http://frontend/nodehello 
Hello World! 

I suspect you may have updated nginx-frontend-conf when you added /nodehello but did not restart nginx. Pods won't pick up a changed ConfigMap automatically. Try:

kubectl delete pod -l app=frontend 

Until versioned ConfigMaps happen, there isn't a nicer solution.
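A quick way to confirm this (a sketch added here, not part of the original answer; the pod name is a placeholder) is to compare the ConfigMap stored in the cluster with what the running nginx pod actually has mounted:

# The ConfigMap as the API server sees it
kubectl get configmap nginx-frontend-conf -o yaml
# The file the nginx pod is actually using (hypothetical pod name)
kubectl exec frontend-1234567890-abcde -- cat /etc/nginx/conf.d/frontend.conf

If the two differ, or nginx simply never re-read the file, deleting the pod as shown above gives you a fresh mount and a fresh nginx start.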