2017-06-12

I am trying to run a microservice application on Kubernetes. I run rabbitmq, elasticsearch, and a eureka discovery service on Kubernetes, and besides those I have three microservice applications. When I run two of them, everything is fine; however, when I start the third one, all of them begin restarting over and over again for no apparent reason.

One of my configuration files:

apiVersion: v1
kind: Service
metadata:
  name: hrm
  labels:
    app: suite
spec:
  type: NodePort
  ports:
  - port: 8086
    nodePort: 30001
  selector:
    app: suite
    tier: hrm-core
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hrm
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: suite
        tier: hrm-core
    spec:
      containers:
      - image: privaterepo/hrm-core
        name: hrm
        ports:
        - containerPort: 8086
      imagePullSecrets:
      - name: regsecret
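
Note that the Deployment above declares no resource requests or limits, so the scheduler cannot account for the container's memory use. Since the problem below turned out to be memory pressure, here is a minimal sketch of how they could be added to the container spec (the 512Mi/1Gi figures are illustrative assumptions, not values from the question):

      containers:
      - image: privaterepo/hrm-core
        name: hrm
        ports:
        - containerPort: 8086
        resources:
          requests:
            memory: "512Mi"   # illustrative: what the scheduler reserves for the pod
          limits:
            memory: "1Gi"     # illustrative: the container is OOM-killed above this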

Result of kubectl describe pod for hrm:

State:          Running
  Started:      Mon, 12 Jun 2017 12:08:28 +0300
Last State:     Terminated
  Reason:       Error
  Exit Code:    137
  Started:      Mon, 01 Jan 0001 00:00:00 +0000
  Finished:     Mon, 12 Jun 2017 12:07:05 +0300
Ready:          True
Restart Count:  5
    18m  18m  1 kubelet, minikube    Warning  FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hrm" with CrashLoopBackOff: "Back-off 10s restarting failed container=hrm pod=hrm-3288407936-cwvgz_default(915fb55c-4f4a-11e7-9240-080027ccf1c3)" 
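
Exit code 137 together with CrashLoopBackOff usually calls for two checks: the logs of the previous (killed) container instance, and the pod's event trail. A sketch, using the pod name from the listing below:

kubectl logs hrm-3288407936-cwvgz --previous   # logs of the last terminated container
kubectl describe pod hrm-3288407936-cwvgz      # events, restart count, last state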

kubectl get pods:

NAME      READY  STATUS RESTARTS AGE 
discserv-189146465-s599x 1/1  Running 0   2d 
esearch-3913228203-9sm72 1/1  Running 0   2d 
hrm-3288407936-cwvgz  1/1  Running 6   46m 
parabot-1262887100-6098j 1/1  Running 9   2d 
rabbitmq-279796448-9qls3 1/1  Running 0   2d 
suite-ui-1725964700-clvbd 1/1  Running 3   2d 

kubectl version:

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} 
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"dirty", BuildDate:"2017-04-07T20:43:50Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"} 

minikube version:

minikube version: v0.18.0 

When I look at the pod logs there are no errors; the application appears to start without any problem. What could be the issue here?

Edit: output of kubectl get events:

19m  19m   1   discserv-189146465-lk3sm Pod          Normal SandboxChanged   kubelet, minikube  Pod sandbox changed, it will be killed and re-created. 
19m  19m   1   discserv-189146465-lk3sm Pod   spec.containers{discserv} Normal Pulling     kubelet, minikube  pulling image "private repo" 
19m  19m   1   discserv-189146465-lk3sm Pod   spec.containers{discserv} Normal Pulled     kubelet, minikube  Successfully pulled image "private repo" 
19m  19m   1   discserv-189146465-lk3sm Pod   spec.containers{discserv} Normal Created     kubelet, minikube  Created container with id 1607af1a7d217a6c9c91c1061f6b2148dd830a525b4fb02e9c6d71e8932c9f67 
19m  19m   1   discserv-189146465-lk3sm Pod   spec.containers{discserv} Normal Started     kubelet, minikube  Started container with id 1607af1a7d217a6c9c91c1061f6b2148dd830a525b4fb02e9c6d71e8932c9f67 
19m  19m   1   esearch-3913228203-6l3t7 Pod          Normal SandboxChanged   kubelet, minikube  Pod sandbox changed, it will be killed and re-created. 
19m  19m   1   esearch-3913228203-6l3t7 Pod   spec.containers{esearch} Normal Pulled     kubelet, minikube  Container image "elasticsearch:2.4" already present on machine 
19m  19m   1   esearch-3913228203-6l3t7 Pod   spec.containers{esearch} Normal Created     kubelet, minikube  Created container with id db30f7190fec4643b0ee7f9e211fa92572ff24a7d934e312a97e0a08bb1ccd60 
19m  19m   1   esearch-3913228203-6l3t7 Pod   spec.containers{esearch} Normal Started     kubelet, minikube  Started container with id db30f7190fec4643b0ee7f9e211fa92572ff24a7d934e312a97e0a08bb1ccd60 
18m  18m   1   hrm-3288407936-d2vhh  Pod          Normal Scheduled     default-scheduler  Successfully assigned hrm-3288407936-d2vhh to minikube 
18m  18m   1   hrm-3288407936-d2vhh  Pod   spec.containers{hrm}  Normal Pulling     kubelet, minikube  pulling image "private repo" 
18m  18m   1   hrm-3288407936-d2vhh  Pod   spec.containers{hrm}  Normal Pulled     kubelet, minikube  Successfully pulled image "private repo" 
18m  18m   1   hrm-3288407936-d2vhh  Pod   spec.containers{hrm}  Normal Created     kubelet, minikube  Created container with id 34d1f35fc68ed64e5415e9339405847d496e48ad60eb7b08e864ee0f5b87516e 
18m  18m   1   hrm-3288407936-d2vhh  Pod   spec.containers{hrm}  Normal Started     kubelet, minikube  Started container with id 34d1f35fc68ed64e5415e9339405847d496e48ad60eb7b08e864ee0f5b87516e 
18m  18m   1   hrm-3288407936    ReplicaSet        Normal SuccessfulCreate   replicaset-controller Created pod: hrm-3288407936-d2vhh 
18m  18m   1   hrm       Deployment        Normal ScalingReplicaSet   deployment-controller Scaled up replica set hrm-3288407936 to 1 
19m  19m   1   minikube     Node          Normal RegisteredNode   controllermanager  Node minikube event: Registered Node minikube in NodeController 
19m  19m   1   minikube     Node          Normal Starting     kubelet, minikube  Starting kubelet. 
19m  19m   1   minikube     Node          Warning ImageGCFailed    kubelet, minikube  unable to find data for container/
19m  19m   1   minikube     Node          Normal NodeAllocatableEnforced kubelet, minikube  Updated Node Allocatable limit across pods 
19m  19m   1   minikube     Node          Normal NodeHasSufficientDisk  kubelet, minikube  Node minikube status is now: NodeHasSufficientDisk 
19m  19m   1   minikube     Node          Normal NodeHasSufficientMemory kubelet, minikube  Node minikube status is now: NodeHasSufficientMemory 
19m  19m   1   minikube     Node          Normal NodeHasNoDiskPressure  kubelet, minikube  Node minikube status is now: NodeHasNoDiskPressure 
19m  19m   1   minikube     Node          Warning Rebooted     kubelet, minikube  Node minikube has been rebooted, boot id: f66e28f9-62b3-4066-9e18-33b152fa1300 
19m  19m   1   minikube     Node          Normal NodeNotReady    kubelet, minikube  Node minikube status is now: NodeNotReady 
19m  19m   1   minikube     Node          Normal Starting     kube-proxy, minikube Starting kube-proxy. 
19m  19m   1   minikube     Node          Normal NodeReady     kubelet, minikube  Node minikube status is now: NodeReady 
8m   8m   1   minikube     Node          Warning SystemOOM     kubelet, minikube  System OOM encountered 
18m  18m   1   parabot-1262887100-r84kf Pod          Normal Scheduled     default-scheduler  Successfully assigned parabot-1262887100-r84kf to minikube 
8m   18m   2   parabot-1262887100-r84kf Pod   spec.containers{parabot} Normal Pulling     kubelet, minikube  pulling image "private repo" 
8m   18m   2   parabot-1262887100-r84kf Pod   spec.containers{parabot} Normal Pulled     kubelet, minikube  Successfully pulled image "private repo" 
18m  18m   1   parabot-1262887100-r84kf Pod   spec.containers{parabot} Normal Created     kubelet, minikube  Created container with id ed8b5c19a2ad3729015f20707b6b4d4132f86bd8a3f8db1d8d79381200c63045 
18m  18m   1   parabot-1262887100-r84kf Pod   spec.containers{parabot} Normal Started     kubelet, minikube  Started container with id ed8b5c19a2ad3729015f20707b6b4d4132f86bd8a3f8db1d8d79381200c63045 
8m   8m   1   parabot-1262887100-r84kf Pod   spec.containers{parabot} Normal Created     kubelet, minikube  Created container with id 664931f24e482310e1f66dcb230c9a2a4d11aae8d4b3866bcbd084b19d3d7b2b 
8m   8m   1   parabot-1262887100-r84kf Pod   spec.containers{parabot} Normal Started     kubelet, minikube  Started container with id 664931f24e482310e1f66dcb230c9a2a4d11aae8d4b3866bcbd084b19d3d7b2b 
18m  18m   1   parabot-1262887100   ReplicaSet        Normal SuccessfulCreate   replicaset-controller Created pod: parabot-1262887100-r84kf 
18m  18m   1   parabot      Deployment        Normal ScalingReplicaSet   deployment-controller Scaled up replica set parabot-1262887100 to 1 
19m  19m   1   rabbitmq-279796448-pcqqh Pod          Normal SandboxChanged   kubelet, minikube  Pod sandbox changed, it will be killed and re-created. 
19m  19m   1   rabbitmq-279796448-pcqqh Pod   spec.containers{rabbitmq} Normal Pulling     kubelet, minikube  pulling image "rabbitmq" 
19m  19m   1   rabbitmq-279796448-pcqqh Pod   spec.containers{rabbitmq} Normal Pulled     kubelet, minikube  Successfully pulled image "rabbitmq" 
19m  19m   1   rabbitmq-279796448-pcqqh Pod   spec.containers{rabbitmq} Normal Created     kubelet, minikube  Created container with id 155e900afaa00952e4bb9a7a8b282d2c26004d187aa727201bab596465f0ea50 
19m  19m   1   rabbitmq-279796448-pcqqh Pod   spec.containers{rabbitmq} Normal Started     kubelet, minikube  Started container with id 155e900afaa00952e4bb9a7a8b282d2c26004d187aa727201bab596465f0ea50 
19m  19m   1   suite-ui-1725964700-ssshn Pod          Normal SandboxChanged   kubelet, minikube  Pod sandbox changed, it will be killed and re-created. 
19m  19m   1   suite-ui-1725964700-ssshn Pod   spec.containers{suite-ui} Normal Pulling     kubelet, minikube  pulling image "private repo" 
19m  19m   1   suite-ui-1725964700-ssshn Pod   spec.containers{suite-ui} Normal Pulled     kubelet, minikube  Successfully pulled image "private repo" 
19m  19m   1   suite-ui-1725964700-ssshn Pod   spec.containers{suite-ui} Normal Created     kubelet, minikube  Created container with id bcaa7d96e3b0e574cd48641a633eb36c5d938f5fad41d44db425dd02da63ba3a 
19m  19m   1   suite-ui-1725964700-ssshn Pod   spec.containers{suite-ui} Normal Started     kubelet, minikube  Started container with id bcaa7d96e3b0e574cd48641a633eb36c5d938f5fad41d44db425dd02da63ba3a 
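
Note the "System OOM encountered" warning from the kubelet in the listing above; that single line already points at the node running out of memory. Assuming a standard kubectl setup, warnings like it can be filtered out of the noise with:

kubectl get events | grep -i -E 'oom|warning'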
+0

Just an optimistic guess: exit code 137 corresponds to termination signal 9 (137 minus 128), so there may not be enough memory on the node and the process is probably being killed by the OS. Any chance you could increase the number of nodes, or reduce the number of other services, to see whether it helps? – hurturk
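
The arithmetic in this comment can be verified in any shell: a process killed with signal 9 (SIGKILL) exits with status 128 + 9 = 137.

sh -c 'kill -KILL $$'; echo $?   # prints 137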

+0

I was thinking the same thing, but when I describe the node there seems to be enough memory. It reports: OutOfDisk False, MemoryPressure False, DiskPressure False, Ready True. Now I suspect there may be a problem with the discovery service. –
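
The node conditions can remain False even after a short-lived OOM kill has already happened, so it helps to compare the node's allocatable memory with what the pods are actually consuming. A quick check:

kubectl describe node minikube   # Capacity, Allocatable, and the conditions quoted above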

+0

Does the order you start them in matter? For example, is it always hrm that fails to start, or is it always the third one regardless of order? As the other comments suggest, that would point to a resource problem. I also notice the server is 1.6.0 while the client is 1.6.4; is there a reason the server is on the very first 1.6 release? –

Answer

1

Check kubectl logs to see whether there are any obvious errors. In this case, as suspected, it looks like an out-of-resources problem (or a service with a resource leak). If possible, try increasing the available resources to see whether that helps, for example by giving the minikube VM more memory, as sketched below.
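
Following up on the answer: minikube's memory is fixed when the VM is created, so increasing it means recreating the VM. A sketch, where 4096 MB is an illustrative choice rather than a recommendation:

minikube stop
minikube delete                  # the memory setting only applies to a newly created VM
minikube start --memory 4096     # in megabytes; the default at the time was 2048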

+0

I'll try starting minikube with more memory and update the question afterwards. Thanks! –

+0

Hi @AshishVyas, it was indeed a memory issue. Everything seems to be running without problems now. Thank you very much. –

+0

Glad I could help. –
