consul docker - advertise flag ignored

Hi, I have configured a cluster with two nodes (two VMs in VirtualBox). The cluster starts up correctly, but the advertise flag seems to be ignored by Consul:

  • VM1 (APP): IP 192.168.20.10
  • VM2 (WEB): IP 192.168.20.11

docker-compose VM1 (APP)

version: '2' 
services: 
    appconsul: 
     build: consul/ 
     ports: 
      - 192.168.20.10:8300:8300 
      - 192.168.20.10:8301:8301 
      - 192.168.20.10:8301:8301/udp 
      - 192.168.20.10:8302:8302 
      - 192.168.20.10:8302:8302/udp 
      - 192.168.20.10:8400:8400 
      - 192.168.20.10:8500:8500 
      - 172.32.0.1:53:53/udp 
     hostname: node_1 
     command: -server -advertise 192.168.20.10 -bootstrap-expect 2 -ui-dir /ui 
     networks: 
      net-app: 

    appregistrator: 
     build: registrator/ 
     hostname: app 
     command: consul://192.168.20.10:8500 
     volumes: 
      - /var/run/docker.sock:/tmp/docker.sock 
     depends_on: 
      - appconsul 
     networks: 
      net-app: 
networks: 
    net-app: 
     driver: bridge 
     ipam: 
      config: 
       - subnet: 172.32.0.0/24 
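
To check whether the advertise address was actually applied, the cluster membership and node catalog can be inspected from the VM. A minimal sketch, assuming the standard Consul HTTP API on port 8500, that the consul binary is available inside the container, and the default compose container name (dockerdata_appconsul_1 is taken from the log below; adjust if yours differs):

    # list cluster members and the address each agent advertises
    docker exec dockerdata_appconsul_1 consul members

    # query the catalog over HTTP to see the registered node addresses
    curl -s http://192.168.20.10:8500/v1/catalog/nodes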

docker-compose VM2 (WEB)

version: '2' 
services: 
    webconsul: 
     build: consul/ 
     ports: 
      - 192.168.20.11:8300:8300 
      - 192.168.20.11:8301:8301 
      - 192.168.20.11:8301:8301/udp 
      - 192.168.20.11:8302:8302 
      - 192.168.20.11:8302:8302/udp 
      - 192.168.20.11:8400:8400 
      - 192.168.20.11:8500:8500 
      - 172.33.0.1:53:53/udp 
     hostname: node_2 
     command: -server -advertise 192.168.20.11 -join 192.168.20.10 
     networks: 
      net-web: 

    webregistrator: 
     build: registrator/ 
     hostname: web 
     command: consul://192.168.20.11:8500 
     volumes: 
      - /var/run/docker.sock:/tmp/docker.sock 
     depends_on: 
      - webconsul 
     networks: 
      net-web: 
networks: 
    net-web: 
     driver: bridge 
     ipam: 
      config: 
       - subnet: 172.33.0.0/24 

After startup there are no errors about the advertise flag, but the services were registered with the private IPs of the internal networks instead of the advertised IPs (192.168.20.10 and 192.168.20.11). Any ideas?

Attached is the log of node_1, but it is the same on node_2:

appconsul_1  | ==> WARNING: Expect Mode enabled, expecting 2 servers 
appconsul_1  | ==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1 
appconsul_1  | ==> Starting raft data migration... 
appconsul_1  | ==> Starting Consul agent... 
appconsul_1  | ==> Starting Consul agent RPC... 
appconsul_1  | ==> Consul agent running! 
appconsul_1  |   Node name: 'node_1' 
appconsul_1  |   Datacenter: 'dc1' 
appconsul_1  |    Server: true (bootstrap: false) 
appconsul_1  |  Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400) 
appconsul_1  |  Cluster Addr: 192.168.20.10 (LAN: 8301, WAN: 8302) 
appconsul_1  |  Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false 
appconsul_1  |    Atlas: <disabled> 
appconsul_1  | 
appconsul_1  | ==> Log data will now stream in as it occurs: 
appconsul_1  | 
appconsul_1  |  2017/06/13 14:57:24 [INFO] raft: Node at 192.168.20.10:8300 [Follower] entering Follower state 
appconsul_1  |  2017/06/13 14:57:24 [INFO] serf: EventMemberJoin: node_1 192.168.20.10 
appconsul_1  |  2017/06/13 14:57:24 [INFO] serf: EventMemberJoin: node_1.dc1 192.168.20.10 
appconsul_1  |  2017/06/13 14:57:24 [INFO] consul: adding server node_1 (Addr: 192.168.20.10:8300) (DC: dc1) 
appconsul_1  |  2017/06/13 14:57:24 [INFO] consul: adding server node_1.dc1 (Addr: 192.168.20.10:8300) (DC: dc1) 
appconsul_1  |  2017/06/13 14:57:25 [ERR] agent: failed to sync remote state: No cluster leader 
appconsul_1  |  2017/06/13 14:57:25 [ERR] agent: failed to sync changes: No cluster leader 
appconsul_1  |  2017/06/13 14:57:26 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election. 
appconsul_1  |  2017/06/13 14:57:48 [ERR] agent: failed to sync remote state: No cluster leader 
appconsul_1  |  2017/06/13 14:58:13 [ERR] agent: failed to sync remote state: No cluster leader 
appconsul_1  |  2017/06/13 14:58:22 [INFO] serf: EventMemberJoin: node_2 192.168.20.11 
appconsul_1  |  2017/06/13 14:58:22 [INFO] consul: adding server node_2 (Addr: 192.168.20.11:8300) (DC: dc1) 
appconsul_1  |  2017/06/13 14:58:22 [INFO] consul: Attempting bootstrap with nodes: [192.168.20.10:8300 192.168.20.11:8300] 
appconsul_1  |  2017/06/13 14:58:23 [WARN] raft: Heartbeat timeout reached, starting election 
appconsul_1  |  2017/06/13 14:58:23 [INFO] raft: Node at 192.168.20.10:8300 [Candidate] entering Candidate state 
appconsul_1  |  2017/06/13 14:58:23 [WARN] raft: Remote peer 192.168.20.11:8300 does not have local node 192.168.20.10:8300 as a peer 
appconsul_1  |  2017/06/13 14:58:23 [INFO] raft: Election won. Tally: 2 
appconsul_1  |  2017/06/13 14:58:23 [INFO] raft: Node at 192.168.20.10:8300 [Leader] entering Leader state 
appconsul_1  |  2017/06/13 14:58:23 [INFO] consul: cluster leadership acquired 
appconsul_1  |  2017/06/13 14:58:23 [INFO] consul: New leader elected: node_1 
appconsul_1  |  2017/06/13 14:58:23 [INFO] raft: pipelining replication to peer 192.168.20.11:8300 
appconsul_1  |  2017/06/13 14:58:23 [INFO] consul: member 'node_1' joined, marking health alive 
appconsul_1  |  2017/06/13 14:58:23 [INFO] consul: member 'node_2' joined, marking health alive 
appconsul_1  |  2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_solr_1:8983' 
appconsul_1  |  2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8302' 
appconsul_1  |  2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8302:udp' 
appconsul_1  |  2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8301' 
appconsul_1  |  2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8500' 
appconsul_1  |  2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8300' 
appconsul_1  |  2017/06/13 14:58:26 [INFO] agent: Synced service 'consul' 
appconsul_1  |  2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_mysql_1:3306' 
appconsul_1  |  2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8400' 
appconsul_1  |  2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:53:udp' 
appconsul_1  |  2017/06/13 14:58:26 [INFO] agent: Synced service 'app:dockerdata_appconsul_1:8301:udp' 
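
To confirm which address each service was registered with, the catalog can be queried per service. A minimal sketch; the service name mysql is an assumption derived from the ID app:dockerdata_mysql_1:3306 in the log above (Registrator normally derives the service name from the image name):

    # list every registered service
    curl -s http://192.168.20.10:8500/v1/catalog/services

    # inspect one service; the ServiceAddress/Address fields show the IP that was announced
    curl -s http://192.168.20.10:8500/v1/catalog/service/mysql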

Thanks for any reply.

UPDATE:

I tried removing the networks section from the compose files, but had the same problem. I solved it by switching to the compose v1 format; this configuration works:

compose VM1 (APP)

appconsul: 
    build: consul/ 
    ports: 
     - 192.168.20.10:8300:8300 
     - 192.168.20.10:8301:8301 
     - 192.168.20.10:8301:8301/udp 
     - 192.168.20.10:8302:8302 
     - 192.168.20.10:8302:8302/udp 
     - 192.168.20.10:8400:8400 
     - 192.168.20.10:8500:8500 
     - 172.32.0.1:53:53/udp 
    hostname: node_1 
    command: -server -advertise 192.168.20.10 -bootstrap-expect 2 -ui-dir /ui 

appregistrator: 
    build: registrator/ 
    hostname: app 
    command: consul://192.168.20.10:8500 
    volumes: 
     - /var/run/docker.sock:/tmp/docker.sock 
    links: 
     - appconsul 

compose VM2 (WEB)

webconsul: 
    build: consul/ 
    ports: 
     - 192.168.20.11:8300:8300 
     - 192.168.20.11:8301:8301 
     - 192.168.20.11:8301:8301/udp 
     - 192.168.20.11:8302:8302 
     - 192.168.20.11:8302:8302/udp 
     - 192.168.20.11:8400:8400 
     - 192.168.20.11:8500:8500 
     - 172.33.0.1:53:53/udp 
    hostname: node_2 
    command: -server -advertise 192.168.20.11 -join 192.168.20.10 

webregistrator: 
    build: registrator/ 
    hostname: web 
    command: consul://192.168.20.11:8500 
    volumes: 
     - /var/run/docker.sock:/tmp/docker.sock 
    links: 
     - webconsul 

Answer


The problem was the version of the compose file. v2 and v3 both show the same problem; only the compose file v1 format works.
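
For anyone who wants to stay on the v2/v3 format: a possible workaround, not verified here, is to force the address Registrator announces with its -ip flag, so registration no longer depends on which network the container lands on. A sketch using the gliderlabs/registrator image (adapt the IP per VM):

    docker run -d --name registrator \
      -v /var/run/docker.sock:/tmp/docker.sock \
      gliderlabs/registrator:latest \
      -ip 192.168.20.10 consul://192.168.20.10:8500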


What specific part of the compose file v2/3 was the problem? Can you include a working compose file? – programmerq


I have updated the answer – hellb0y77