1

I am running a Hazelcast cluster in Docker swarm mode. The nodes establish a connection and the cluster forms:

Members [1] {                         
     Member [10.0.0.3]:5701 - b5fae3e3-0727-4bfd-8eb1-82706256ba2d this         
}                            

May 27, 2017 2:38:12 PM com.hazelcast.internal.management.ManagementCenterService        
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Hazelcast will connect to Hazelcast Management Center on address: 
http://10.0.0.3:8080/mancenter                    
May 27, 2017 2:38:12 PM com.hazelcast.internal.management.ManagementCenterService        
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Failed to pull tasks from management center      
May 27, 2017 2:38:12 PM com.hazelcast.internal.management.ManagementCenterService        
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Failed to connect to:http://10.0.0.3:8080/mancenter/collector.do 
May 27, 2017 2:38:12 PM com.hazelcast.core.LifecycleService             
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] [10.0.0.3]:5701 is STARTED           
May 27, 2017 2:38:12 PM com.hazelcast.internal.partition.impl.PartitionStateManager       
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Initializing cluster partition table arrangement...    
May 27, 2017 2:38:19 PM com.hazelcast.internal.cluster.ClusterService           
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8]                 

Members [2] {                         
     Member [10.0.0.3]:5701 - b5fae3e3-0727-4bfd-8eb1-82706256ba2d this         
     Member [10.0.0.4]:5701 - b3bd51d4-9366-45f0-bb66-78e67b13268c           
}                            

May 27, 2017 2:38:19 PM com.hazelcast.internal.partition.impl.MigrationManager        
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Re-partitioning cluster data... Migration queue size: 271   
May 27, 2017 2:38:21 PM com.hazelcast.internal.partition.InternalPartitionService        
but afterwards I keep getting this error:

WARNING: [10.0.0.3]:5701 [kpts-cluster] [3.8] Wrong bind request from [10.0.0.3]:5701! This node is not requested endpoint: [10.0.0.2]:5701 
May 27, 2017 2:45:06 PM com.hazelcast.nio.tcp.TcpIpConnection 
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Connection[id=18, /10.0.0.3:5701->/10.0.0.3:49575, endpoint=null, alive=false, type=MEMBER] closed. Reason: Wrong bind request from [10.0.0.3]:5701! This node is not requested endpoint: [10.0.0.2]:5701 
May 27, 2017 2:45:06 PM com.hazelcast.nio.tcp.TcpIpConnection 
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Connection[id=17, /10.0.0.2:49575->/10.0.0.2:5701, endpoint=[10.0.0.2]:5701, alive=false, type=MEMBER] closed. Reason: Connection closed by the other side 

I suspect this has something to do with the eth0 interface on each node. It has two addresses assigned: a "real" one and a "fake" one from the swarm manager... and for some reason the latter is being advertised as an endpoint...

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 
    inet 127.0.0.1/8 scope host lo 
     valid_lft forever preferred_lft forever 
    inet6 ::1/128 scope host 
     valid_lft forever preferred_lft forever 
82: eth0@…: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff 
    inet 10.0.0.3/24 scope global eth0 
     valid_lft forever preferred_lft forever 
    inet 10.0.0.2/32 scope global eth0 
     valid_lft forever preferred_lft forever 
    inet6 fe80::42:aff:fe00:3/64 scope link 
     valid_lft forever preferred_lft forever 
84: eth1@…: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff 
    inet 172.18.0.3/16 scope global eth1 
     valid_lft forever preferred_lft forever 
    inet6 fe80::42:acff:fe12:3/64 scope link 
     valid_lft forever preferred_lft forever 
86: eth2@…: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 02:42:0a:ff:00:07 brd ff:ff:ff:ff:ff:ff 
    inet 10.255.0.7/16 scope global eth2 
     valid_lft forever preferred_lft forever 
    inet 10.255.0.6/32 scope global eth2 
     valid_lft forever preferred_lft forever 
    inet6 fe80::42:aff:feff:7/64 scope link 
     valid_lft forever preferred_lft forever 

Here is the network configuration as read from one of the nodes:

[                         
    {                        
     "Name": "hazelcast-net",                 
     "Id": "ly1p50ykwjhf68k88220gxih6",               
     "Created": "2017-05-27T16:38:04.638580169+02:00",           
     "Scope": "swarm",                   
     "Driver": "overlay",                  
     "EnableIPv6": false,                  
     "IPAM": {                     
      "Driver": "default",                 
      "Options": null,                  
      "Config": [                    
       {                     
        "Subnet": "10.0.0.0/24",              
        "Gateway": "10.0.0.1"               
       }                     
      ]                      
     },                       
     "Internal": false,                   
     "Attachable": true,                   
     "Containers": {                    
      "0fa2bd8f8e8e931e1140e2d4bee1b43ff1f7bd5e3049d95e9176c63fa9f47e4f": {     
       "Name": "kpts.1zhprrumdjvenkl4cvsc7bt40.2ugiv46ubar8utnxc5hko1hdf",     
       "EndpointID": "0c5681aebbacd27672c300742077a460c07a081d113c2238f4c707def735ebec", 
       "MacAddress": "02:42:0a:00:00:03",             
       "IPv4Address": "10.0.0.3/24",              
       "IPv6Address": ""                 
      }                      
     },                       
     "Options": {                    
      "com.docker.network.driver.overlay.vxlanid_list": "4097"        
     },                       
     "Labels": {},                    
     "Peers": [                     
      {                      
       "Name": "c4-6f6cd87e898f",               
       "IP": "10.6.225.34"                 
      },                      
      {                      
       "Name": "c5-77d9f542efe8",               
       "IP": "10.6.225.35"                 
      }                      
     ]                       
    }                        
] 
+0

Hi, could you share your Docker/Compose files and startup scripts? –

Answers

0

You may find this earlier issue useful:

Docker networking - "This node is not requested endpoint" error #4537

Now to the more important point. You have a working connection somewhere, which is why the nodes are able to join; however, most likely (I don't have your hazelcast.xml) Hazelcast is binding to all interfaces, so you want to change the network binding so that it only binds to the desired address. We bind to * by default because we don't know which network you want to use.
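For example, here is a minimal sketch using the programmatic Config API, assuming the 10.0.0.0/24 overlay subnet shown in the question (the same restriction can also be expressed in the interfaces section of hazelcast.xml):

import com.hazelcast.config.Config; 
import com.hazelcast.config.InterfacesConfig; 
import com.hazelcast.core.Hazelcast; 
import com.hazelcast.core.HazelcastInstance; 

// Restrict Hazelcast to the overlay network instead of binding to all interfaces. 
// "10.0.0.*" matches the hazelcast-net subnet from the question; adjust it to your network. 
Config config = new Config(); 
InterfacesConfig interfaces = config.getNetworkConfig().getInterfaces(); 
interfaces.setEnabled(true); 
interfaces.addInterface("10.0.0.*"); 

// Optionally prevent binding to 0.0.0.0 altogether 
config.setProperty("hazelcast.socket.bind.any", "false"); 

HazelcastInstance hz = Hazelcast.newHazelcastInstance(config); 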

Hope this helps,

0

Try the Docker swarm discovery SPI. It provides a custom AddressPicker implementation for swarm that gets Hazelcast entirely out of this long-standing problem with interface selection and the "This node is not requested endpoint" errors. I really wish they would fix this.

https://github.com/bitsofinfo/hazelcast-docker-swarm-discovery-spi

// Imports for Hazelcast 3.x (package locations may differ in other versions) 
import com.hazelcast.config.ClasspathXmlConfig; 
import com.hazelcast.config.Config; 
import com.hazelcast.core.HazelcastInstance; 
import com.hazelcast.instance.AddressPicker; 
import com.hazelcast.instance.DefaultNodeContext; 
import com.hazelcast.instance.HazelcastInstanceFactory; 
import com.hazelcast.instance.Node; 
import com.hazelcast.instance.NodeContext; 
import com.hazelcast.logging.ILogger; 
import org.bitsofinfo.hazelcast.discovery.docker.swarm.SwarmAddressPicker; 

// Load your regular Hazelcast configuration from the classpath 
Config conf = new ClasspathXmlConfig("yourHzConfig.xml"); 

// Override the default address picker with the swarm-aware one 
NodeContext nodeContext = new DefaultNodeContext() { 
    @Override 
    public AddressPicker createAddressPicker(Node node) { 
        return new SwarmAddressPicker(new ILogger() { 
            // you provide the impl... or use the provided "SystemPrintLogger" 
        }); 
    } 
}; 

// Start the member with the custom NodeContext 
HazelcastInstance hazelcastInstance = HazelcastInstanceFactory 
        .newHazelcastInstance(conf, "myAppName", nodeContext); 
+0

Could you please point out that you are the author of the tool you are recommending? –

+1

I am the author of that tool! – bitsofinfo