Infinispan / JGroups firewall ports: which ports need to be opened in the firewall when JGroups is used with components such as Infinispan?
My JGroups configuration file is:
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.4.xsd">
<UDP
mcast_addr="${test.jgroups.udp.mcast_addr:239.255.0.1}"
mcast_port="${test.jgroups.udp.mcast_port:46655}"
bind_addr="${test.jgroups.udp.bind_addr:match-interface:eth0}"
bind_port="${test.jgroups.bind.port:46656}"
port_range="${test.jgroups.bind.port.range:20}"
tos="8"
ucast_recv_buf_size="25M"
ucast_send_buf_size="1M"
mcast_recv_buf_size="25M"
mcast_send_buf_size="1M"
loopback="true"
max_bundle_size="64k"
ip_ttl="${test.jgroups.udp.ip_ttl:2}"
enable_diagnostics="false"
bundler_type="old"
thread_naming_pattern="pl"
thread_pool.enabled="true"
thread_pool.min_threads="3"
thread_pool.max_threads="10"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="true"
thread_pool.queue_max_size="10000"
thread_pool.rejection_policy="Discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="2"
oob_thread_pool.max_threads="4"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="Discard"
internal_thread_pool.enabled="true"
internal_thread_pool.min_threads="1"
internal_thread_pool.max_threads="4"
internal_thread_pool.keep_alive_time="60000"
internal_thread_pool.queue_enabled="true"
internal_thread_pool.queue_max_size="10000"
internal_thread_pool.rejection_policy="Discard"
/>
<PING timeout="3000" num_initial_members="3"/>
<MERGE2 max_interval="30000" min_interval="10000"/>
<FD_SOCK bind_addr="${test.jgroups.udp.bind_addr:match-interface:eth0}" start_port="${test.jgroups.fd.sock.start.port:9780}" port_range="${test.jgroups.fd.sock.port.range:10}" />
<FD_ALL timeout="15000" interval="3000" />
<VERIFY_SUSPECT timeout="1500" bind_addr="${test.jgroups.udp.bind_addr:match-interface:eth0}"/>
<pbcast.NAKACK2
xmit_interval="1000"
xmit_table_num_rows="100"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100"/>
<UNICAST3
xmit_interval="500"
xmit_table_num_rows="20"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100"
conn_expiry_timeout="0"/>
<pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1m"/>
<pbcast.GMS print_local_addr="false" join_timeout="3000" view_bundling="true"/>
<tom.TOA/> <!-- the TOA is only needed for total order transactions-->
<UFC max_credits="2m" min_threshold="0.40"/>
<MFC max_credits="2m" min_threshold="0.40"/>
<FRAG2 frag_size="30k" />
<RSVP timeout="60000" resend_interval="500" ack_on_delivery="false" />
</config>
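For reference, this is how the ports fall out of the configuration above, assuming the default values of the ${...} properties are used:
- UDP 46655 for the multicast group 239.255.0.1 (mcast_port)
- UDP 46656-46676 for unicast traffic (bind_port 46656 plus port_range 20)
- TCP 9780-9790 for FD_SOCK (start_port 9780 plus port_range 10)
A minimal sketch of iptables commands that would open those ranges (the position of these rules relative to any final REJECT/DROP rule is an assumption, not taken from my actual ruleset):
iptables -A INPUT -p udp -m multiport --dports 46655:46676 -m state --state NEW -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 9780:9790 -m state --state NEW -j ACCEPT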
Currently I have the following ports open in the firewall (chain INPUT, policy ACCEPT):
ACCEPT udp -- anywhere anywhere multiport dports 46655:46676 /* 205 ipv4 */ state NEW
ACCEPT tcp -- anywhere anywhere multiport dports 9780:9790 /* 204 ipv4 */ state NEW
But after the Infinispan embedded cache has been running for a few minutes, I still get:
2014-11-05 18:21:38.025 DEBUG org.jgroups.protocols.FD_ALL - haven't received a heartbeat from <hostname>-47289 for 15960 ms, adding it to suspect list
It works fine if I turn off iptables. Thanks in advance.
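(Debugging note: the suspect message above comes from FD_ALL, whose heartbeats travel over the UDP transport, so the multicast path is the first thing to check. A sketch of rules that could be added temporarily to see what iptables is actually dropping; the IGMP rule and the limit/prefix values are assumptions, and if the chain already ends in a REJECT/DROP rule the LOG rule must be inserted before it with -I INPUT <position> instead of being appended:
iptables -I INPUT 1 -p igmp -j ACCEPT
iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "jgroups-drop: "
Then watch the kernel log, e.g. dmesg or /var/log/messages, while the cluster is running.)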
Thanks for your reply. I opened the same ports specified above and have kept them open, but it still does not work. In my environment I have two blades in the same chassis with NIC bonding configured (mode 1). When the eth0 interface is active on both blades, everything works fine. When eth0 is active on one blade and eth1 is active on the other, the cluster gets created, but after a while it starts throwing the "haven't received a heartbeat" exception, and no process leaves the cluster even after a very long time (hours). – 2014-11-06 16:58:52
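(One detail that may matter here: the UDP transport, FD_SOCK and VERIFY_SUSPECT in the configuration above are all bound with match-interface:eth0, so when eth1 is the active slave of the bond, JGroups may be bound to the wrong interface. A sketch of binding to the bonding interface instead; bond0 is an assumption for the name of the bonded interface on these blades:
<UDP bind_addr="${test.jgroups.udp.bind_addr:match-interface:bond0}" ... />
<FD_SOCK bind_addr="${test.jgroups.udp.bind_addr:match-interface:bond0}" ... />
<VERIFY_SUSPECT bind_addr="${test.jgroups.udp.bind_addr:match-interface:bond0}" ... />
Since these values are already read from system properties, the same effect can be had without editing the file by passing -Dtest.jgroups.udp.bind_addr=match-interface:bond0 on the JVM command line.)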