Client invocation blocks when the CORBA server process is stopped. The problem here is that a CORBA invocation never returns, and no exception is raised when the CORBA server stops. In my case there is a single multithreaded CORBA proxy (Windows) monitoring one backend CORBA server. The backend server's IDL is:
void run();
void echo();
The proxy checks the backend's health with an echo() heartbeat call. If echo() raises a CORBA exception, the proxy classifies the backend as DOWN. This works most of the time, but the outcome depends on how the backend goes down:
1) If I power off the backend machine, echo() throws an exception immediately.
2) If I stop the backend CORBA process, the echo() call hangs and never returns; no exception is raised on the client side, and the client cannot make any further progress.
The run() call is not involved in either case.
A trace with 'ORBDebugLevel 10' shows the proxy completes sending the echo request, and netstat shows an established TCP connection between the proxy and the backend machine even though the backend CORBA server process has stopped (I admit the backend server shuts down disorderly, or is badly programmed). But how can the proxy avoid being blocked by one failed invocation that neither returns nor throws?
Below are two traces. With the default strategy:
TAO (276|1592) - Invocation_Adapter::invoke_i, making a TAO_CS_REMOTE_STRATEGY invocation
TAO (276|1592) - Transport_Cache_Manager_T::is_entry_available_i[828], true, state is ENTRY_IDLE_AND_PURGABLE
TAO (276|1592) - Cache_IntId_T::recycle_state, ENTRY_IDLE_AND_PURGABLE->ENTRY_BUSY Transport[828] IntId=00A64ABC
TAO (276|1592) - Transport_Cache_Manager_T::find_i, Found available Transport[828] @hash:index {-1062676757:0}
TAO (276|1592) - Transport_Connector::connect, got an existing connected Transport[828] in role TAO_CLIENT_ROLE
TAO (276|1592) - Muxed_TMS[828]::request_id, <4>
TAO (276|1592) - GIOP_Message_Base::dump_msg, send GIOP message v1.2, 60 data bytes, my endian, Type Request[4]
GIOP message - HEXDUMP 72 bytes
47 49 4f 50 01 02 01 00 3c 00 00 00 04 00 00 00 GIOP....<.......
03 00 00 00 00 00 cd cd 1b 00 00 00 14 01 0f 00 ................
52 53 54 00 00 00 6c 00 06 9c b5 00 00 00 00 00 RST...l.........
00 00 01 00 00 00 01 cd 05 00 00 00 65 63 68 6f ............echo
00 cd cd cd 00 00 00 00 ........
TAO (276|1592) - Transport[828]::drain_queue_helper, sending 1 buffers
TAO (276|1592) - Transport[828]::drain_queue_helper, buffer 0/1 has 72 bytes
TAO - Transport[828]::drain_queue_helper (0/72) - HEXDUMP 72 bytes
47 49 4f 50 01 02 01 00 3c 00 00 00 04 00 00 00 GIOP....<.......
03 00 00 00 00 00 cd cd 1b 00 00 00 14 01 0f 00 ................
52 53 54 00 00 00 6c 00 06 9c b5 00 00 00 00 00 RST...l.........
00 00 01 00 00 00 01 cd 05 00 00 00 65 63 68 6f ............echo
00 cd cd cd 00 00 00 00 ........
TAO (276|1592) - Transport[828]::drain_queue_helper, end of data
TAO (276|1592) - Transport[828]::cleanup_queue, byte_count = 72
TAO (276|1592) - Transport[828]::cleanup_queue, after transfer, bc = 0, all_sent = 1, ml = 0
TAO (276|1592) - Transport[828]::drain_queue_helper, byte_count = 72, head_is_empty = 1
TAO (276|1592) - Transport[828]::drain_queue_i, helper retval = 1
TAO (276|1592) - Transport[828]::make_idle
TAO (276|1592) - Cache_IntId_T::recycle_state, ENTRY_BUSY->ENTRY_IDLE_AND_PURGABLE Transport[828] IntId=00A64ABC
TAO (276|1592) - Leader_Follower[828]::wait_for_event, (follower), cond <00B10DD8>
With the static Client_Strategy_Factory "-ORBTransportMuxStrategy EXCLUSIVE":
2014-Sep-03 16:34:26.143024
TAO (6664|5612) - Invocation_Adapter::invoke_i, making a TAO_CS_REMOTE_STRATEGY invocation
TAO (6664|5612) - Transport_Cache_Manager_T::is_entry_available_i[824], true, state is ENTRY_IDLE_AND_PURGABLE
TAO (6664|5612) - Cache_IntId_T::recycle_state, ENTRY_IDLE_AND_PURGABLE->ENTRY_BUSY Transport[824] IntId=00854ABC
TAO (6664|5612) - Transport_Cache_Manager_T::find_i, Found available Transport[824] @hash:index {-1062667171:0}
TAO (6664|5612) - Transport_Connector::connect, got an existing connected Transport[824] in role TAO_CLIENT_ROLE
TAO (6664|5612) - Exclusive_TMS::request_id - <3>
TAO (6664|5612) - GIOP_Message_Base::dump_msg, send GIOP message v1.2, 60 data bytes, my endian, Type Request[3]
GIOP message - HEXDUMP 72 bytes
47 49 4f 50 01 02 01 00 3c 00 00 00 03 00 00 00 GIOP....<.......
03 00 00 00 00 00 cd cd 1b 00 00 00 14 01 0f 00 ................
52 53 54 00 00 00 55 00 0d 7a 85 00 00 00 00 00 RST...U..z......
00 00 01 00 00 00 01 cd 05 00 00 00 65 63 68 6f ............echo
00 cd cd cd 00 00 00 00 ........
TAO (6664|5612) - Transport[824]::drain_queue_helper, sending 1 buffers
TAO (6664|5612) - Transport[824]::drain_queue_helper, buffer 0/1 has 72 bytes
TAO - Transport[824]::drain_queue_helper (0/72) - HEXDUMP 72 bytes
47 49 4f 50 01 02 01 00 3c 00 00 00 03 00 00 00 GIOP....<.......
03 00 00 00 00 00 cd cd 1b 00 00 00 14 01 0f 00 ................
52 53 54 00 00 00 55 00 0d 7a 85 00 00 00 00 00 RST...U..z......
00 00 01 00 00 00 01 cd 05 00 00 00 65 63 68 6f ............echo
00 cd cd cd 00 00 00 00 ........
TAO (6664|5612) - Transport[824]::drain_queue_helper, end of data
TAO (6664|5612) - Transport[824]::cleanup_queue, byte_count = 72
TAO (6664|5612) - Transport[824]::cleanup_queue, after transfer, bc = 0, all_sent = 1, ml = 0
TAO (6664|5612) - Transport[824]::drain_queue_helper, byte_count = 72, head_is_empty = 1
TAO (6664|5612) - Transport[824]::drain_queue_i, helper retval = 1
TAO (6664|5612) - Leader_Follower[824]::wait_for_event, (follower), cond <00900910>
I understand this may be an issue with the threading and ORB model. I have tried some client strategies, for example:
static Client_Strategy_Factory "-ORBTransportMuxStrategy EXCLUSIVE -ORBClientConnectionHandler RW"
This reduces how often the problem occurs, but does not solve it completely.
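For reference, directives like the one above go into the ORB's service configurator file (svc.conf by default, or whatever file is passed via -ORBSvcConf), e.g.:

```
# svc.conf -- client-side strategy directives tried here
static Client_Strategy_Factory "-ORBTransportMuxStrategy EXCLUSIVE -ORBClientConnectionHandler RW"
```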
This resembles an experience I had six years ago. In that case an invocation was sent from one client thread, and before the response was received that thread was reused, because of the Reactor pattern, to send another CORBA request. That case seems different from this one, since here there is only a single CORBA invocation. My impression of the thread stack back then was roughly:
server.anotherInvocation() // the thread is reused for another invocation
...
server::echo() // sends the 1st CORBA invocation
...
orb->run()
With this strategy, will the client forcibly close the connection if the server fails to respond within the given timeout? Should I back up the previous ORB policy and restore it after the invocation completes? Also, is there a difference between setting the policy at the ORB level and at the thread level? – 2014-09-03 23:26:01
The point of backing up and restoring the policy is that a run() invocation may last close to 60 seconds, while an echo() call is expected to complete in 3-4 milliseconds. – 2014-09-03 23:36:41
You can also use _set_policy_overrides to set the policy on an object reference, so that it applies only to the object reference on which you invoke the echo operation. – 2014-09-04 06:41:40