2016-12-15

My ActiveMQ on WildFly had been working fine, but since this week it has started throwing this exception, quite frequently: AMQ214013: Failed to decode packet: java.lang.IllegalArgumentException: AMQ119032: Invalid type: 1

At first I followed the stack trace and grepped through the code that throws this exception, and it looked to me like the problem might be related to message size. I stopped the process that puts messages on the queues and restarted my WildFly. The queues had no data, but I still got the exception.

I deleted the activeMQ data directory and the tx-object-store from the data directory, then started WildFly again, and it had the same problem: I still get this exception. One site (https://www.mail-archive.com/[email protected]/msg36343.html) says this can happen when a port is defined incorrectly. But I have been using this configuration for months, I don't appear to have changed anything, and I have not upgraded WildFly 10 to another version, so the libraries are still the same.

Can anyone tell me why this error occurs when no data is flowing through ActiveMQ? I deleted the data directory, so nothing is loaded from disk to be decoded, and nothing is being received that would need decoding onto the queues.

I raised the Artemis log level, and the exception looks like the trace below. What is it handling in this log? I am not sending it any messages to process, yet it is still handling something.

The problem appears to come from Netty, but I can't pinpoint where my configuration could be wrong, especially since I don't believe it has changed anywhere.

2016-12-15 15:58:24,707 TRACE [org.apache.activemq.artemis.core.client] (Thread-18 (activemq-netty-threads-969652671)) handling packet PACKET(SessionCommitMessage)[type=43, channelID=12, packetObject=SessionCommitMessage] 
2016-12-15 15:58:24,707 DEBUG [org.apache.activemq.artemis.core.client] (Thread-18 (activemq-netty-threads-969652671)) Invocation of interceptor org.apache.activemq.artemis.core.protocol.hornetq.HQPropertiesConversionInterceptor on PACKET(SessionCommitMessage)[type=43, channelID=12, packetObject=SessionCommitMessage] returned true 
2016-12-15 15:58:24,707 TRACE [org.apache.activemq.artemis.core.server] (Thread-18 (activemq-netty-threads-969652671)) ServerSessionPacketHandler::handlePacket,PACKET(SessionCommitMessage)[type=43, channelID=12, packetObject=SessionCommitMessage] 
2016-12-15 15:58:24,707 TRACE [org.apache.activemq.artemis.core.server] (Thread-18 (activemq-netty-threads-969652671)) Calling commit 
2016-12-15 15:58:24,707 TRACE [org.apache.activemq.artemis.core.server] (Thread-18 (activemq-netty-threads-969652671)) TransactionImpl::commit::TransactionImpl [xid=null, id=31125, xid=null, state=ACTIVE, createTime=1481781503706(Thu Dec 15 15:58:23 AEST 2016), timeoutSeconds=300, nr operations = 0]@3cd1f6f3 
2016-12-15 15:58:24,708 TRACE [org.apache.activemq.artemis.core.server] (Thread-18 (activemq-netty-threads-969652671)) ServerSessionPacketHandler::scheduling response::PACKET(NullResponseMessage)[type=21, channelID=0, packetObject=NullResponseMessage] 
2016-12-15 15:58:24,708 DEBUG [org.apache.activemq.artemis.core.client] (Thread-18 (activemq-netty-threads-969652671)) Invocation of interceptor org.apache.activemq.artemis.core.protocol.hornetq.HQPropertiesConversionInterceptor on PACKET(NullResponseMessage)[type=21, channelID=0, packetObject=NullResponseMessage] returned true 
2016-12-15 15:58:24,708 TRACE [org.apache.activemq.artemis.core.client] (Thread-18 (activemq-netty-threads-969652671)) Sending packet nonblocking PACKET(NullResponseMessage)[type=21, channelID=12, packetObject=NullResponseMessage] on channeID=12 
2016-12-15 15:58:24,708 TRACE [org.apache.activemq.artemis.core.client] (Thread-18 (activemq-netty-threads-969652671)) Writing buffer for channelID=12 
2016-12-15 15:58:24,864 TRACE [org.apache.activemq.artemis.core.server] (Thread-72 (activemq-netty-threads-969652671)) Connection created org.apache.[email protected]13a92998[local= /192.168.78.30:7045, remote=/10.10.16.11:47709] 
2016-12-15 15:58:24,865 ERROR [org.apache.activemq.artemis.core.client] (Thread-72 (activemq-netty-threads-969652671)) AMQ214013: Failed to decode packet: java.lang.IllegalArgumentException: AMQ119032: Invalid type: 1 
    at org.apache.activemq.artemis.core.protocol.core.impl.PacketDecoder.decode(PacketDecoder.java:413) 
    at org.apache.activemq.artemis.core.protocol.ClientPacketDecoder.decode(ClientPacketDecoder.java:60) 
    at org.apache.activemq.artemis.core.protocol.ServerPacketDecoder.decode(ServerPacketDecoder.java:202) 
    at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:324) 
    at org.apache.activemq.artemis.core.remoting.server.impl.RemotingServiceImpl$DelegatingBufferHandler.bufferReceived(RemotingServiceImpl.java:605) 
    at org.apache.activemq.artemis.core.remoting.impl.netty.ActiveMQChannelHandler.channelRead(ActiveMQChannelHandler.java:68) 
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) 
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) 
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) 
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) 
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) 
    at io.netty.handler.codec.ByteToMessageDecoder.handlerRemoved(ByteToMessageDecoder.java:216) 
    at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:527) 
    at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved(DefaultChannelPipeline.java:521) 
    at io.netty.channel.DefaultChannelPipeline.remove0(DefaultChannelPipeline.java:351) 
    at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:322) 
    at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:299) 
    at org.apache.activemq.artemis.core.protocol.ProtocolHandler$ProtocolDecoder.decode(ProtocolHandler.java:174) 
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:349) 
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244) 
    at org.apache.activemq.artemis.core.protocol.ProtocolHandler$ProtocolDecoder.channelRead(ProtocolHandler.java:117) 
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) 
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) 
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) 
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) 
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) 
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) 
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) 
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) 
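For context on what "Invalid type" means here: the Artemis core protocol frames each packet with a single type byte, and the decoder dispatches on that byte. Below is a minimal, simplified sketch of that dispatch (only the two type values 21 and 43 are taken from the TRACE lines above; the class and method names are hypothetical, and the real PacketDecoder handles many more types). The point it illustrates: a frame written by a peer speaking a different protocol can carry a byte the decoder does not recognize, which surfaces exactly as "AMQ119032: Invalid type: 1".

```java
// Hypothetical sketch of core-protocol packet-type dispatch (not the real
// Artemis PacketDecoder). Types 21 and 43 come from the TRACE log above.
public class PacketTypeDemo {
    static String decodeType(byte type) {
        switch (type) {
            case 21: return "NullResponseMessage";   // NULL_RESPONSE in the log
            case 43: return "SessionCommitMessage";  // SESS_COMMIT in the log
            // ... the real decoder has many more cases ...
            default:
                // A frame from a client speaking another protocol lands here.
                throw new IllegalArgumentException("AMQ119032: Invalid type: " + type);
        }
    }

    public static void main(String[] args) {
        System.out.println(decodeType((byte) 43)); // SessionCommitMessage
        try {
            decodeType((byte) 1);                  // an unrecognized first byte
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());    // AMQ119032: Invalid type: 1
        }
    }
}
```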

My standalone.xml:

<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0"> 
      <server name="default"> 
       <security enabled="false"/> 
       <management jmx-enabled="true"/> 
       <journal file-size="1024000"/> 
       <statistics enabled="true"/> 
       <shared-store-master/> 
       <security-setting name="#"> 
        <role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/> 
       </security-setting> 
       <address-setting name="#" message-counter-history-day-limit="10" page-size-bytes="4194304" max-size-bytes="20971520" expiry-address="jms.queue.ExpiryQueue" dead-letter-address="jms.queue.DLQ"/> 
       <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/> 
       <http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http"> 
        <param name="batch-delay" value="50"/> 
       </http-connector> 
       <remote-connector name="netty" socket-binding="messaging"/> 
       <remote-connector name="netty-throughput" socket-binding="messaging-throughput"> 
        <param name="batch-delay" value="50"/> 
       </remote-connector> 
       <in-vm-connector name="in-vm" server-id="0"/> 
       <http-acceptor name="http-acceptor" http-listener="default"/> 
       <http-acceptor name="http-acceptor-throughput" http-listener="default"> 
        <param name="batch-delay" value="50"/> 
        <param name="direct-deliver" value="false"/> 
       </http-acceptor> 
       <remote-acceptor name="netty" socket-binding="messaging"/> 
       <remote-acceptor name="netty-throughput" socket-binding="messaging-throughput"> 
        <param name="batch-delay" value="50"/> 
        <param name="direct-deliver" value="false"/> 
       </remote-acceptor> 
       <in-vm-acceptor name="in-vm" server-id="0"/> 
       <divert name="divert-to-multidm" forwarding-address="jms.queue.dbservice.multi_dm" address="jms.queue.dbservice.cap_tp_2"/> 
       <divert name="divert-to-multifo" forwarding-address="jma.queue.dbservice.multi_fo" address="jms.queue.dbservice.cl1test_odc_capfotp2"/> 
       <!--divert name="divert-to-cl1fo" forwarding-address="jma.queue.dbservice.cl1_fo" address="jms.queue.dbservice.cl1test_odc_capfotp2"/> 
       <divert name="divert-to-tmafocl1devfo" forwarding-address="jma.queue.dbservice.tmafocl1dev_fo" address="jms.queue.dbservice.cl1_test_odc_capfotp"/> 
       <divert name="divert-to-tmafocl1devdm" forwarding-address="jma.queue.dbservice.tmafocl1dev_dm" address="jms.queue.dbservice.cl1_test_odc_cap_tp"/--> 
       <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/> 
       <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/> 
       <jms-queue name="dbservice.cap_tp_2" entries="/queue/cap_tp_2"/> 
       <jms-queue name="dbservice.tp2_dm" entries="/queue/tp2_dm"/> 
       <jms-queue name="dbservice.tp2_fo" entries="/queue/tp2_fo"/> 
       <!--jms-queue name="dbservice.tmafocl1dev_fo" entries="/queue/tmafocl1dev_fo"/> 
       <jms-queue name="dbservice.tmafocl1dev_dm" entries="/queue/tmafocl1dev_dm"/--> 
       <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/> 
       <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" /> 
       <pooled-connection-factory name="hornetq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" min-large-message-size="10240"/> 
      </server> 
     </subsystem> 


    <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}"> 
     <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/> 
     <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/> 
     <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/> 
     <socket-binding name="http" port="${jboss.http.port:8080}"/> 
     <socket-binding name="https" port="${jboss.https.port:9443}"/> 
     <socket-binding name="iiop" interface="unsecure" port="3528"/> 
     <socket-binding name="iiop-ssl" interface="unsecure" port="3529"/> 
     <socket-binding name="messaging" port="5445"/> 
     <socket-binding name="messaging-backup" port="5545"/> 
     <socket-binding name="messaging-throughput" port="5455"/> 
     <socket-binding name="txn-recovery-environment" port="4712"/> 
     <socket-binding name="txn-status-manager" port="4713"/> 
     <socket-binding name="node2-jms-broker" port="${node2.broker.port:5445}"/> 
     <socket-binding name="node2throughput-jms-broker" port="${node2throughput.broker.port:5455}"/> 
     <outbound-socket-binding name="mail-smtp"> 
      <remote-destination host="localhost" port="25"/> 
     </outbound-socket-binding> 
    </socket-binding-group> 

Answer


The problem was that my ActiveMQ queues are write-only: my application only puts data onto them, and we never expected anyone else to put data on them. Our ActiveMQ server is one-directional; it just puts messages onto queues that other applications consume.

What actually happened is that, while experimenting, a developer on the consuming side created an ActiveMQ server in their GlassFish instead of creating a client that reads from the queue, and I believe it was pushing some messages. I should have inspected the messages on the queue, but the problem was fixed before I could. So I had a queue with a server on both ends (somehow), and the ActiveMQ on the other end was sending me messages. We removed the ActiveMQ configuration from GlassFish and everything is fine now.
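For anyone hitting the same thing: the consuming side should only define an outbound connector to the producing broker, never an acceptor/broker of its own on that queue's port. A rough sketch in the same messaging-activemq syntax as the configuration above (our consumer was actually GlassFish, where the details differ; the host, binding, and JNDI names below are placeholders):

```xml
<!-- Consuming side (sketch; names and host are placeholders). It connects
     OUT to the producer's broker and accepts no broker traffic itself. -->
<server name="default">
    <remote-connector name="remote-broker" socket-binding="remote-broker-binding"/>
    <pooled-connection-factory name="remote-jms"
        entries="java:/jms/RemoteJmsXA"
        connectors="remote-broker"/>
    <!-- note: no remote-acceptor for the producer's messaging port here -->
</server>
```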

Apologies for answering so late. I should have posted this earlier for anyone else who hits this: if a server is set up on the consuming side and tries to put messages onto a queue it should only be consuming from, the producing side will throw this error.