users@glassfish.java.net

Problems with OpenMQ since v2.1/OpenMQ 4.3 upgrade

From: Alex Sherwin <alex.sherwin_at_acadiasoft.com>
Date: Tue, 03 Feb 2009 16:19:59 -0500

We've been working on an application that uses GlassFish/OpenMQ; all of our
previous development was done on v2ur2 with OpenMQ 4.2.

I never saw any broker stability issues until we upgraded to GlassFish v2.1
with OpenMQ 4.3. Now, after the application has been running for a while, an
error like this eventually shows up in the log:


[#|2009-02-03T16:08:02.437-0500|WARNING|sun-appserver2.1|javax.enterprise.system.stream.err|_ThreadID=101;_ThreadName=p: thread-pool-1; w: 92;_RequestID=2f637138-b35e-4985-b71e-205e9f119192;|
MQRA:OMR:run:onMessage caught Throwable-before/on/afterDelivery:Class=javax.ejb.EJBExceptionMsg=conduit-master_DEV_ROUTING_M1:AdminMessageBean: message-driven bean invocation closed by container|#]

[#|2009-02-03T16:08:02.437-0500|WARNING|sun-appserver2.1|javax.enterprise.system.stream.err|_ThreadID=101;_ThreadName=p: thread-pool-1; w: 92;_RequestID=2f637138-b35e-4985-b71e-205e9f119192;|
javax.ejb.EJBException: conduit-master_DEV_ROUTING_M1:AdminMessageBean: message-driven bean invocation closed by container
        at com.sun.ejb.containers.MessageBeanContainer.beforeMessageDelivery(MessageBeanContainer.java:1001)
        at com.sun.ejb.containers.MessageBeanListenerImpl.beforeMessageDelivery(MessageBeanListenerImpl.java:70)
        at com.sun.enterprise.connectors.inflow.MessageEndpointInvocationHandler.invoke(MessageEndpointInvocationHandler.java:135)
        at $Proxy81.beforeDelivery(Unknown Source)
        at com.sun.messaging.jms.ra.OnMessageRunner.run(OnMessageRunner.java:245)
        at com.sun.enterprise.connectors.work.OneWork.doWork(OneWork.java:76)
        at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.run(ThreadPoolImpl.java:555)
|#]

Some odd behavior is now observable as well. If I do a "list dst" on the
OpenMQ broker, I see something like this:

-------------------------
Host         Primary Port
-------------------------
localhost    8076

------------------------------------------------------------------------------------------------------------------
                      Name   Type   State     Producers        Consumers        Msgs
                                              Total  Wildcard  Total  Wildcard  Count  Remote  UnAck  Avg Size
------------------------------------------------------------------------------------------------------------------
jms_PROPRIETARY_QUEUE_NAME   Queue  RUNNING   0      -         1      -         0      0       0      0.0
jms_PROPRIETARY_QUEUE_NAME   Queue  RUNNING   0      -         1      -         0      0       0      0.0
jms_PROPRIETARY_QUEUE_NAME   Queue  RUNNING   0      -         1      -         0      0       0      0.0
jms_PROPRIETARY_QUEUE_NAME   Queue  RUNNING   0      -         1      -         0      0       0      0.0
jms_PROPRIETARY_QUEUE_NAME   Queue  RUNNING   0      -         1      -         0      0       0      0.0
jms_PROPRIETARY_QUEUE_NAME   Queue  RUNNING   0      -         1      -         0      0       0      0.0
jms_PROPRIETARY_QUEUE_NAME   Queue  RUNNING   0      -         1      -         0      0       0      0.0
jms_PROPRIETARY_QUEUE_NAME   Queue  RUNNING   0      -         1      -         0      0       0      0.0
jms_PROPRIETARY_QUEUE_NAME   Queue  RUNNING   0      -         1      -         0      0       0      0.0
jms_PROPRIETARY_QUEUE_NAME   Queue  RUNNING   0      -         2      -         11     0       11     1717.0
jms_PROPRIETARY_QUEUE_NAME   Queue  RUNNING   0      -         1      -         0      0       0      0.0
jms_PROPRIETARY_QUEUE_NAME   Queue  RUNNING   0      -         1      -         0      0       0      0.0
mq.sys.dmq                   Queue  RUNNING   0      -         0      -         0      0       0      0.0

Here the number of consumers has climbed from 1 to 2 for the queue that had
the exception, and its messages remain "stuck" and unconsumed, even though
the application is still running and, as far as I can tell, the MDB is still
consuming new messages.
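
For context, the MDB in question is an ordinary JMS queue listener. The real
bean is proprietary, but its shape is essentially the sketch below (the queue
name is the same placeholder used above, and the body is illustrative only):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Illustrative sketch only -- the real bean is proprietary. The container
// (through the MQ JMS RA) owns the connection and the consumer that shows
// up in the "list dst" output above; onMessage never touches them directly.
@MessageDriven(mappedName = "jms_PROPRIETARY_QUEUE_NAME", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class AdminMessageBean implements MessageListener {

    public void onMessage(Message message) {
        // Normal business processing of the incoming message goes here.
    }
}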

Can anyone explain how this behavior could occur, or what configuration
settings I could use to force the connection to be re-established after this
kind of error, so the application can self-repair and continue to function
correctly?
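
For what it's worth, the only reconnect-related settings I'm aware of are the
MQ client ConnectionFactory attributes (imqReconnectEnabled,
imqReconnectAttempts, imqReconnectInterval). A sketch of setting them on a
standalone client follows; the address is just the localhost/8076 broker from
the "list dst" output, and whether the equivalent can be applied to the
RA-managed connection that the MDB container uses is exactly what I don't
know:

import javax.jms.Connection;
import javax.jms.JMSException;

import com.sun.messaging.ConnectionConfiguration;
import com.sun.messaging.ConnectionFactory;

// Sketch only: this is the standalone Open MQ client API, not the
// GlassFish JMS RA / MDB activation configuration.
public class ReconnectingFactorySketch {

    public static Connection createConnection() throws JMSException {
        ConnectionFactory cf = new ConnectionFactory();
        cf.setProperty(ConnectionConfiguration.imqAddressList, "mq://localhost:8076");
        cf.setProperty(ConnectionConfiguration.imqReconnectEnabled, "true");
        cf.setProperty(ConnectionConfiguration.imqReconnectAttempts, "10");
        cf.setProperty(ConnectionConfiguration.imqReconnectInterval, "5000"); // milliseconds
        return cf.createConnection();
    }
}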

Again, this never happened on GlassFish v2ur2.