Hi,
This has been observed on both Windows XP and Windows Server 2008
machines, using the final release build of glassfish v2.1 b60e. Both are
running under JDK 1.6.0_10 (Windows XP on a 32-bit JVM, Windows Server
2008 on a 64-bit JVM).
The output for one of the destinations using query dst looks like this:
-------------------------
Host Primary Port
-------------------------
localhost 8076
Destination Name                      jms_DEV_ROUTING_C1_AdminQueue
Destination Type                      Queue
Destination State                     RUNNING
Created Administratively              true
Current Number of Messages
    Actual                            32
    Remote                            0
    Held in Transaction               0
Current Message Bytes
    Actual                            87788
    Remote                            0
    Held in Transaction               0
Current Number of Producers           0
Current Number of Active Consumers    1
Current Number of Backup Consumers    0
Max Number of Messages                unlimited (-1)
Max Total Message Bytes               unlimited (-1)
Max Bytes per Message                 unlimited (-1)
Max Number of Producers               100
Max Number of Active Consumers        unlimited (-1)
Max Number of Backup Consumers        0
Limit Behavior                        REJECT_NEWEST
Consumer Flow Limit                   1000
Is Local Destination                  false
Local Delivery is Preferred           false
Use Dead Message Queue                true
XML schema validation enabled         false
XML schema URI List                   -
Reload XML schema on failure          false
Successfully queried the destination.
The imq log.txt has no errors, and only a few warnings throughout the
day that look like this:
[03/Feb/2009:16:53:21 EST] WARNING [B2181]: Removing 1 messages
associated with destination
temporary_destination://queue/192.168.2.57/3860/1 [Queue]
However, these warnings do not appear to correspond to the times when
the queues/MDBs begin to act up; they appear whenever the broker is shut
down (initiated by the glassfish domain stopping).
The JMS Service is configured as LOCAL in the domain, and only some
basic properties have been changed, such as (rough command-line
equivalents are sketched after the two lists below):
- address list iterations to the max of 2147483638 (-1 is supposed to
mean unlimited, but an error is thrown when attempting to set it to that)
- reconnect attempts to -1
- reconnect interval to 60 seconds
- auto-create of queues and topics disabled
Connection factories:
- fail all connections to true
- connection validation required to true
- transaction support to XATransaction
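For reference, the JMS service values can be checked and changed from
the command line along these lines. The dotted names are written from
memory for GlassFish v2 and may need adjusting, and the auto-create
properties shown are the standard OpenMQ broker ones set via imqcmd
rather than anything glassfish-specific:
   asadmin get "server.jms-service.*"
   asadmin set server.jms-service.addresslist-iterations=2147483638
   asadmin set server.jms-service.reconnect-attempts=-1
   asadmin set server.jms-service.reconnect-interval-in-seconds=60
   imqcmd update bkr -o imq.autocreate.queue=false
   imqcmd update bkr -o imq.autocreate.topic=false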
Is it possible this behavior is occurring due to thread thrashing or a
lack of available threads? Besides upgrading to glassfish v2.1, we
recently turned on the icefaces server-push using grizzly comet, and
while trying to diagnose this problem I see that quite a few threads are
taken up by http, https, the quartz scheduler and icefaces render
threads.
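If anyone wants to see the same breakdown, a thread dump shows it; on
Windows with JDK 1.6 something like the following should work (the PID
and output file name are just placeholders, and the second command
roughly counts the dumped threads belonging to the connector thread
pool by matching the "p: thread-pool-1" name seen in the log):
   jstack -l <app-server-pid> > threads.txt
   find /c "p: thread-pool-1" threads.txt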
The current testing has 4 messaging EARs, each with 3 queues, and about
6 web apps deployed into a single glassfish domain, but the default
thread pool size is still at 200.
Performance for everything else is still fine, i.e. http/https requests,
jdbc resources, etc.
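Regarding the default thread pool mentioned above, this is roughly how
its size can be inspected and raised; the dotted name below is from
memory and may not be exact for v2.1, and <new-max> is a placeholder:
   asadmin get "server.thread-pools.thread-pool.thread-pool-1.*"
   asadmin set server.thread-pools.thread-pool.thread-pool-1.max-thread-pool-size=<new-max>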
Linda Schneider wrote:
> Hi,
>
> I was forwarded your mail from the glassfish users alias. Obviously I'm
> going to need to add myself to that alias.
>
> I'll send back a response to the alias after I'm added, but until then
> I was hoping to get a little more information.
>
> * Are you using the final FCS bits for V2.1? (There was a bug on the
> glassfish integration side, fixed very late in the release, which could
> cause a similar issue.)
>
> * Any chance you can get me the output of:
>
> imqcmd query dst -d <queue with an issue> -t q
>
> imqcmd is located in <glassfish>/imq/bin
>
> * Also, any exceptions (grep for WARNING and ERROR in the log file); the
> file is located in:
> <glassfish>/imq/var/instances/<brokername>/log/log.txt
>
> Thanks,
>
> Linda
>
>
>
> Begin forwarded message:
>
>> From: Alex Sherwin <alex.sherwin_at_acadiasoft.com>
>> Date: February 3, 2009 10:19:59 PM CEST
>> To: Glassfish Users <users_at_glassfish.dev.java.net>
>> Subject: Problems with OpenMQ since v2.1/OpenMQ 4.3 upgrade
>> Reply-To: users_at_glassfish.dev.java.net
>>
>> We've been working on an application utilizing Glassfish/OpenMQ and
>> had previously done all development on v2ur2 with OpenMQ 4.2.
>>
>> I had never seen any issues with broker stability until we upgraded
>> to glassfish v2.1 with OpenMQ 4.3. After the application has been
>> running for a period of time, an error like this will eventually show
>> up in the log:
>>
>>
>> [#|2009-02-03T16:08:02.437-0500|WARNING|sun-appserver2.1|javax.enterprise.system.stream.err|
>> _ThreadID=101;_ThreadName=p: thread-pool-1; w: 92;_RequestID=2f637138-b35e-4985-b71e-205e9f119192;|
>> MQRA:OMR:run:onMessage caught Throwable-before/on/afterDelivery:Class=javax.ejb.EJBExceptionMsg=conduit-master_DEV_ROUTING_M1:AdminMessageBean: message-driven bean invocation closed by container|#]
>>
>> [#|2009-02-03T16:08:02.437-0500|WARNING|sun-appserver2.1|javax.enterprise.system.stream.err|
>> _ThreadID=101;_ThreadName=p: thread-pool-1; w: 92;_RequestID=2f637138-b35e-4985-b71e-205e9f119192;|
>> javax.ejb.EJBException: conduit-master_DEV_ROUTING_M1:AdminMessageBean: message-driven bean invocation closed by container
>>     at com.sun.ejb.containers.MessageBeanContainer.beforeMessageDelivery(MessageBeanContainer.java:1001)
>>     at com.sun.ejb.containers.MessageBeanListenerImpl.beforeMessageDelivery(MessageBeanListenerImpl.java:70)
>>     at com.sun.enterprise.connectors.inflow.MessageEndpointInvocationHandler.invoke(MessageEndpointInvocationHandler.java:135)
>>     at $Proxy81.beforeDelivery(Unknown Source)
>>     at com.sun.messaging.jms.ra.OnMessageRunner.run(OnMessageRunner.java:245)
>>     at com.sun.enterprise.connectors.work.OneWork.doWork(OneWork.java:76)
>>     at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.run(ThreadPoolImpl.java:555)
>> |#]
>>
>> And some odd behavior is now observed. If I do a "list dst" on the
>> OpenMQ broker, I will see something like this:
>>
>> -------------------------
>> Host Primary Port
>> -------------------------
>> localhost 8076
>>
>> ------------------------------------------------------------------------------------------------------------
>> Name                        Type   State     Producers        Consumers        Msgs
>>                                             Total  Wildcard  Total  Wildcard  Count  Remote  UnAck  Avg Size
>> ------------------------------------------------------------------------------------------------------------
>> jms_PROPRIETARY_QUEUE_NAME  Queue  RUNNING  0      -         1      -         0      0       0      0.0
>> jms_PROPRIETARY_QUEUE_NAME  Queue  RUNNING  0      -         1      -         0      0       0      0.0
>> jms_PROPRIETARY_QUEUE_NAME  Queue  RUNNING  0      -         1      -         0      0       0      0.0
>> jms_PROPRIETARY_QUEUE_NAME  Queue  RUNNING  0      -         1      -         0      0       0      0.0
>> jms_PROPRIETARY_QUEUE_NAME  Queue  RUNNING  0      -         1      -         0      0       0      0.0
>> jms_PROPRIETARY_QUEUE_NAME  Queue  RUNNING  0      -         1      -         0      0       0      0.0
>> jms_PROPRIETARY_QUEUE_NAME  Queue  RUNNING  0      -         1      -         0      0       0      0.0
>> jms_PROPRIETARY_QUEUE_NAME  Queue  RUNNING  0      -         1      -         0      0       0      0.0
>> jms_PROPRIETARY_QUEUE_NAME  Queue  RUNNING  0      -         1      -         0      0       0      0.0
>> jms_PROPRIETARY_QUEUE_NAME  Queue  RUNNING  0      -         2      -         11     0       11     1717.0
>> jms_PROPRIETARY_QUEUE_NAME  Queue  RUNNING  0      -         1      -         0      0       0      0.0
>> jms_PROPRIETARY_QUEUE_NAME  Queue  RUNNING  0      -         1      -         0      0       0      0.0
>> mq.sys.dmq                  Queue  RUNNING  0      -         0      -         0      0       0      0.0
>>
>> Here the number of consumers has climbed from 1 to 2 for the queue
>> that had the exception, and those messages remain "stuck" and
>> unconsumed, even though the application is still running and, as far
>> as I can tell, that MDB is still consuming new messages.
>>
>> Can anyone explain how this behavior could occur, or what
>> configuration settings I could use to force the connection to be
>> re-established after this kind of error, so the application can
>> self-repair and continue to function correctly?
>>
>> Again, this never happened on glassfish v2ur2.
>>
>>
>>
>
>
>
>