users@glassfish.java.net

Re: All request processing threads are busy and connections count increasing

From: Jeanfrancois Arcand <Jeanfrancois.Arcand_at_Sun.COM>
Date: Mon, 23 Mar 2009 15:49:42 +0100

Salut,

Jagadish Prasath Ramu wrote:
> http://forums.java.net/jive/servlet/JiveServlet/download/56-59168-338326-8122/glassfish-20032009.tdump
>
> I do not see toplink talking to the database driver in any of the threads.
> It is only at the cache level.

Ok, to summarize (at least for this thread):

(1) Sometimes (as observed in other threads) we lock on:

>
>> "httpSSLWorkerThread-8080-9" daemon prio=10 tid=0x00000000477aec00 nid=0x1870 waiting for monitor entry [0x0000000042327000..0x0000000042329b10]
>> java.lang.Thread.State: BLOCKED (on object monitor)
>> at com.sun.enterprise.resource.AbstractResourcePool.startConnectionLeakTracing(AbstractResourcePool.java:309)
>> - locked <0x00002aaabc809058> (a java.lang.Object)
>> at com.sun.enterprise.resource.AbstractResourcePool.setResourceStateToBusy(AbstractResourcePool.java:302)
>> at com.sun.enterprise.resource.AbstractResourcePool.getUnenlistedResource(AbstractResourcePool.java:688)
>> at com.sun.enterprise.resource.AbstractResourcePool.internalGetResource(AbstractResourcePool.java:606)
>> at com.sun.enterprise.resource.AbstractResourcePool.getResource(AbstractResourcePool.java:455)
>> at com.sun.enterprise.resource.PoolManagerImpl.getResourceFromPool(PoolManagerImpl.java:248)
>> at com.sun.enterprise.resource.PoolManagerImpl.getResource(PoolManagerImpl.java:176)
>> at com.sun.enterprise.connectors.ConnectionManagerImpl.internalGetConnection(ConnectionManagerImpl.java:327)
>> at com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:189)
>> at com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:165)
>> at com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:158)
>> at com.sun.gjc.spi.base.DataSource.getConnection(DataSource.java:108)
>> at oracle.toplink.essentials.jndi.JNDIConnector.connect(JNDIConnector.java:145)

Now, you think this is never an issue, right? At the very least we need to
find out why we observe this so often when it locks up. It might just be the
moment at which we took the thread dump, but I admit I see those threads
really often when "hangs" are observed. In v3 such threads will be killed by
Grizzly if they wait/block for too long, to avoid thread starvation.
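
Just to illustrate the idea (this is not Grizzly's code; the class and method
names below are invented), such a watchdog boils down to remembering when each
worker picked up a request and interrupting the ones stuck past a timeout:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Illustration only -- not Grizzly's actual implementation, all names are
 * invented. A watchdog that interrupts worker threads stuck on a request
 * longer than a timeout, so a blocked resource cannot starve the pool.
 */
public class WorkerWatchdog {

    private final Map<Thread, Long> started = new ConcurrentHashMap<Thread, Long>();
    private final long timeoutMillis;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public WorkerWatchdog(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
        // Periodically scan for workers stuck past the timeout.
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                interruptStuckWorkers();
            }
        }, timeoutMillis, timeoutMillis, TimeUnit.MILLISECONDS);
    }

    /** A worker thread calls this when it picks up a request. */
    public void taskStarted() {
        started.put(Thread.currentThread(), Long.valueOf(System.currentTimeMillis()));
    }

    /** A worker thread calls this when the request is done. */
    public void taskFinished() {
        started.remove(Thread.currentThread());
    }

    private void interruptStuckWorkers() {
        long now = System.currentTimeMillis();
        for (Map.Entry<Thread, Long> e : started.entrySet()) {
            if (now - e.getValue().longValue() > timeoutMillis) {
                // Note: interrupt() only unblocks interruptible waits
                // (sleep/wait/NIO); a thread BLOCKED on a synchronized
                // monitor, as in trace (1), will not be freed this way.
                e.getKey().interrupt();
            }
        }
    }
}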

(2) TopLink deadlock

>>
>>> "httpSSLWorkerThread-8080-18" daemon prio=10 tid=0x00002aab0495ac00 nid=0x4f29 waiting on condition [0x0000000047aeb000..0x0000000047aecc10]
>>> java.lang.Thread.State: TIMED_WAITING (sleeping)
>>> at java.lang.Thread.sleep(Native Method)
>>> at oracle.toplink.essentials.internal.helper.ConcurrencyManager.releaseDeferredLock(ConcurrencyManager.java:429)
>>> at oracle.toplink.essentials.internal.identitymaps.CacheKey.releaseDeferredLock(CacheKey.java:373)
>>> at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.buildObject(ObjectBuilder.java:614)
>>> at

That one I've never seen before. Can someone from the TopLink team take
a look? Thread.sleep is never a good idea IMO...
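
Just to show what I mean (this is not TopLink's code; the names below are
invented): sleep-based polling wakes up on a fixed interval whether or not
the lock state changed, whereas wait/notify parks the thread until another
thread actually signals the release:

/**
 * Illustration only -- not TopLink code, the names are invented. Shows the
 * difference between polling with Thread.sleep and blocking with wait/notify
 * when releasing a deferred lock.
 */
public class DeferredLock {

    private final Object monitor = new Object();
    private int activeReaders = 0;

    // Sleep-based polling, which is what the TIMED_WAITING (sleeping) stack
    // above suggests: the thread re-checks on a fixed interval whether or
    // not anything changed, so it burns wake-ups and adds latency.
    public void releaseByPolling() throws InterruptedException {
        while (true) {
            synchronized (monitor) {
                if (activeReaders == 0) {
                    return;
                }
            }
            Thread.sleep(10);
        }
    }

    // wait/notify alternative: the thread parks until readerFinished()
    // signals that the state really changed.
    public void releaseByWaiting() throws InterruptedException {
        synchronized (monitor) {
            while (activeReaders > 0) {
                monitor.wait();
            }
        }
    }

    public void readerStarted() {
        synchronized (monitor) {
            activeReaders++;
        }
    }

    public void readerFinished() {
        synchronized (monitor) {
            activeReaders--;
            monitor.notifyAll();
        }
    }
}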

Thanks

-- Jeanfrancois