You're absolutely right, I wasn't aware that iBATIS imposed that
limitation in its default configuration (the default is 32 simultaneous
transactions).
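
For the archives: assuming the limit in question is the maxTransactions
setting in sqlMapConfig.xml (iBATIS SqlMaps 2.x), raising it would look
roughly like this -- the value of 100 is only an illustration and should
be sized against the JDBC pool and the HTTP worker thread count:

<sqlMapConfig>
  <!-- maxTransactions caps the number of concurrent transactions
       (the default being the 32 mentioned above); 100 here is only
       an example value -->
  <settings maxTransactions="100"/>
  <!-- transactionManager, dataSource and sqlMap elements omitted -->
</sqlMapConfig>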
Thanks
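
For anyone else who hits the same symptom, here is a rough sketch of the
blocking behaviour Scott describes below. This is not iBATIS's actual
code, just an illustration of why every HTTP worker thread ends up
parked once the transaction limit is reached: begin() has to acquire a
permit, and a permit only comes back when a transaction actually ends,
so a transaction that is never committed or rolled back pins a slot
forever.

import java.util.concurrent.Semaphore;

// Conceptual sketch only -- not the iBATIS implementation.
public class TransactionThrottleSketch {

    private final Semaphore slots;

    public TransactionThrottleSketch(int maxTransactions) {
        slots = new Semaphore(maxTransactions);
    }

    public void begin() throws InterruptedException {
        slots.acquire();  // worker thread blocks here when all slots are taken
    }

    public void end() {
        slots.release();  // if this is never reached, waiting threads never wake up
    }
}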
Scott Oaks wrote:
> It's not a grizzly configuration issue -- all of the grizzly threads
> are waiting for some resource in the ibatis transaction manager to
> become available. You can see in the thread dump that they are all
> waiting on com.ibatis.common.util.Throttle.increment, which is called
> from com.ibatis.sqlmap.engine.transaction.TransactionManager.begin.
> Not that I know anything about ibatis, but apparently they are all
> trying to start a transaction, and ibatis throttles the maximum number
> of active transactions, and so they all have to wait until a slot
> becomes free.
>
> You need to look into why ibatis isn't giving that resource to the
> waiting threads.
>
> -Scott
>
> On 05/28/09 10:48, Alex Sherwin wrote:
>> I'm attaching a thread dump...
>>
>> All of the "httpSSLWorkerThread-80-XXX" threads are in "RUNNABLE" and
>> "WAITING" states. Note that before the whole thing seems to
>> deadlock, the thread pool has only instantiated about 40 threads. Right
>> at the time this occurs, the thread pool instantiates the maximum
>> number of threads allowed (100; you will see 100
>> httpSSLWorkerThread-80-XXX threads in this dump). None of the 100
>> threads are doing any work at this point, and I have to shut down
>> the app server (although the app server is responsive, aside from the
>> HTTP port 80 listener).
>>
>> The technologies involved are Jersey 1.0.3 (which is using the
>> Apache HTTP client contrib) and iBATIS. I've tried to rule out the
>> DB as the limiting factor by upping the max number of connections in
>> the connection pool and monitoring the pool statistics (both of which
>> seem fine).
>>
>> Any ideas?
>>
>>
>> Alex Sherwin wrote:
>>> I'm testing the throughput/performance of an app that is trying to
>>> take advantage of the highly multiplexed nature of Grizzly. JAX-RS
>>> web services are being served up with Jersey.
>>>
>>> The HTTP service in GlassFish is configured with:
>>>
>>> Request Processing
>>> =============
>>> Thread Count: 100
>>> Initial Thread Count: 20
>>> Thread Increment: 5
>>> Request Timeout: 30
>>> Buffer Length: 8192
>>>
>>> Keep Alive
>>> ========
>>> Thread Count: 1
>>> Max Connections: 250
>>> Time Out: 30
>>>
>>> Connection Pool
>>> ============
>>> Max Pending Count: 4096
>>> Queue Size: 4096
>>> Receive Buffer Size: 4096
>>> Send Buffer Size: 8192
>>>
>>> HTTP Listener (non-SSL)
>>> =================
>>> Acceptor Threads: 10
>>> Blocking: Disabled (unchecked)
>>> Comet is enabled
>>>
>>> The GlassFish domain is running on my dev box:
>>>
>>> GlassFish v2.1 FCS
>>> Win XP SP3, JDK 1.6.0_12 (32-bit)
>>>
>>> Max heap size: 1024 MB
>>> Max perm gen: 256 MB
>>>
>>> Now, when testing, I've got three to four other servers (Solaris and
>>> Linux boxes) running many simultaneous threads that are all making
>>> HTTP requests. With about 40 simultaneous threads going,
>>> everything is fine, but once I go to about 45-50, the HTTP worker
>>> threads seem to freeze up.
>>>
>>> There are no errors in my app or the domain's log whatsoever (no log
>>> entries at all when it freezes up).
>>>
>>> I'm monitoring the domain with the jvisualvm that ships with JDK 1.6.
>>> The heap never exceeds 160MB (which is great), perm gen never exceeds
>>> 80MB, and all the HTTP worker threads (yes, all of the ones that were
>>> active) turn green, which VisualVM labels as "running" (note that
>>> when they are not doing any work, they are yellow, in the "wait" state).
>>>
>>> I'm not a Grizzly expert, so I'm not sure which configuration
>>> parameters should be tweaked to help scale to more concurrent
>>> connections.
>>>
>>> My requests are XML PUT requests, about 27KB each. The
>>> processing time on the server is very short-lived (at most 100ms
>>> per request, and 99% of the time it should be < 20ms).
>>>
>>> Any idea what the bottleneck could be in the Grizzly config?
>>>
>>
>