I'm not that familiar with the inner workings of GlassFish, but I'm still
interested in it.
Which version of GlassFish are you testing? AFAIK there are miles
between v2 and v3.
Also, have you seen this blog post?
http://jfarcand.wordpress.com/2009/11/27/putting-glassfish-v3-in-production-essential-surviving-guide/
On Wed, May 12, 2010 at 1:28 PM, <glassfish_at_javadesktop.org> wrote:
> Hi
>
> We have experienced a number of problems with GlassFish when trying to scale the server with the number of CPUs in our hardware, which is currently an 8-core system.
>
> We have observed a lot of thread contention on various hot mutex locks in the server, the hottest being the lock guarding the HTTP acceptor queue and the one guarding the JDBC connection pool. The observation was made by means of thread dumps and calculations using java.lang.management.ThreadInfo.getBlockedTime() (blocked time exceeds user time many times over).
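>
> Roughly, the blocked-time calculation was along these lines (an illustrative sketch only, not our actual code; it has to run inside the server JVM, e.g. attached via JMX, and blocked time only accumulates after contention monitoring is switched on):
>
>     import java.lang.management.ManagementFactory;
>     import java.lang.management.ThreadInfo;
>     import java.lang.management.ThreadMXBean;
>
>     public class ContentionSnapshot {
>         public static void main(String[] args) throws InterruptedException {
>             ThreadMXBean mx = ManagementFactory.getThreadMXBean();
>             if (mx.isThreadContentionMonitoringSupported()) {
>                 // From this point on, ThreadInfo.getBlockedTime() starts accumulating.
>                 mx.setThreadContentionMonitoringEnabled(true);
>             }
>
>             Thread.sleep(60000); // let the server run under load for a while
>
>             for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
>                 long blockedMs = info.getBlockedTime();               // ms spent blocked on monitors
>                 long cpuNs = mx.getThreadCpuTime(info.getThreadId()); // -1 if unavailable
>                 long cpuMs = cpuNs < 0 ? -1 : cpuNs / 1000000L;
>                 if (blockedMs > 0) {
>                     System.out.printf("%-40s blocked=%dms cpu=%dms%n",
>                             info.getThreadName(), blockedMs, cpuMs);
>                 }
>             }
>         }
>     }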
>
> Quote from "Java Concurrency in Practice" by Brian Goetz:
>
> [i]"Modern JVMs can optimize uncontended lock acquisition and release fairly effectiveley, but if multiple threads request the lock at the same time the JVM enlists the help of the operating system. If it gets to this point, some unfortunate thread will be suspended and have to be resumed later. When that thread is resumed, it may have to wait for other threads to finish ther scheduling quanta before it is actually scheduled. Suspending and resuming a thread has a lot of overhead and generally entails a lengthly interruption. For lock-based classes with fine-grained operations (such as synchronized collection classes, where most methods contain only a few operations), the ratio of scheduling overhead to useful work can be quite high when the lock is frequently contended."[/i]
>
> Indeed, it is no surprise that a single queue accepting all work into the server would naturally become a bottleneck (as opposed to a queue per thread), as would a single connection pool (especially as the number of cores grows).
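>
> By "queue per thread" we mean something like the following toy sketch (not GlassFish code, purely to illustrate the contention argument; the class and parameters are made up):
>
>     import java.util.concurrent.ArrayBlockingQueue;
>     import java.util.concurrent.BlockingQueue;
>     import java.util.concurrent.atomic.AtomicLong;
>
>     // One shared queue serializes all producers on a single lock,
>     // while striping work across per-worker queues spreads that contention out.
>     public class StripedDispatch {
>         private final BlockingQueue<Runnable>[] queues;
>         private final AtomicLong counter = new AtomicLong();
>
>         @SuppressWarnings("unchecked")
>         public StripedDispatch(int workers) {
>             queues = new BlockingQueue[workers];
>             for (int i = 0; i < workers; i++) {
>                 queues[i] = new ArrayBlockingQueue<Runnable>(1024);
>             }
>         }
>
>         // Round-robin submit: each producer mostly contends on a different queue's lock.
>         public void submit(Runnable task) throws InterruptedException {
>             int idx = (int) (counter.getAndIncrement() % queues.length);
>             queues[idx].put(task);
>         }
>
>         // Each worker thread drains only its own queue.
>         public Runnable take(int worker) throws InterruptedException {
>             return queues[worker].take();
>         }
>     }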
>
> The effect is that we cannot load the CPU to more than 65%, which means there is roughly 8 x 35% = 280% of processing power to spare.
>
> We have also run some tests to see how GlassFish scales with the number of cores, trying 2, 4, 6 and 8 cores. The results were:
>
> - Doubling the number of cores from 2 to 4 gives around a 70% gain in throughput.
> - Doubling the number of cores from 4 to 8 gives around a 45% gain in throughput.
>
> So the throughput gain is almost halved going from 4 to 8 cores compared with going from 2 to 4 cores, for both test cases. We see 37,000-42,000 context switches per second.
>
> There is also high oscillation in response-time latency under high load (still 65% CPU load), which probably means we are about to hit the ceiling.
>
> Judging from the quote, the test results and the observations above, probably not many of those 65% of CPU cycles are being spent on useful work.
>
> Is anyone familiar with these kinds of problems? Any advice on how to get around them?
>
> I heard that some people use multiple JVMs per machine; is that usually how GlassFish is deployed?
>
> To reduce the chance of resource exhaustion during our tests, the configuration was tuned to have enough EJB instances and JDBC connections to serve all 200 worker threads in the server, with 1 acceptor thread per core.
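>
> For reference, the tuning was roughly along these lines (asadmin 'set' commands; the pool name is ours, and the exact dotted property names are from memory and differ between v2 and v3, so treat them as approximate):
>
>     # JDBC pool sized to match the 200 worker threads
>     asadmin set resources.jdbc-connection-pool.MyPool.max-pool-size=200
>     asadmin set resources.jdbc-connection-pool.MyPool.steady-pool-size=200
>
>     # HTTP worker threads and acceptor threads (v2-style names shown)
>     asadmin set server.http-service.request-processing.thread-count=200
>     asadmin set server.http-service.http-listener.http-listener-1.acceptor-threads=8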
>
> Cheers,
> -Kristoffer
> [Message sent by forum member 'ekrisjo']
>
> http://forums.java.net/jive/thread.jspa?messageID=469522
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe_at_glassfish.dev.java.net
> For additional commands, e-mail: users-help_at_glassfish.dev.java.net
>
>
--
Dominik Dorn
http://dominikdorn.com
Trade your study materials at http://www.studyguru.eu !