dev@glassfish.java.net

Re: Glassfish Tuning

From: William Fretts-Saxton <William.Fretts.Saxton_at_Sun.COM>
Date: Thu, 31 Jan 2008 15:17:42 -0500

I/O seems to be fine and not bottlenecking. This is running on a local
ZFS pool, striped across two mirrored pairs of drives:

# iostat -xnz 5
.
.

                   extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   19.6  158.8 1079.8 4451.1  0.0  1.4    0.0    8.0   0  23 c0t2d0
   29.0  158.8 1604.4 4451.1  0.0  1.6    0.0    8.6   0  27 c0t3d0
   28.4  143.6 1599.3 4367.0  0.0  0.9    0.0    5.4   0  22 c0t4d0
   27.2  143.6 1489.7 4367.0  0.0  1.1    0.0    6.2   0  26 c0t5d0

RrdDb is the class that opens, modifies, and closes the 80 files per
client connection.
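
For what it's worth, here's roughly what the two access patterns look
like, assuming the rrd4j 2.x API and a hypothetical datasource named
"speed"; if each request really does open and close all 80 files, the
pooled variant should avoid most of that per-request cost:

    import org.rrd4j.core.RrdDb;
    import org.rrd4j.core.RrdDbPool;
    import org.rrd4j.core.Sample;
    import org.rrd4j.core.Util;

    public class RrdUpdate {

        // Current pattern: open, update, close on every request, so
        // each update pays the full file open/sync/close cost.
        static void updateDirect(String path, double value) throws Exception {
            RrdDb db = new RrdDb(path);       // opens and reads the header
            try {
                Sample s = db.createSample();
                s.setTime(Util.getTime());
                s.setValue("speed", value);   // hypothetical datasource name
                s.update();
            } finally {
                db.close();                   // flushes and closes the file
            }
        }

        // Pooled pattern: RrdDbPool keeps recently used files open, so
        // repeated updates to the same 80 files skip the open/close cost.
        static void updatePooled(String path, double value) throws Exception {
            RrdDbPool pool = RrdDbPool.getInstance();
            RrdDb db = pool.requestRrdDb(path);
            try {
                Sample s = db.createSample();
                s.setTime(Util.getTime());
                s.setValue("speed", value);
                s.update();
            } finally {
                pool.release(db);             // hand the open file back
            }
        }
    }

(The pool also keeps a single open handle per path, so concurrent
requests for the same file share it instead of reopening it.)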

I can try writing to /tmp sometime next week, but I'll have to upgrade
the memory on the machine first: /tmp is swap-backed tmpfs on Solaris,
so the files would live in memory, and the machine only has about 8 GB
while the files themselves total about 8 GB.

Thanks.

-Bill


Scott Oaks wrote:
> Hi Bill --
>
> I looked at your jstacks, and the GlassFish threads aren't blocked;
> they're just all doing I/O. So from the appserver perspective, that all
> appears to be working normally.
>
> Given the difference you report below in the number of files used, I'd
> guess that your system is becoming I/O bound. I'm not sure what the
> org.rrd4j.core.RrdDb class does, but I'd guess it does lots of
> synchronous writes to the filesystem, and once you reach a threshold,
> everything falls into increasing contention and response times climb.
> Have you run iostat -xn during the test, and does it show that I/O
> might be a problem?
>
> Where is it writing the files? Can you try writing them to /tmp and
> see if that solves the issue? Then we'd know that the I/O is the
> problem (and to fix it, you could look into faster disks, ZFS, or a
> disk array with a disk cache).
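>
> To make the comparison concrete, a standalone timing loop also works;
> a minimal sketch (file names, write size, and counts are arbitrary),
> run once against the current data directory and once against /tmp:
>
>     import java.io.File;
>     import java.io.FileOutputStream;
>
>     public class WriteTest {
>         // Usage: java WriteTest /target/dir
>         // Appends 1000 synced 8 KB blocks, cycling over 80 files,
>         // and reports the elapsed time.
>         public static void main(String[] args) throws Exception {
>             File dir = new File(args[0]);
>             byte[] block = new byte[8192];
>             long start = System.nanoTime();
>             for (int i = 0; i < 1000; i++) {
>                 File f = new File(dir, "wt" + (i % 80));
>                 FileOutputStream out = new FileOutputStream(f, true);
>                 out.write(block);
>                 out.getFD().sync();   // force a synchronous write
>                 out.close();
>             }
>             long ms = (System.nanoTime() - start) / 1000000;
>             System.out.println("1000 synced 8K appends: " + ms + " ms");
>         }
>     }
>
> A large gap between the two runs would point squarely at the disks.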
>
> -Scott
>
> On Thu, 2008-01-31 at 12:15, William Fretts-Saxton wrote:
>> Hi Scott,
>>
>> I've combined your idea of using jstack with Harsha's earlier
>> recommendation to lower the number of files written to. I did three
>> benchmark runs, writing to 20, 42, and 83 files, and came up with
>> some information.
>>
>> First, I've attached the jstack output for each of these runs
>> (jstack.zip). If you could take a look at them, I'd appreciate it.
>> Files with the .0 suffix are from before the run began, .1 is from
>> while the number of clients climbs to about 100, and .2 and .3 are
>> from while the client load is peaking. Please let me know if you
>> find anything.
>>
>> As far as web service call times go, I found the following:
>>
>> Average call to web service:
>>
>> 20 files: 0.2 seconds
>> 42 files: 3.1 seconds
>> 83 files:  61 seconds
>>
>> As you can see, the increase is astronomical.
>>
>> Please let me know if any more information is helpful.
>>
>> -Bill
>>
>> Scott Oaks wrote:
>>> Assuming that you have tuned the thread pools correctly (and it
>>> sounds like you've looked at that), something is causing a
>>> bottleneck. It sounds almost like there's some sort of deadlock
>>> between your client(s) and server.
>>>
>>> One easy trick to find this is to make sure that you're using JDK 6
>>> (6_04 is best), and then to use the JDK 6 jstack command to get the
>>> stack of the appserver. [You can use various tools, including asadmin,
>>> to get stack dumps, but jstack will be fastest.] If threads are all
>>> blocking on a resource or something, you can usually tell from the
>>> stack dump: all of the GrizzlyWorkerThreads will be doing something
>>> other than calling waitForIoTask (or a similarly named method, depending
>>> on which precise version you're running).
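>>>
>>> If pointing jstack at the process is awkward, the same information
>>> is available in-process through the JDK 6 ThreadMXBean API. A
>>> minimal sketch, assuming it runs inside the appserver JVM (say,
>>> from a debug servlet) and that the worker thread names contain
>>> "Grizzly":
>>>
>>>     import java.lang.management.ManagementFactory;
>>>     import java.lang.management.ThreadInfo;
>>>     import java.lang.management.ThreadMXBean;
>>>
>>>     public class StackSnapshot {
>>>         // Print the state and stack of every Grizzly worker thread
>>>         // in this JVM; call from code running inside the server.
>>>         public static void dumpGrizzlyThreads() {
>>>             ThreadMXBean mx = ManagementFactory.getThreadMXBean();
>>>             // true, true: include locked monitors/synchronizers
>>>             for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
>>>                 if (info.getThreadName().contains("Grizzly")) {
>>>                     System.out.println(info.getThreadName()
>>>                             + " " + info.getThreadState());
>>>                     for (StackTraceElement f : info.getStackTrace()) {
>>>                         System.out.println("    at " + f);
>>>                     }
>>>                 }
>>>             }
>>>         }
>>>     }
>>>
>>> Workers parked in waitForIoTask are idle; if every worker is off
>>> doing something else, those stacks show where the time is going.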
>>>
>>> You can send me a stack dump if you have questions about it.
>>>
>>> -Scott
>>>
>>> On Wed, 2008-01-30 at 18:57, William Saxton wrote:
>>>> Hi all,
>>>>
>>>> Since the EJB issues I was having earlier appear to be due to a known
>>>> bug, I've switched over to using a Web Service to handle my application
>>>> server needs. The usage is the same: about 700 clients each access the
>>>> application server every 5 minutes, so on average 2-3 requests arrive
>>>> every second.
>>>>
>>>> It takes less than 1 second for a single client to send the web
>>>> service the XMLified data, for the application server to save the data
>>>> to about 80 different files, and for it to return an "ok". When I turn
>>>> on all of the clients, though, it takes anywhere from 10 seconds to
>>>> over 5 minutes (at which point I time out) for a request to complete.
>>>>
>>>> I've tried these JVM tweaks:
>>>> http://weblogs.java.net/blog/sdo/archive/2007/12/a_glassfish_tun.html
>>>> as well as setting the HTTP acceptor threads to 16 and
>>>> request-processing threads to 16; neither seems to have made a
>>>> difference. There is very little CPU usage (~90% idle) and little
>>>> local disk I/O (where the data is saved), so I don't know where the
>>>> bottleneck is. I'm wondering whether the application server is simply
>>>> queuing up too many client requests when it could be handling more.
>>>>
>>>> Anyone have any ideas? Since I'm internal to Sun, I can always give
>>>> someone access to the app server for a quick look.
>>>>
>>>> Thanks.
>>>>
>>>> -Bill