users@grizzly.java.net

Re: Memory Usage in Grizzly on Glassfish 3.1

From: Oleksiy Stashok <oleksiy.stashok_at_oracle.com>
Date: Fri, 29 Jun 2012 16:16:36 +0200

When memory consumption stops increasing, do you see anything
suspicious in the GF server.log, any errors?

Thanks.

WBR,
Alexey.

On 06/28/2012 06:39 PM, Eric Luong wrote:
> Yes, all active at the same time.
>
> On Thu, Jun 28, 2012 at 10:17 AM, Oleksiy Stashok
> <oleksiy.stashok_at_oracle.com <mailto:oleksiy.stashok_at_oracle.com>> wrote:
>
>
> On 06/28/2012 06:13 PM, Eric Luong wrote:
>> OK, I will check it out while in the different states. However,
>> I am not sure there is actually a leak. What I'm seeing is that
>> memory usage seems to plateau after a certain number of clients,
>> rather than continuing to increase linearly. I'm trying to
>> understand why this happens.
> Just to make sure, all those clients (2000+) are active (in the
> long-polling state) at the same time, right?
>
> WBR,
> Alexey.
>
>
>>
>> Thanks,
>>
>> Eric
>>
>> On Thu, Jun 28, 2012 at 9:57 AM, Oleksiy Stashok
>> <oleksiy.stashok_at_oracle.com <mailto:oleksiy.stashok_at_oracle.com>>
>> wrote:
>>
>> Hi Eric,
>>
>> I think you're on the right track.
>> See inline...
>>
>>
>> On 06/28/2012 04:40 AM, Eric Luong wrote:
>>> Any suggestions on what to look for with jmap/jhat? I'm
>>> unfamiliar with the tools. When I run jhat on a heap dump,
>>> it comes up with a fair number of warnings -- "Failed to
>>> resolve object id [some id]" and "Failed to resolve object
>>> id [some id] for field value (signature L)". Not sure what
>>> these mean. It might be worth mentioning that I had to use
>>> -F on jmap; without it, I get a "target process is not
>>> responding" error.
>> That's correct.
>>
>>
>>>
>>> Just looking at this snapshot (which is not near the plateau
>>> I described), it seems the class with the largest total size
>>> is "class [C" which is obviously not a useful class name.
>> Those are character arrays.
>> I just took a snapshot w/ jmap and opened it in jhat.
>> In the browser I clicked "Heap histogram"; now I see types
>> sorted by number of instances and memory consumption.
>>
>> When you investigate the problem, pls. take a snapshot when
>> the issue occurs and check the heap histogram. Most probably
>> the problematic object (the one that leaks) will be at the
>> top of the list.
>>
>>
>>
>>> Does that mean it is some anonymous class? Exploring it
>>> further, it only lists Object as its superclass. There are
>>> several other such nameless classes. Other information for
>>> these classes (ClassLoader, Signers, Protection Domain) are
>>> all null. Other classes that seem to consume the majority
>>> of memory are HashMap, HashMapEntry and related classes.
>> Most of the time char[] is at the top of the list, along with
>> collections like HashMap.
>> Pls. take a snapshot when the memory issue occurs and check
>> the heap histogram; maybe you'll find some suspicious object
>> at the top. If not, you can share the snapshot file and I can
>> try to help.
>>
>>
>>>
>>> Can you suggest how to approach this using jmap/jhat, and if
>>> those warnings I received are of any significance?
>> I also see warnings from jhat, but IMO it's still ok.
>>
>> You can also try other profiler tools, like the one Chris
>> suggested (VisualVM).
>>
>> Thanks.
>>
>> WBR,
>> Alexey.
>>
>>
>>> Then I will proceed with checking out different memory state
>>> snapshots.
>>>
>>> Thanks for your help,
>>>
>>> Eric
>>>
>>> On Mon, Jun 25, 2012 at 6:37 AM, Oleksiy Stashok
>>> <oleksiy.stashok_at_oracle.com
>>> <mailto:oleksiy.stashok_at_oracle.com>> wrote:
>>>
>>> Hi Eric,
>>>
>>> it would be great if you could monitor the memory usage
>>> using a profiler tool or jmap/jhat and give us more
>>> details on which objects (object types) consume the memory.
>>> You can take several memory state snapshots, e.g.
>>> initial, 1000 clients connected, 2000 clients connected,
>>> 3000 clients connected, so we can see the dynamics and
>>> figure out the problem (if there is any).
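For reference, that snapshot workflow might look like this from the command line (the pid 12345 and file names are placeholders for your Glassfish process):

```shell
# Find the Glassfish JVM pid first.
jps -l

# Quick per-class histogram (instance counts and bytes), no full dump needed.
jmap -histo:live 12345 > histo-1000-clients.txt

# Full heap dump in binary (hprof) format; add -F if the process hangs.
jmap -dump:format=b,file=heap-1000-clients.hprof 12345

# Browse the dump (including "Show heap histogram") at http://localhost:7000
jhat -J-Xmx2g heap-1000-clients.hprof
```

Repeating the dump at each client count (initial, 1000, 2000, 3000) and diffing the histograms makes growth trends easy to spot.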
>>>
>>> Thanks.
>>>
>>> WBR,
>>> Alexey.
>>>
>>>
>>> On 06/23/2012 04:48 AM, Eric Luong wrote:
>>>
>>> Hello,
>>>
>>> I've been testing Grizzly in Glassfish 3.1 by
>>> simulating a load of thousands of clients using long
>>> polling. I found what appears to be a bottleneck.
>>> As more clients are added, the memory usage of the
>>> server increases. However, after a certain point
>>> (2000 clients in my case, but I expect it would
>>> depend on hardware) the usage seems to taper off and
>>> plateau.
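A load generator like the one described can be approximated by holding many idle connections open at once; a minimal sketch, assuming a reachable host/port and using the hypothetical class name IdleClients:

```java
import java.io.IOException;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: open n sockets and leave them idle, roughly like
// long-polling clients parked while waiting for a server push.
public class IdleClients {
    public static List<Socket> connect(String host, int port, int n) throws IOException {
        List<Socket> clients = new ArrayList<Socket>();
        for (int i = 0; i < n; i++) {
            clients.add(new Socket(host, port)); // stays open until closed
        }
        return clients;
    }

    public static void closeAll(List<Socket> clients) throws IOException {
        for (Socket s : clients) {
            s.close();
        }
    }
}
```

Ramping n up while taking heap snapshots on the server side is exactly the kind of comparison that would show where the per-client memory cost levels off.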
>>>
>>> I have been trying to figure out why the memory
>>> usage plateaus. Can anyone offer some insights, or
>>> suggest a way to investigate further?
>>>
>>> Thank you!
>>>
>>> Eric
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>