users@grizzly.java.net

Re: Comet - weird memory behaviour

From: gustav trede <gustav.trede_at_gmail.com>
Date: Wed, 29 Apr 2009 16:14:10 +0200

2009/4/29 FredrikJ <fredrik_at_robotsociety.com>

>
>
> gustav trede-4 wrote:
> >
> > What's of interest is:
> > Is there a problem at all? Is the max number of connections multiplied
> > with the object size a real problem?
> >
>
> Well, I would say that the undesired behavior (from my view) is that the
> memory is never released. The fact that you allocate memory during bursts
> of concurrent requests in order to cater for them, and then cache the
> objects to handle continued load, is OK in my book.
>
> However, since the caches never decrease in size, the current
> implementation makes the assumption that it is OK to greedily hold on to
> memory in case of another burst.
>
> For instance, in my scenarios we can experience short bursts of requests,
> and in the tests we can see that the caches grab 256 MB of RAM (and this
> is still a small-scale test). This memory is now held by the caches until
> a reboot and is unavailable to any other business logic.
>
> Assume that we had other frameworks in the service stack using the same
> strategy; soon enough we would have an old gen space filled up to 1 GB,
> just sitting there.
>
> I have far too little insight into the underpinnings to suggest anything,
> but perhaps some sort of idle-time-based eviction policy could be used
> here.
>
> And once again, this is probably not critical for me, and since I lack
> domain knowledge of the actual implementation in Grizzly I may just be
> plain wrong ;)
>

Well, your observations are correct!
If you had looked 6 months ago you would have noticed a lot more caching of
other object types, which piled up even though the caching itself was a
performance loss.
The problem is a lot more limited now; that's the perspective I see it from.

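Regarding the idle-time-based eviction idea: a rough sketch of such a
policy could look like the code below (hypothetical names, not the actual
Grizzly implementation); a background task periodically drops pooled
objects that have sat unused for longer than some threshold:

import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: an object pool whose entries are evicted after sitting idle.
final class IdleEvictingCache<T> {

    private static final class Entry<T> {
        final T value;
        final long pooledAt = System.currentTimeMillis();
        Entry(T value) { this.value = value; }
    }

    private final ConcurrentLinkedQueue<Entry<T>> pool =
            new ConcurrentLinkedQueue<Entry<T>>();

    IdleEvictingCache(final long maxIdleMillis) {
        // Periodically remove entries that have been idle too long,
        // letting the GC reclaim the memory.
        Executors.newSingleThreadScheduledExecutor().scheduleWithFixedDelay(
            new Runnable() {
                public void run() {
                    final long now = System.currentTimeMillis();
                    for (Iterator<Entry<T>> it = pool.iterator(); it.hasNext();) {
                        if (now - it.next().pooledAt > maxIdleMillis) {
                            it.remove();
                        }
                    }
                }
            }, maxIdleMillis, maxIdleMillis, TimeUnit.MILLISECONDS);
    }

    void offer(T obj) { pool.offer(new Entry<T>(obj)); }

    T poll() {
        final Entry<T> e = pool.poll();
        return e == null ? null : e.value;
    }
}
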
Even when using a soft reference cache, the spec leaves it up to the JVM
implementation to decide when soft references may be reclaimed, as long as
it's done before an OutOfMemoryError is thrown.
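
For illustration, the simplest form of such a soft reference cache looks
roughly like this (a minimal sketch, not Grizzly's actual code):

import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentLinkedQueue;

// Each pooled object is reachable only through a SoftReference, so the
// JVM may reclaim it at any time before an OutOfMemoryError is thrown.
final class SoftCache<T> {

    private final ConcurrentLinkedQueue<SoftReference<T>> pool =
            new ConcurrentLinkedQueue<SoftReference<T>>();

    void offer(T obj) {
        pool.offer(new SoftReference<T>(obj));
    }

    T poll() {
        SoftReference<T> ref;
        while ((ref = pool.poll()) != null) {
            final T obj = ref.get(); // null if the JVM already cleared it
            if (obj != null) {
                return obj;
            }
        }
        return null; // cache miss: the caller allocates a fresh object
    }
}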

Server JVMs tend to allocate more memory if needed, and only clear soft
references if that's the only way to prevent an OutOfMemoryError; or a JVM
can treat them as weak and reclaim the entire cache at the first GC.

It might be the case that soft references are a win for enough usage cases
(depending on JVM implementation and config), and that it's OK that they
are a loss for others; I don't know.

The downloadable GlassFish distros start with the client JVM config by
default, and client mode tends to treat soft references as weak in some JVM
implementations. Real-life cases like that can and will cause unexpected
performance regressions for soft caches.
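
For HotSpot specifically, the clearing policy can be nudged from the
command line, for example (MyServer is just a placeholder, and defaults
vary between JVM versions):

# Server VM; keep softly reachable objects alive roughly 1 second per
# MB of free heap (HotSpot's default policy value):
java -server -XX:SoftRefLRUPolicyMSPerMB=1000 MyServer

# A value of 0 makes soft references behave essentially like weak ones:
# they are cleared at the first GC that sees them.
java -server -XX:SoftRefLRUPolicyMSPerMB=0 MyServer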

I will add a soft reference case to my object cache benchmark using
different concurrent data structures, just to check the overhead and see
how much it raises the limit for which objects are not worth caching.
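
Roughly what I have in mind, as a crude single-threaded sketch (a real
benchmark needs warmup runs and concurrent access from several threads):

import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentLinkedQueue;

// Measures the extra cost of wrapping/unwrapping each cached object in
// a SoftReference, compared to caching the object directly.
public class SoftCacheOverhead {

    private static final int ITERATIONS = 1000000;

    public static void main(String[] args) {
        final ConcurrentLinkedQueue<byte[]> direct =
                new ConcurrentLinkedQueue<byte[]>();
        final ConcurrentLinkedQueue<SoftReference<byte[]>> soft =
                new ConcurrentLinkedQueue<SoftReference<byte[]>>();
        byte[] obj = new byte[1024];

        long t0 = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            direct.offer(obj);
            direct.poll();
        }
        final long directNs = System.nanoTime() - t0;

        t0 = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            soft.offer(new SoftReference<byte[]>(obj)); // extra allocation
            final SoftReference<byte[]> ref = soft.poll();
            if (ref.get() == null) {
                obj = new byte[1024]; // cleared by GC: allocate a new one
            }
        }
        final long softNs = System.nanoTime() - t0;

        System.out.println("direct: " + (directNs / ITERATIONS)
                + " ns/op, soft: " + (softNs / ITERATIONS) + " ns/op");
    }
}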