That looks like a memory leak to me (the Old Generation climbs steadily
until it is nearly full), and the real trouble starts around 46,000
seconds in (on the first log, anyway), when a full GC no longer empties
the young generation, which means the old generation is full. It looks
like you've been leaking from the very beginning, though, even if you
don't notice until the old generation fills up and you start running a
GC every few seconds. Since it seems unlikely that the memory leak is
GF's fault, it's almost certainly going to be application related (it's
difficult to configure things to make them leak memory =).
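
(If you want to watch this happen live rather than after the fact,
jstat can poll the occupancy of each generation; the pid below is a
placeholder and 5000 is the sampling interval in milliseconds:

    jstat -gcutil <pid> 5000

The O column is the old generation's occupancy as a percentage and FGC
is the running count of full GCs; O creeping toward 100 while FGC
climbs is the same pattern your log shows.)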
To try to deal with it, I'd suggest taking a jmap dump (as Scott Oaks
suggested above) and looking through that. If you aren't afraid of a bit
of command-line diving, you can forgo the memory analyzers and use jmap
-histo:live [process id], which prints the class name, instance count,
and total bytes of everything live on the heap. You'll want to run it
once towards the beginning and once when you start having trouble, then
compare the two for significant differences. The leaking object type
will likely stick out and give you a clue as to what in your
application isn't going away properly.
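
Concretely, that would look something like this (the pid and file names
are placeholders):

    jmap -histo:live <pid> > histo-early.txt
    ... wait until the full GCs start backing up ...
    jmap -histo:live <pid> > histo-late.txt

The output is sorted by total bytes, so the big movers will be near the
top of each file. And if you'd rather go the heap-dump route after all,
jmap can write a binary dump that jhat or the NetBeans profiler can
open (the path here just reuses your HeapDumpPath; the file name is an
example):

    jmap -dump:live,format=b,file=/mnt/dumps/heap.hprof <pid>

One caveat: the :live option forces a full GC before counting, so
expect an extra collection in gc.log each time you run it.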
glassfish_at_javadesktop.org wrote:
> Hi JF,
>
> I had observed the GC log when the server was behaving appropriately, but had not captured one from when it was being problematic (since gc.log is overwritten on server restart). After copying the GC log during the last two CPU / context-switch spikes, I think the problem is GC related. Attached are two zips, each containing a jstack dump and the gc.log from when the problem was occurring.
>
> I noticed that most of the GC log looks normal (many PSYoungGen collections with intermittent Full GCs), but at the end, when the CPU usage was so high, Full GCs run continually, back to back. Since the jstack dumps showed mostly waiting threads, it seems the GC must be responsible for the increased CPU and context switches, and for the server's eventual unresponsiveness if it isn't restarted. Any idea why this might be? Is it configuration related? Application related? Etc.?
>
> Thanks for all your help so far. I can't believe I didn't notice this sooner!
>
> For reference, here are the jvm-options from domain.xml:
> -XX:+PrintGCDetails
> -Xloggc:${com.sun.aas.instanceRoot}/logs/gc.log
> -XX:HeapDumpPath=/mnt/dumps
> -XX:+HeapDumpOnOutOfMemoryError
> -server
> -Xmx2500m
> -Xms2500m
> -Xmn1000m
> -Xss128k
> -XX:+AggressiveOpts
> -XX:+AggressiveHeap
> -XX:+DisableExplicitGC
> -XX:+UseParallelGC
> -XX:+UseParallelOldGC
> -XX:ParallelGCThreads=8
> -Dcom.sun.enterprise.server.ss.ASQuickStartup=false
> -XX:MaxPermSize=192m
> -Djava.endorsed.dirs=${com.sun.aas.installRoot}/lib/endorsed
> -Djava.security.policy=${com.sun.aas.instanceRoot}/config/server.policy
> -Djava.security.auth.login.config=${com.sun.aas.instanceRoot}/config/login.conf
> -Djavax.net.ssl.keyStore=${com.sun.aas.instanceRoot}/config/keystore.jks
> -Djavax.net.ssl.trustStore=${com.sun.aas.instanceRoot}/config/cacerts.jks
> -Djava.ext.dirs=${com.sun.aas.javaRoot}/lib/ext${path.separator}${com.sun.aas.javaRoot}/jre/lib/ext${path.separator}${com.sun.aas.instanceRoot}/lib/ext${path.separator}${com.sun.aas.derbyRoot}/lib
> -Djdbc.drivers=org.apache.derby.jdbc.ClientDriver
> -Djavax.management.builder.initial=com.sun.enterprise.admin.server.core.jmx.AppServerMBeanServerBuilder
> -Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory
> -Dcom.sun.enterprise.taglibs=appserv-jstl.jar,jsf-impl.jar
> -Dcom.sun.enterprise.taglisteners=jsf-impl.jar
> -XX:NewRatio=2
> [Message sent by forum member 'rwillie6' (rwillie6)]
>
> http://forums.java.net/jive/thread.jspa?messageID=341267
>