Ah, great! But I'm still curious about the stack trace. Both my and
Ronald Kuczek's logs contain this trace:
[#|2008-05-30T12:04:43.119+0200|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=26;_ThreadName=Timer-1;_RequestID=33af9590-57b2-49cd-b9c5-c50fb2ac9174;|
java.lang.NullPointerException
at com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031)
at com.sun.jbi.management.system.AutoAdminTask.performAutoInstall(AutoAdminTask.java:329)
at com.sun.jbi.management.system.AutoAdminTask.performAutoFunctions(AutoAdminTask.java:288)
at com.sun.jbi.management.system.AdminService.heartBeat(AdminService.java:964)
at com.sun.jbi.management.system.AdminService.handleNotification(AdminService.java:197)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor$ListenerWrapper.handleNotification(DefaultMBeanServerInterceptor.java:1732)
at javax.management.NotificationBroadcasterSupport.handleNotification(NotificationBroadcasterSupport.java:257)
at javax.management.NotificationBroadcasterSupport$SendNotifJob.run(NotificationBroadcasterSupport.java:322)
at javax.management.NotificationBroadcasterSupport$1.execute(NotificationBroadcasterSupport.java:307)
at javax.management.NotificationBroadcasterSupport.sendNotification(NotificationBroadcasterSupport.java:229)
The traces are identical right down to the exact line numbers. How is
it possible that locking up the Grizzly worker threads leads to
NullPointerExceptions from AutoAdminTask? I haven't been able to
reproduce the issue in over a month now, so I can't create a jstack
report yet.
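
When it does happen again, this is roughly what I plan to capture (just a
sketch; <glassfish-pid> stands for whatever PID the domain's JVM has on that
machine, and -l adds the lock/monitor information to the dump):

jstack -l <glassfish-pid> > jstack_report.txt
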
2009/1/26 Ryan de Laplante <ryan_at_ijws.com>:
> It was determined that there is not a problem with GlassFish. Either the
> application code or one of its dependencies (such as a JDBC driver) is
> locking up the Grizzly worker threads. Once all five worker threads are
> blocked, you experience a lockup. When that happens, run the following
> command:
>
> asadmin generate-jvm-report --type=thread > thread_dump.txt
>
> Examine the file for blocked threads and see what they are waiting on. The
> file is basically a stack dump for each thread. That is how I found what
> was locking up my application so I could fix it.
>
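Thanks, will do. When it locks up again I guess the quick way to pick the
stuck threads out of that file is something along the lines of

grep -A 20 "BLOCKED" thread_dump.txt

assuming the report prints the usual java.lang.Thread.State for each thread
the way a jstack dump does; the Grizzly worker threads would be the
interesting entries here.
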
> A second lockup I recently encountered was also my fault. My app was
> executing an SQL query with no WHERE clause on a multi-gigabyte database
> table. It took two months to figure out what was happening: JPA caches
> every entity it loads by default and never expires them. I disabled the
> JPA cache and fixed the query so that it can no longer return gigabytes
> of results.
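
Good to know about the JPA cache. If I understand you right, the fix boils
down to something like the sketch below. This is only a rough illustration:
the persistence unit name "myPU" and the Customer entity are made up, and
toplink.cache.shared.default is my guess at the right switch for the TopLink
Essentials provider bundled with GlassFish v2; other JPA providers use
different property names.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class CacheOffSketch {
    public static void main(String[] args) {
        // Turn off the provider's shared (second-level) cache for this unit;
        // the property name is assumed for TopLink Essentials.
        Map<String, String> props = new HashMap<String, String>();
        props.put("toplink.cache.shared.default", "false");

        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("myPU", props);
        EntityManager em = emf.createEntityManager();

        // Cap how many rows a single query can pull into memory instead of
        // letting it materialize the whole table.
        List<?> rows = em.createQuery("SELECT c FROM Customer c")
                         .setMaxResults(500)
                         .getResultList();
        System.out.println("fetched " + rows.size() + " rows");

        em.close();
        emf.close();
    }
}

Does that roughly match what you did, or did you turn the cache off per
entity instead?
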
>
>
> Ryan
>
>
> glassfish_at_javadesktop.org wrote:
>>
>> Hi,
>>
>> Did this problem ever get resolved? I have run into the exact same problem
>> as Ronald with my GlassFish server.
>> -Dcom.sun.enterprise.server.ss.ASQuickStartup=false is set and ulimit for
>> file descriptors is 1024.
>> peppeme in this thread
>> http://forums.java.net/jive/message.jspa?messageID=286954#286954 suggested
>> raising ulimit to 65536, but I doubt that would help. It should only delay
>> the problem, because it seems that something, either GlassFish or the domain
>> running on it, must be leaking file descriptors.
>> If patches exist, have they been integrated into GlassFish, or do you need a
>> support contract for that?
>> [Message sent by forum member 'bjourne' (bjourne)]
>>
>> http://forums.java.net/jive/thread.jspa?messageID=328160
>>
>
>
--
mvh Björn