dev@glassfish.java.net

Re: Too many threads created in GlassFish DAS

From: Tom Mueller <tom.mueller_at_oracle.com>
Date: Wed, 09 Mar 2011 09:16:46 -0600

Sorry, I didn't realize you were using 2.1.1 in your initial message.
Is it possible for you to use 3.1 where this is fixed? The clustering
infrastructure has been completely reimplemented for 3.1, and it avoids
the use of these extra threads when instances are created.

If you need this resolved for 2.1.1, I would recommend contacting Oracle
support to see what can be done. With the clustering architecture in
2.1.1, this use of threads may be unavoidable.

Tom


On 3/9/2011 6:37 AM, glassfish.lu wrote:
> Hi Tom:
> Sorry for the confusion; I should have given more background information.
> I am actually using GlassFish v2.1.1 for this deployment.
> I created a cluster with several instances.
> No applications are deployed and monitoring is disabled.
> If you connect to the DAS with JConsole, you can see many JMX- and
> RMI-related threads, especially "JMX server connection timeout" threads.
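> For reference, this is roughly how the threads can be tallied outside of
> JConsole. It is only a rough, untested sketch: the service URL
> (localhost:8686) is a placeholder for the DAS JMX connector in my setup,
> and the thread-name prefixes are the families from my original mail
> (quoted below).
>
>     import java.lang.management.ManagementFactory;
>     import java.lang.management.ThreadInfo;
>     import java.lang.management.ThreadMXBean;
>     import javax.management.MBeanServerConnection;
>     import javax.management.remote.JMXConnector;
>     import javax.management.remote.JMXConnectorFactory;
>     import javax.management.remote.JMXServiceURL;
>
>     public class CountDasThreads {
>         public static void main(String[] args) throws Exception {
>             // Placeholder URL for the DAS JMX connector.
>             JMXServiceURL url = new JMXServiceURL(
>                 "service:jmx:rmi:///jndi/rmi://localhost:8686/jmxrmi");
>             JMXConnector jmxc = JMXConnectorFactory.connect(url);
>             try {
>                 MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
>                 ThreadMXBean tmx = ManagementFactory.newPlatformMXBeanProxy(
>                     mbsc, ManagementFactory.THREAD_MXBEAN_NAME,
>                     ThreadMXBean.class);
>                 String[] families = { "ClientNotifForwarder",
>                     "JMX server connection timeout",
>                     "RMI RenewClean", "RMI ConnectionExpiration" };
>                 int[] counts = new int[families.length];
>                 for (ThreadInfo ti : tmx.getThreadInfo(tmx.getAllThreadIds())) {
>                     if (ti == null) continue; // thread exited in the meantime
>                     for (int i = 0; i < families.length; i++) {
>                         if (ti.getThreadName().startsWith(families[i])) {
>                             counts[i]++;
>                         }
>                     }
>                 }
>                 for (int i = 0; i < families.length; i++) {
>                     System.out.println(counts[i] + "\t" + families[i]);
>                 }
>             } finally {
>                 jmxc.close();
>             }
>         }
>     }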
> Thanks for your feedback.
> 2011-03-09
> ------------------------------------------------------------------------
> glassfish.lu
> ------------------------------------------------------------------------
> *From:* Tom Mueller
> *Sent:* 2011-03-08 23:13:26
> *To:* dev
> *Cc:*
> *Subject:* Re: Too many threads created in GlassFish DAS
> Can you please provide more information about how you are producing this
> behavior? What specific commands are you executing, starting from a
> fresh install? Are you enabling monitoring? Are the instances in a
> cluster? Have applications been deployed? Are the instances on
> different servers?
>
> I started with a fresh install. The DAS had 50 threads (based on
> jstack output). After creating the first instance, the number of
> threads went up to 65 (which is expected as the first asadmin command
> that is executed will do this). Then I created 19 more instances using
> create-local-instance and the number of threads in the DAS was 67.
>
> Then I started 10 of the instances and the number of DAS threads was
> down to 56. After enabling monitoring on all of the instances, the
> number of threads went up to 72 but later went back down to 67.
> Note: these were all stand-alone instances, not clustered instances
> and everything was on one system.
>
> If 4 threads were created for every instance, I would expect to see
> either 80 (based on the number created) or 40 (based on the number
> started) additional threads, but I'm not seeing that.
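> (For comparison, a rough, untested sketch of breaking a jstack dump down
> by thread-name family, e.g. "jstack <DAS pid> | java JstackByName"; the
> class name and the suffix-stripping regex are only illustrative:)
>
>     import java.io.BufferedReader;
>     import java.io.InputStreamReader;
>     import java.util.Map;
>     import java.util.TreeMap;
>
>     public class JstackByName {
>         public static void main(String[] args) throws Exception {
>             Map<String, Integer> counts = new TreeMap<String, Integer>();
>             BufferedReader in =
>                 new BufferedReader(new InputStreamReader(System.in));
>             String line;
>             while ((line = in.readLine()) != null) {
>                 if (!line.startsWith("\"")) continue; // thread headers only
>                 int end = line.indexOf('"', 1);
>                 if (end < 0) continue;
>                 String name = line.substring(1, end);
>                 // Strip numeric/address suffixes so instances group together.
>                 String family = name.replaceAll("[-\\s]*[\\[\\d].*$", "");
>                 Integer c = counts.get(family);
>                 counts.put(family, c == null ? 1 : c + 1);
>             }
>             for (Map.Entry<String, Integer> e : counts.entrySet()) {
>                 System.out.println(e.getValue() + "\t" + e.getKey());
>             }
>         }
>     }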
>
> Tom
>
>
>
>
> On 3/8/2011 6:14 AM, glassfish.lu wrote:
>> Hi, All:
>> I find that the number of active threads in the GlassFish DAS
>> increases greatly as new server instances are created.
>> For example:
>> *For each server instance, a "ClientNotifForwarder" thread will be
>> created:*
>>
>> "ClientNotifForwarder-20" daemon prio=1
>> tid=0x0000000050b108e0 nid=0x6f27 runnable
>> [0x00000000560af000..0x00000000560afb10]
>> at java.net.SocketInputStream.socketRead0(Native Method)
>> at java.net.SocketInputStream.read(SocketInputStream.java:129)
>> at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>> at java.io.BufferedInputStream.read(BufferedInputStream.java:235)
>> - locked <0x00002ab056932048> (a java.io.BufferedInputStream)
>> at java.io.DataInputStream.readByte(DataInputStream.java:241)
>> at
>> sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:189)
>> at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:126)
>> at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
>> at
>> javax.management.remote.rmi.RMIConnectionImpl_Stub.fetchNotifications(Unknown
>> Source)
>> at
>> javax.management.remote.rmi.RMIConnector$RMINotifClient.fetchNotifs(RMIConnector.java:1285)
>> at
>> com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.fetchNotifs(ClientNotifForwarder.java:559)
>> at
>> com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:441)
>> at
>> com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:422)
>> at
>> com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:88)
>>
>> *For each server instance, a "JMX server connection timeout" thread
>> will be created:*
>>
>> "JMX server connection timeout 463" daemon prio=1
>> tid=0x0000000050963310 nid=0x6f1e in Object.wait()
>> [0x0000000055baa000..0x0000000055baac90]
>> at java.lang.Object.wait(Native Method)
>> - waiting on <0x00002ab056201dc0> (a [I)
>> at
>> com.sun.jmx.remote.internal.ServerCommunicatorAdmin$Timeout.run(ServerCommunicatorAdmin.java:150)
>> - locked <0x00002ab056201dc0> (a [I)
>> at java.lang.Thread.run(Thread.java:595)
>>
>> *For each server instance, an "RMI RenewClean" thread will be
>> created:*
>>
>> "RMI RenewClean-[192.168.2.151:33057]" daemon prio=1
>> tid=0x0000000051370380 nid=0x6e94 in Object.wait()
>> [0x0000000054796000..0x0000000054796c90]
>> at java.lang.Object.wait(Native Method)
>> - waiting on <0x00002ab03c1d8730> (a
>> java.lang.ref.ReferenceQueue$Lock)
>> at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:120)
>> - locked <0x00002ab03c1d8730> (a
>> java.lang.ref.ReferenceQueue$Lock)
>> at
>> sun.rmi.transport.DGCClient$EndpointEntry$RenewCleanThread.run(DGCClient.java:501)
>> at java.lang.Thread.run(Thread.java:595)
>>
>> *And some "RMI ConnectionExpiration" threads will also be created:*
>>
>> "RMI ConnectionExpiration-[192.168.2.151:41022]" daemon
>> prio=1 tid=0x00000000514ecbb0 nid=0x73b6 waiting on condition
>> [0x000000004d5cd000..0x000000004d5cdb10]
>> at java.lang.Thread.sleep(Native Method)
>> at
>> sun.rmi.transport.tcp.TCPChannel$Reaper.run(TCPChannel.java:446)
>> at java.lang.Thread.run(Thread.java:595)
>>
>> As the number of instances managed by the DAS grows, the number of
>> threads created in the DAS JVM grows accordingly.
>> Sometimes that number exceeds the JVM's thread limit.
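>> As far as I understand the JDK's RMI connector, these are the daemon
>> threads that come with every open JMX connection, so one connection per
>> instance held open by the DAS would explain the growth. Below is a
>> rough, untested sketch of the pattern only; the host, ports, instance
>> count, and class name are placeholders, not the actual DAS code:
>>
>>     import javax.management.MBeanServerConnection;
>>     import javax.management.Notification;
>>     import javax.management.NotificationListener;
>>     import javax.management.ObjectName;
>>     import javax.management.remote.JMXConnector;
>>     import javax.management.remote.JMXConnectorFactory;
>>     import javax.management.remote.JMXServiceURL;
>>
>>     public class PerInstanceJmxThreads {
>>         public static void main(String[] args) throws Exception {
>>             NotificationListener noop = new NotificationListener() {
>>                 public void handleNotification(Notification n, Object hb) { }
>>             };
>>             JMXConnector[] open = new JMXConnector[20];
>>             for (int i = 0; i < open.length; i++) {
>>                 // Placeholder host/ports: one connector per instance.
>>                 JMXServiceURL url = new JMXServiceURL(
>>                     "service:jmx:rmi:///jndi/rmi://192.168.2.151:"
>>                     + (8687 + i) + "/jmxrmi");
>>                 open[i] = JMXConnectorFactory.connect(url);
>>                 MBeanServerConnection mbsc =
>>                     open[i].getMBeanServerConnection();
>>                 // A remote listener starts the ClientNotifForwarder fetcher
>>                 // for this connection; the connection itself keeps RMI
>>                 // RenewClean / RMI ConnectionExpiration threads alive on
>>                 // the client side (and a "JMX server connection timeout"
>>                 // thread on the side that accepted it) until it is closed.
>>                 mbsc.addNotificationListener(
>>                     new ObjectName("JMImplementation:type=MBeanServerDelegate"),
>>                     noop, null, null);
>>             }
>>             Thread.sleep(60000L); // jstack this JVM now to see the threads
>>             for (int i = 0; i < open.length; i++) {
>>                 open[i].close(); // releases the per-connection threads
>>             }
>>         }
>>     }
>>
>> If something like this is what the DAS does for each instance, the
>> extra threads would only go away when those connectors are closed.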
>> So my question is: how can this situation be avoided, and how can
>> the number of these threads be reduced?
>> Is there any system property or domain.xml configuration to work
>> around this?
>> Thanks guys in advance.
>>
>> Best Regards.
>>