Hi Jimmy,
thanks for your response and evaluation! If you can, please file a
bug/enhancement request against Tyrus at
https://java.net/jira/browse/TYRUS.
(We will definitely take a look; this does not seem like expected
behavior - we may not be cleaning up the created ScheduledExecutorService
correctly. It is supposed to be shared among sessions/connections when
used on the same client container, but we might not be closing/removing
tasks correctly when a Session is closed, or something like that.)
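
To illustrate the suspicion: any task scheduled on the shared executor
for a session has to be cancelled when that session closes, otherwise
the pool threads stay alive. A sketch of that invariant, with made-up
names (not actual Tyrus internals):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;

    public class SharedSchedulerSketch {
        // shared among all sessions on the same client container
        static final ScheduledExecutorService SHARED =
                Executors.newScheduledThreadPool(10);

        public static void main(String[] args) throws Exception {
            // per-session: keep the handle of every scheduled task
            ScheduledFuture<?> heartbeat = SHARED.scheduleAtFixedRate(
                    () -> System.out.println("pong"), 0, 5, TimeUnit.SECONDS);

            Thread.sleep(12_000);

            // the session close path must cancel its tasks; if this is
            // skipped, the tasks (and pool threads) outlive the session
            heartbeat.cancel(true);
            SHARED.shutdown();
        }
    }
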
Regards,
Pavel
On 09/09/14 10:39, Jimmy Lin wrote:
> hi Pavel,
> well, we sort of found a workaround by enabling the "heartbeat"
> feature in Tyrus.
> Once we turn on the heartbeat pong from the client every few seconds,
> we no longer see the runaway Grizzly threads when a session is "closed".
>
> One slight problem we ran into using the heartbeat is that it
> actually fires up 10 "tyrus" threads right away, and there seems to be
> no way to configure that particular pool size. Also, the tyrus threads
> in the pool won't go away when the session is closed (supposedly when
> the heartbeat is done). We have to explicitly call ClientManager.shutdown
> to clean that up.
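>
> For reference, this is roughly what we ended up with (a sketch; the
> endpoint class and URL are placeholders, and I am assuming the
> TyrusSession.setHeartbeatInterval API):
>
>     import java.net.URI;
>     import javax.websocket.Session;
>     import org.glassfish.tyrus.client.ClientManager;
>     import org.glassfish.tyrus.core.TyrusSession;
>
>     public class HeartbeatClient {
>         public static void main(String[] args) throws Exception {
>             ClientManager client = ClientManager.createClient();
>             Session session = client.connectToServer(
>                     MyEndpoint.class, URI.create("ws://example/ws"));
>
>             // turn on the heartbeat: Tyrus sends a pong every 5 seconds
>             ((TyrusSession) session).setHeartbeatInterval(5000);
>
>             // ... application work ...
>
>             session.close();
>             // closing the session does not release the heartbeat/"tyrus"
>             // threads; an explicit shutdown is needed for that
>             client.shutdown();
>         }
>     }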
>
> Maybe highlighting that in the user guide would help future
> developers do the right thing.
>
> thanks
>
> On Mon, Sep 8, 2014 at 12:34 AM, Pavel Bucek
> <pavel.bucek_at_oracle.com> wrote:
>
> I must have missed this reply, sorry.
>
> Yes, SHARED_CONTAINER is not in any way tied to the URL you are
> connecting to, so you can connect to multiple server endpoints using
> the same pool of threads on the client side.
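>
> In code, something like this (a sketch; the endpoint classes and
> URLs are just placeholders):
>
>     import java.net.URI;
>     import org.glassfish.tyrus.client.ClientManager;
>     import org.glassfish.tyrus.client.ClientProperties;
>
>     public class SharedContainerDemo {
>         public static void main(String[] args) throws Exception {
>             ClientManager client1 = ClientManager.createClient();
>             client1.getProperties().put(ClientProperties.SHARED_CONTAINER, true);
>
>             ClientManager client2 = ClientManager.createClient();
>             client2.getProperties().put(ClientProperties.SHARED_CONTAINER, true);
>
>             // both connections run on the same shared transport and
>             // thread pool, even though the ClientManager instances and
>             // the target URLs differ
>             client1.connectToServer(EndpointA.class, URI.create("ws://host-a/ws"));
>             client2.connectToServer(EndpointB.class, URI.create("ws://host-b/ws"));
>         }
>     }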
>
> Do you have a reproducible testcase I could use to do my evaluation?
>
> Thanks,
> Pavel
>
>
> On 27/08/14 06:21, Jimmy Lin wrote:
>> hi Pavel,
>> still not sure why I have the leaked Grizzly threads, but I
>> am trying to see if I can work around the issue by using
>> SHARED_CONTAINER.
>>
>> Looking at the source code, if I turn on SHARED_CONTAINER,
>> it will use one and only one static
>> GrizzlyClientSocket.transport, and the worker thread pool is
>> shared. So even if I create multiple ClientManagers or call
>> clientManager.connectToServer multiple times with different
>> websocket URLs, no new thread pools will be created.
>>
>>
>> Each time my application needs a web socket connection, it
>> creates an instance of ClientManager and calls
>> clientManager.connectToServer with a different websocket URL.
>> Because there is only one shared static
>> GrizzlyClientSocket.transport to use when SHARED_CONTAINER is
>> turned on, will that work even if I try to connect to
>> different websocket URLs from different ClientManager
>> instances?
>>
>> thanks
>>
>> On Tue, Aug 26, 2014 at 10:38 AM, Jimmy Lin
>> <y2klyf+work_at_gmail.com> wrote:
>>
>> oh, never mind - we do store the session inside onOpen, so we
>> are okay on that.
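>>
>> (roughly like this - a sketch of the idea:)
>>
>>     import java.util.concurrent.atomic.AtomicReference;
>>     import javax.websocket.ClientEndpoint;
>>     import javax.websocket.OnOpen;
>>     import javax.websocket.Session;
>>
>>     @ClientEndpoint
>>     public class MyClientEndpoint {
>>         // always points at the newest session; a reconnect triggers
>>         // onOpen again and replaces the stale reference
>>         private static final AtomicReference<Session> CURRENT =
>>                 new AtomicReference<>();
>>
>>         @OnOpen
>>         public void onOpen(Session session) {
>>             CURRENT.set(session);
>>         }
>>
>>         public static Session current() {
>>             return CURRENT.get();
>>         }
>>     }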
>>
>>
>> On Tue, Aug 26, 2014 at 10:29 AM, Jimmy Lin
>> <y2klyf+work_at_gmail.com> wrote:
>>
>> hi Pavel,
>> thanks for getting back to me so quickly.
>>
>> I am not sure it is always caused by 1006, but looking at
>> our log, I did see a lot of 1006 onClose events. Could that
>> be the reason the worker threads are not cleaned up?
>>
>> >> ad second question - once a Session is closed, it will remain
>> >> closed forever; a reconnect will create a new session.
>> oh, this is interesting. We kept a reference to the
>> session created from the ClientManager.connectToServer call,
>> but if a reconnect creates a new session, our reference
>> is basically useless after a reconnect. (?)
>> Is there a way to get the latest session object?
>>
>> On Tue, Aug 26, 2014 at 10:13 AM, Pavel Bucek
>> <pavel.bucek_at_oracle.com> wrote:
>>
>> Hi Jimmy,
>>
>> "it is related to how the connection was dropped"
>> means that this is only observable when the 1006 close
>> code is signaled to the client endpoint?
>>
>> ad second question - once a Session is closed, it will remain
>> closed forever; a reconnect will create a new session.
>>
>> thanks,
>> Pavel
>>
>>
>>
>> On 26/08/14 17:45, Jimmy Lin wrote:
>>> hi Pavel,
>>> it is kind of hard to reproduce (it seems related to
>>> how the connection was dropped in the first place),
>>> but I will try again.
>>>
>>> I do have a ReconnectHandler (it retries the reconnect
>>> up to a certain number of times)
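>>>
>>> (roughly like this; a sketch, with our retry cap of 5 as an
>>> example:)
>>>
>>>     import javax.websocket.CloseReason;
>>>     import org.glassfish.tyrus.client.ClientManager;
>>>     import org.glassfish.tyrus.client.ClientProperties;
>>>
>>>     public class ReconnectSetup {
>>>         public static ClientManager createClient() {
>>>             ClientManager client = ClientManager.createClient();
>>>             client.getProperties().put(ClientProperties.RECONNECT_HANDLER,
>>>                     new ClientManager.ReconnectHandler() {
>>>                         private int attempts;
>>>
>>>                         @Override
>>>                         public boolean onDisconnect(CloseReason closeReason) {
>>>                             // true = try to reconnect; give up after 5 tries
>>>                             return ++attempts <= 5;
>>>                         }
>>>
>>>                         @Override
>>>                         public boolean onConnectFailure(Exception exception) {
>>>                             return ++attempts <= 5;
>>>                         }
>>>                     });
>>>             return client;
>>>         }
>>>     }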
>>>
>>> so, a somewhat related question: will the Session state
>>> ever change from closed back to running, say due to the
>>> reconnect handler logic?
>>> Or will the session state remain "running" until the
>>> reconnect handler is "exhausted" and returns false?
>>>
>>>
>>> thanks
>>>
>>> On Tue, Aug 26, 2014 at 1:51 AM, Pavel Bucek
>>> <pavel.bucek_at_oracle.com> wrote:
>>>
>>> Hi Jimmy,
>>>
>>> please see inline.
>>>
>>>
>>> On 26/08/14 10:44, Jimmy Lin wrote:
>>>
>>> Hi,
>>> after creating (ClientManager.connectToServer) and
>>> closing (Session.close) a Tyrus endpoint
>>> multiple times, we found out there are many
>>> Grizzly(1), Grizzly(2), and Selector threads left
>>> over in our VM.
>>>
>>> Basically, every time our application detects that
>>> a session's state is closed (abnormally), we first
>>> call session.close (attempting to clean up
>>> everything) and then create a new endpoint.
>>>
>>> Is that the best practice for cleaning up Grizzly
>>> worker threads in the Grizzly container/socket?
>>>
>>>
>>> once the session is closed, it should not be
>>> necessary to invoke the session.close() method; it
>>> won't do much in that case, as the underlying Grizzly
>>> transport should already be closed.
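>>>
>>> if you want to keep the defensive close, a simple guard
>>> is enough (a sketch; session is the javax.websocket.Session
>>> you stored):
>>>
>>>     // close only if the container still considers the session
>>>     // open; after an abnormal close (e.g. 1006) this is false
>>>     if (session.isOpen()) {
>>>         session.close();
>>>     }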
>>>
>>>
>>> (we were basically assuming session.close would do
>>> all the magic, but apparently the Grizzly threads
>>> are somehow still around)
>>>
>>>
>>> thanks
>>>
>>>
>>> Can you please produce a minimal reproducible
>>> testcase? You can use the echo sample as a starting
>>> point (it might be easier if you don't have a
>>> maven project).
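>>>
>>> something along these lines would be ideal (just a
>>> sketch of what I mean; EchoEndpoint and the URL are
>>> placeholders):
>>>
>>>     import java.net.URI;
>>>     import javax.websocket.Session;
>>>     import org.glassfish.tyrus.client.ClientManager;
>>>
>>>     public class LeakRepro {
>>>         public static void main(String[] args) throws Exception {
>>>             for (int i = 0; i < 20; i++) {
>>>                 ClientManager client = ClientManager.createClient();
>>>                 Session session = client.connectToServer(
>>>                         EchoEndpoint.class,
>>>                         URI.create("ws://localhost:8025/ws/echo"));
>>>                 session.close();
>>>             }
>>>             // list leftover threads after all sessions are closed
>>>             for (Thread t : Thread.getAllStackTraces().keySet()) {
>>>                 String name = t.getName();
>>>                 if (name.contains("Grizzly")
>>>                         || name.toLowerCase().contains("selector")) {
>>>                     System.out.println(name);
>>>                 }
>>>             }
>>>         }
>>>     }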
>>>
>>> From what I know, this should not happen (if
>>> you are not using the shared container feature [1]);
>>> are you sure you are not "leaking" sessions
>>> somewhere, keeping them open? Also, if you are
>>> trying to implement a persistent client connection,
>>> have you considered using the Tyrus
>>> ReconnectHandler [2]?
>>>
>>> Thanks,
>>> Pavel
>>>
>>> [1]
>>> https://tyrus.java.net/apidocs/1.8.2/org/glassfish/tyrus/client/ClientProperties.html#SHARED_CONTAINER
>>> [2]
>>> https://tyrus.java.net/apidocs/1.8.2/org/glassfish/tyrus/client/ClientProperties.html#RECONNECT_HANDLER