users@tyrus.java.net

Re: Grizzly worker thread

From: Jimmy Lin <y2klyf+work_at_gmail.com>
Date: Tue, 26 Aug 2014 10:38:56 -0700

Oh, never mind - we do store the session inside the onOpen callback, so we are
okay on that.
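
For reference, this is roughly what we do (a simplified sketch; the class
and field names are illustrative, not our actual code):

import javax.websocket.ClientEndpoint;
import javax.websocket.OnOpen;
import javax.websocket.Session;

@ClientEndpoint
public class OurClientEndpoint {

    // refreshed on every (re)connect, so it always points at the latest session
    private volatile Session session;

    @OnOpen
    public void onOpen(Session session) {
        this.session = session;
    }
}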


On Tue, Aug 26, 2014 at 10:29 AM, Jimmy Lin <y2klyf+work_at_gmail.com> wrote:

> Hi Pavel,
> thanks for getting back to me so quickly.
>
> I am not sure it is always caused by 1006, but looking at our logs, I did
> see a lot of 1006 onClose calls. Could that be the reason the worker threads
> are not cleaned up?
>
> >>Regarding the second question - once a Session is closed, it will remain
> closed forever; a reconnect will create a new session.
> Oh, this is interesting: we keep a reference to the session created by the
> ClientManager.connectToServer call, but if a reconnect creates a new
> session, our reference is basically useless after a reconnect(?).
> Is there a way to get the latest session object?
>
>
>
>
> On Tue, Aug 26, 2014 at 10:13 AM, Pavel Bucek <pavel.bucek_at_oracle.com>
> wrote:
>
>> Hi Jimmy,
>>
>> "it is related to how the connection was dropped" means that this is only
>> observable when 1006 close code is signaled to the client endpoint?
>>
>> Regarding the second question - once a Session is closed, it will remain
>> closed forever; a reconnect will create a new session.
>>
>> thanks,
>> Pavel
>>
>>
>>
>> On 26/08/14 17:45, Jimmy Lin wrote:
>>
>> Hi Pavel,
>> it is kind of hard to reproduce; it seems related to how the connection
>> was dropped in the first place, but I will try again.
>>
>> I do have a ReconnectHandler (it retries the reconnect up to a certain
>> number of times).
>>
>> So, a somewhat related question: will the Session state ever change from
>> closed back to running, say due to the reconnect handler logic?
>> Or will the session state remain "running" until the reconnect handler is
>> "exhausted" and returns false?
>>
>>
>> thanks
>>
>>
>>
>> On Tue, Aug 26, 2014 at 1:51 AM, Pavel Bucek <pavel.bucek_at_oracle.com>
>> wrote:
>>
>>> Hi Jimmy,
>>>
>>> please see inline.
>>>
>>>
>>> On 26/08/14 10:44, Jimmy Lin wrote:
>>>
>>>> Hi,
>>>> after creating (ClientManager.connectToServer) and
>>>> closing (Session.close) a Tyrus endpoint multiple times, we found that
>>>> there are many Grizzly(1), Grizzly(2), and Selector threads left over in our VM.
>>>>
>>>> Basically, every time our application detects that the session's state is
>>>> closed (abnormally), we first call session.close (in an attempt to clean up
>>>> everything) and then create a new endpoint.
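>>>>
>>>> Roughly (a simplified sketch; "client", "uri" and OurEndpoint are
>>>> placeholders for our real objects, exception handling omitted):
>>>>
>>>> if (!session.isOpen()) {
>>>>     session.close();                           // attempt to clean everything up
>>>>     session = client.connectToServer(new OurEndpoint(), uri);  // new endpoint
>>>> }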
>>>>
>>>> Is that the best practice for cleaning up the Grizzly worker threads in the
>>>> Grizzly container/socket?
>>>>
>>>
>>> once the session is closed, it should not be necessary to invoke the
>>> session.close() method - it won't do much in that case; the underlying
>>> Grizzly transport should already be closed.
>>>
>>>
>>>> (we were basically assuming session.close would do all the magic, but
>>>> apparently the Grizzly threads are somehow still around)
>>>>
>>>>
>>>> thanks
>>>>
>>>
>>> Can you please produce a minimal reproducible test case? You can use the
>>> echo sample as a starting point (it might be easier if you don't have a
>>> Maven project).
>>>
>>> From what I know, this should not happen (if you are not using the shared
>>> container feature [1]) - are you sure you are not "leaking" sessions
>>> somewhere, keeping them open? Also, if you are trying to implement a
>>> persistent client connection, have you considered using the Tyrus
>>> ReconnectHandler [2]?
>>>
>>> Thanks,
>>> Pavel
>>>
>>> [1]
>>> https://tyrus.java.net/apidocs/1.8.2/org/glassfish/tyrus/client/ClientProperties.html#SHARED_CONTAINER
>>> [2]
>>> https://tyrus.java.net/apidocs/1.8.2/org/glassfish/tyrus/client/ClientProperties.html#RECONNECT_HANDLER
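>>>
>>> Registering the ReconnectHandler looks roughly like this (just a sketch,
>>> not tested; the URI, endpoint class and retry limit are placeholders - see
>>> [2] for the actual API; imports and exception handling omitted):
>>>
>>> final ClientManager client = ClientManager.createClient();
>>>
>>> client.getProperties().put(ClientProperties.RECONNECT_HANDLER,
>>>         new ClientManager.ReconnectHandler() {
>>>
>>>             private int attempts = 0;
>>>
>>>             @Override
>>>             public boolean onDisconnect(CloseReason closeReason) {
>>>                 // return true to schedule a reconnect, false to give up
>>>                 return ++attempts <= 5;
>>>             }
>>>
>>>             @Override
>>>             public boolean onConnectFailure(Exception exception) {
>>>                 return ++attempts <= 5;
>>>             }
>>>         });
>>>
>>> client.connectToServer(MyClientEndpoint.class,
>>>         URI.create("ws://localhost:8025/echo"));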
>>>
>>>
>>
>>
>