Hi,
>>>> I think it could be a good idea from the perf. side, but only if it
>>>> is used the right way.
>>>>
>>>> We have to be very careful with that change! Because if someone
>>>> has complex logic in CallbackHandler.onConnect, it will lead to
>>>> big perf. problems.
>>>> For example, SSLConnectorHandler usage. Currently, in our samples
>>>> we sometimes execute the *blocking*
>>>> sslConnectorHandler.handshake directly from the onConnect method,
>>>> which definitely should not happen anymore!
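
(To make that concrete: the pattern would be to push the blocking part onto
a worker thread instead of running it on the selector thread. A minimal
sketch only; the class name, the pool and doBlockingHandshake() are
placeholders, not real Grizzly APIs or our actual sample code:)

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only: onConnect must return quickly, so any blocking work
// (e.g. an SSL handshake) is handed off to a worker pool instead of
// being executed on the selector thread.
class NonBlockingConnectCallback {
    private final ExecutorService workers = Executors.newCachedThreadPool();

    // Invoked when OP_CONNECT is ready; runs on the selector thread.
    void onConnect() {
        workers.execute(new Runnable() {
            public void run() {
                doBlockingHandshake(); // e.g. sslConnectorHandler.handshake(...)
            }
        });
    }

    void doBlockingHandshake() {
        // the blocking SSL handshake would live here
    }
}
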
>>>
>>> Agree, this is not good for TLS, as this operation is quite slow. I
>>> would think we should make it configurable then, and let the current
>>> default spawn a thread for TLS but not for TCP... but that might
>>> be quite confusing. Not sure what the best solution is...
>> What about keeping the current mode as the default, and making it
>> configurable by adding TCPSelectorHandler.is/setConnectOnSeparateThread()?
>> Because I'm not sure it will be any clearer if we have
>> different default behaviors for TCP and SSL.
>>
>> What do you think?
>
> OK, makes sense. What about UDPSelectorHandler? I would think we need to
> promote the method to the ConnectorHandler so both implementations can
> turn it on/off.
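
Something along these lines is what I had in mind, as a rough sketch only;
the name and whether it lives on TCPSelectorHandler or gets promoted to
ConnectorHandler are still open:

// Rough sketch only: a flag that controls whether CallbackHandler.onConnect
// is dispatched to a worker thread or run directly on the selector thread.
// Name and placement are a proposal, not an existing Grizzly API.
public abstract class ConfigurableConnectHandler {

    // true  = current behavior: onConnect runs on a worker thread
    // false = proposed behavior: onConnect runs on the selector thread
    private volatile boolean connectOnSeparateThread = true;

    public boolean isConnectOnSeparateThread() {
        return connectOnSeparateThread;
    }

    public void setConnectOnSeparateThread(boolean connectOnSeparateThread) {
        this.connectOnSeparateThread = connectOnSeparateThread;
    }
}

Defaulting it to true keeps today's behavior, so nobody is surprised by an
upgrade.
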
With UDPSelectorHandler we have a different situation. Currently our UDP
implementation tries to behave the same, or almost the same, way TCP does.
But for UDP we don't actually need all that connect machinery: a UDP
channel is connected as soon as DatagramChannel.connect(..) is called,
so for UDP there is no need for CallbackHandler.onConnect(...) at all.
On the other hand, after executing UDPConnectorHandler.connect(...) we
cannot be sure the datagram channel is registered on the selector; we can
only schedule a registration task, which will be executed on the selector
thread. But do we really need to be notified when the datagram channel
becomes registered with the selector? I think for most scenarios this
info is not required.
If we remove the TCP-like connect behavior from the UDP implementation,
we can improve performance: we avoid redundant context switches and can
return a connected UDPConnectorHandler instantly.
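
Roughly what I mean, in plain NIO terms (just an illustration with an
arbitrary host/port, not actual Grizzly code):

import java.net.InetSocketAddress;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Illustration of the point above: a DatagramChannel is usable as soon as
// connect(...) returns, so there is no OP_CONNECT to wait for and the
// connector could hand back a connected handler immediately. Registering
// with the selector is only needed if we want read events.
public class UdpConnectSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();

        DatagramChannel channel = DatagramChannel.open();
        channel.configureBlocking(false);
        channel.connect(new InetSocketAddress("localhost", 5060)); // completes immediately

        // No OP_CONNECT exists for datagram channels; register for reads only.
        channel.register(selector, SelectionKey.OP_READ);

        channel.close();
        selector.close();
    }
}
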
What do you think?
WBR,
Alexey.
>
> A+
>
> -- jeanfrancois
>
>>
>> WBR,
>> Alexey.
>>>
>>>
>>>
>>>>
>>>> In our javadoc we should warn about this *new default* behavior :))
>>>>
>>>> BTW, we could try to implement this in a more generic way (attached)? :)
>>>
>>> LOL :-) Yes, I prefer your patch :-) What about exposing the
>>> runInSeparateThread property so it can be changed directly when
>>> using the SelectorHandler? The downside is that it will require bumping
>>> the release version, as we are introducing a new API....
>>>
>>> A+
>>>
>>> -- Jeanfrancois
>>>
>>>
>>>
>>>>
>>>> Thanks.
>>>>
>>>> WBR,
>>>> Alexey.
>>>>
>>>> Jeanfrancois Arcand wrote:
>>>>> Hi,
>>>>>
>>>>> we recently faced a bug in Sailfin where performance significantly
>>>>> decreased (cut & paste from Scott's update on the bug):
>>>>>
>>>>>> This is not so much lock contention as it is a form of deadlock
>>>>>> that happens
>>>>>> under load. What happens is that all the SipContainer threads
>>>>>> attempt to write
>>>>>> to a socket (the sipp uas process); one of them will open the
>>>>>> socket while the
>>>>>> rest of them block waiting to enter the same code. The thread
>>>>>> that opens a
>>>>>> socket then waits for the socket connection to be completed.
>>>>>>
>>>>>> The completion of the socket connection is signaled by the
>>>>>> TCPSelectorThread,
>>>>>> which creates a callback task and executes it on the SipContainer
>>>>>> thread pool.
>>>>>> Since all of those threads are busy (blocked waiting to enter the
>>>>>> socket-writing
>>>>>> code), the callback isn't ever processed, and the stack trace
>>>>>> below results.
>>>>>> Eventually, the thread waiting for the callback will time out,
>>>>>> and things will
>>>>>> run for a few seconds until the same state is reached (and
>>>>>> eventually, there
>>>>>> will be sockets created and things will run okay).
>>>>>>
>>>>>> A possible fix for Sailfin is to override the TCPSelectorThread
>>>>>> and execute the onConnect callback directly rather than putting
>>>>>> that on another thread.
>>>>>
>>>>> I would like to propose that we change the current behavior and
>>>>> avoid executing the OP_CONNECT callback on its own thread, instead
>>>>> executing it on the SelectorHandler thread. This is what we are
>>>>> already doing for OP_ACCEPT, based on some performance measurements
>>>>> we did for Grizzly 1.0, and based on Scott's investigation I would
>>>>> like to change the default behavior:
>>>>>
>>>>>> Index: TCPSelectorHandler.java
>>>>>> ===================================================================
>>>>>> --- TCPSelectorHandler.java (revision 712)
>>>>>> +++ TCPSelectorHandler.java (working copy)
>>>>>> @@ -697,8 +697,13 @@
>>>>>>          try {
>>>>>>              CallbackHandlerContextTask task = CallbackHandlerContextTask.poll();
>>>>>>              task.setCallBackHandler(callbackHandler);
>>>>>> -            context.execute(task);
>>>>>> -        } catch (PipelineFullException ex){
>>>>>> +            if (context.getCurrentOpType() != Context.OpType.OP_CONNECT){
>>>>>> +                context.execute(task);
>>>>>> +            } else {
>>>>>> +                task.setContext(context);
>>>>>> +                task.call();
>>>>>> +            }
>>>>>> +        } catch (Exception ex){
>>>>>>              throw new IOException(ex.getMessage());
>>>>>>          }
>>>>>>      }
>>>>>
>>>>>
>>>>> If someone relies on the current behavior (spawning a thread), then
>>>>> the TCPSelectorHandler can always be extended to act like the
>>>>> previous version.
>>>>>
>>>>> I'm interested in learning from the Corba team how OP_CONNECT
>>>>> was handled (if it was used at all).
>>>>>
>>>>> A+
>>>>>
>>>>> -- Jeanfrancois
>>>>>