users@websocket-spec.java.net

[jsr356-users] [jsr356-experts] Re: Re: Clarification required on sending messages concurrently

From: Mark Thomas <mark_at_homeinbox.net>
Date: Tue, 03 Feb 2015 15:16:33 +0000

On 13/01/2015 12:51, Pavel Bucek wrote:
>
> Mark,
>
> can you please include references to the spec itself when making
> statements like "An ISE should be thrown"?

The reference is the text you quoted below.

> All I see is javadoc of RemoteEndpoint.Basic:
>
> ""
> * <p>If the websocket connection underlying this RemoteEndpoint is
> busy sending a message when a call is made
> * to send another one, for example if two threads attempt to call a
> send method
> * concurrently, or if a developer attempts to send a new message
> while in the
> * middle of sending an existing one, the send method called while
> * the connection is already busy may throw an {@link
> java.lang.IllegalStateException}.
>
> ""
>
> which states that IllegalStateException *MAY* be thrown. Tomcat decided
> to throw an exception and some other implementation can decide to
> handle that case differently (blocking the current thread until the
> message is completed/sent, ...); both approaches seem to be
> spec-compliant (from what I see).

Part of the problem is using language like may/should rather than must/will.

How is an application developer meant to write something portable when
the two implementations can have such significantly differing behaviours
and still be specification compliant?
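
About the only portable option I can see today is to code for both
outcomes, e.g. (a sketch only; what to do on the exception path is
entirely up to the application):

try {
    // A blocking implementation simply returns once the message has
    // been written; a strict one (e.g. Tomcat) throws if another send
    // is still in progress.
    session.getBasicRemote().sendText(message);
} catch (IllegalStateException ise) {
    // Portable code has to be prepared for this even though it may
    // never see it on a given container.
}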

To re-iterate my current understanding:

- The EG agreed that any attempt to send messages concurrently must
  trigger an ISE.
- I originally proposed/supported this view because Tomcat's
  implementation at the time essentially required this. Shortly
  afterwards, I re-worked Tomcat's implementation so Tomcat no longer
  required this language to be in the spec. As I stated at the time, I
  had no objection to lifting the restriction but was happy leaving it
  in place (mainly so we didn't have to address the various issues
  lifting it would create).
- Then we split up the interfaces for Sync and Async sends.
- What was originally very clear language in the Javadoc was no longer
  clear.

What I really want at this point is clarity in the spec. My starting
point is what the EG originally agreed but I am by no means fixed on
that behaviour.

As I have stated elsewhere in this thread, I'd be happy to see this
restriction lifted but to do that we need to answer the question "How do
we handle an application sending messages faster than a client is
reading them?". Blocking on the (x+1)th message until the xth message
has been sent impacts scalability. My own thinking is around retaining
the one message at a time limit and adding a "boolean
readyForNextMessage()" method so apps can easily cycle through
RemoteEndpoints and see which ones can be written to (rather than having
to track this themselves via the SendHandlers).
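
Roughly what I have in mind, as a sketch (readyForNextMessage() is
hypothetical; nothing like it exists in the current API):

for (Session s : sessions) {
    RemoteEndpoint.Async remote = s.getAsyncRemote();
    if (remote.readyForNextMessage()) { // hypothetical method
        remote.sendText(update);
    }
    // Busy sessions are simply skipped and retried on the next cycle.
}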

Mark


> On 13/01/15 13:12, Mark Thomas wrote:
>> On 12/01/2015 18:53, Pavel Bucek wrote:
>>> Hi Mark,
>>>
>>> seems like there might be some misinterpretation in the original
>>> thread - I read the reply from Danny as "container has to send the
>>> whole message before sending another one"; it does not seem to
>>> concern async messages, because the container already has the message
>>> and it will be sent (or @OnError will be invoked).
>>
>> The thread looks clear to me (with some edits to get to the key bits).
>>
>> Me: If an application sends a message using a Future or a SendHandler,
>> is it permitted to send a second message before the first has
>> completed?
>>
>> Danny: I think one message needs to finish before you try sending the
>> next.
>>
>> Me: My original reason for suggesting this is no longer valid. That
>> said, I'm happy keeping this restriction in place so that the
>> application must wait for the previous message to complete before
>> sending the next.
>>
>> Bill: +1
>>
>>
>> So, this discussion:
>> a) applied to async messages
>> b) applied to the application sending the message
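
In other words, under what the EG agreed, an application completes one
send before starting the next, e.g. (sketch; exception handling
omitted):

Future<Void> first = session.getAsyncRemote().sendText(MESSAGE_ONE);
first.get(); // blocks until the first message has been sent
session.getAsyncRemote().sendText(MESSAGE_TWO); // now permitted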
>>
>>> Your sample can be reduced to:
>>>
>>> session.getAsyncRemote().sendText(message);
>>> session.getBasicRemote().sendText(message);
>>>
>>>
>>> and I don't think this should end up in an exception. There must be
>>> some kind of queue implemented in the container and the second call
>>> will wait for the first message to be sent, but that's all.
>>
>> Yes, there is a queue. Yes, the above sequence could be supported.
>> However, in the discussion above the EG decided that it should not be
>> supported.
>>
>> The problem with allowing applications to send another message before
>> the previous one has finished is that there is no mechanism for the
>> container to push back on the application if it is sending messages too
>> quickly. Requiring the previous message to finish before the next is
>> sent provides (in a simple way) a mechanism for the container to push
>> back.
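
Concretely, the completion callback is where the application sees that
push back. A one-permit gate is enough, e.g. (sketch only; the Semaphore
is illustrative):

private final Semaphore inFlight = new Semaphore(1);

void send(final Session session, String message)
        throws InterruptedException {
    inFlight.acquire(); // waits while a previous message is in flight
    session.getAsyncRemote().sendText(message, new SendHandler() {
        @Override
        public void onResult(SendResult result) {
            inFlight.release(); // send complete; the next may proceed
        }
    });
}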
>>
>>> There is no API for users to get the state of the current Session
>>> instance, like "SENDING" or "READY_TO_SEND". When a session is shared
>>> and used from multiple threads, it would not be easy to reliably send
>>> a message if the sending state of individual messages could cause an
>>> exception.
>>
>> Yes, if an application wants to send from multiple threads, the
>> application has to deal with the concurrency issues.
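
The simplest approach would be to serialise all sends for a session on
a single lock, e.g. (sketch; relies on Basic sends blocking until
complete):

private final Object sendLock = new Object();

void sendFromAnyThread(Session session, String message)
        throws IOException {
    synchronized (sendLock) {
        // On exit no send is in progress, so another thread may enter.
        session.getBasicRemote().sendText(message);
    }
}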
>>
>>> The other thing is the original use case. I believe we should signal
>>> something when a user tries to send a message in the middle of sending
>>> another partial message. I will have to investigate a little bit more
>>> about this, but I can say that the RI does not do this at the moment.
>>
>> Given the above discussion, this is already handled. An ISE should be
>> thrown (and Tomcat does this).
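
That is, a partial message has to be finished before anything else is
sent (sketch):

RemoteEndpoint.Basic remote = session.getBasicRemote();
remote.sendText(part1, false); // starts a partial text message
remote.sendText(part2, true);  // isLast = true completes it
remote.sendText(whole);        // a new message is now legal
// Moving the whole-message send between the two partial sends is the
// case that should trigger the ISE.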
>>
>>> Looking at this issue from a broader perspective, it would be nice to
>>> have an alternative to this, something like a "force send", since we
>>> might get into a state where we want to send some message no matter
>>> what is happening in other threads. Just for the sake of completeness,
>>> here is the use case:
>>>
>>> session.getBasicRemote().sendText(message, false); // partial
>>> session.getBasicRemote().sendText(message);
>>>
>>> (Currently, the RI sends the second (whole) message as part of the
>>> previous one; or rather, the other side just interprets it that way.)
>>
>> Tomcat throws an ISE if an application tries that.
>>
>>> The following is even more evil:
>>>
>>> session.getBasicRemote().sendText(m, false); // partial text
>>> session.getBasicRemote().sendBinary(ByteBuffer.wrap(m.getBytes()));
>>>
>>> (This is interpreted in the same way as the previous example; if you
>>> think of the frame content and order, it makes sense that the other
>>> side interprets it as one text message. The problem is that we
>>> probably need to detect this on the sender side and throw an
>>> exception.)
>>
>> Again, Tomcat throws an ISE in this case.
>>
>>
>> As I stated in the original thread, I'd be happy to lift this
>> restriction (in a new version of the spec) but we'd need to think about
>> how the container could prevent the application from placing too much
>> data in the queue.
>>
>> Mark
>>
>>
>>>
>>>
>>> Thanks and regards,
>>> Pavel
>>>
>>>
>>> On 12/01/15 11:21, Mark Thomas wrote:
>>>> Looking back through the EG discussions, we discussed concurrency and
>>>> agreed that one message had to complete before the next was sent
>>>> [1]. No
>>>> distinction was made between synchronous and asynchronous messages.
>>>>
>>>> The discussion in [1] happened before the split between
>>>> RemoteEndpoint.[Basic|Async]; after the split, the restriction
>>>> described in [1] only appears in the Javadoc for RemoteEndpoint.Basic.
>>>>
>>>> My question is this:
>>>>
>>>> Does the restriction agreed in [1] apply to async messages or not?
>>>>
>>>> I believe that the restriction does apply. Without it, there is no
>>>> mechanism for the container to signal to the application to stop
>>>> sending
>>>> messages if the client isn't reading them fast enough.
>>>>
>>>> By way of example, consider the following:
>>>> @OnMessage
>>>> public String onMessage(String message, Session session) {
>>>>     received.add(message);
>>>>     session.getAsyncRemote().sendText(MESSAGE_ONE);
>>>>     return message;
>>>> }
>>>>
>>>>
>>>> My expectation is that, unless the async message completes very
>>>> quickly (unlikely), an ISE will be thrown because the async message is
>>>> still in progress when the sync message send is triggered by the
>>>> return value.
>>>>
>>>> That said, if batching is enabled then it is much more likely that the
>>>> async message would complete instantly and that the sync message would
>>>> then be sent.
>>>>
>>>> For bonus points, assuming that the batch buffer is big enough for both
>>>> the async and sync message, is the use of the return value to send the
>>>> message a trigger to flush the batch? I can see no reason why this
>>>> should be the case but the question popped up on a Tomcat list so I
>>>> wanted to check here.
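
For reference, the batching controls in question (usage sketch; whether
anything other than flushBatch() flushes the batch is exactly the open
question):

RemoteEndpoint.Async remote = session.getAsyncRemote();
remote.setBatchingAllowed(true); // subsequent sends may be buffered
remote.sendText(MESSAGE_ONE);    // completes instantly if it fits
session.getBasicRemote().sendText(MESSAGE_TWO);
remote.flushBatch();             // the explicit flush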
>>>>
>>>> Mark
>>>>
>>>>
>>>> [1]
>>>> https://java.net/projects/websocket-spec/lists/jsr356-experts/archive/2013-02/message/48