users@websocket-spec.java.net

[jsr356-users] [jsr356-experts] Re: Re: Clarification required on sending messages concurrently

From: Pavel Bucek <pavel.bucek@oracle.com>
Date: Tue, 13 Jan 2015 13:51:55 +0100

Mark,

can you please include references to the spec itself when making
statements like "An ISE should be thrown"?

All I see is javadoc of RemoteEndpoint.Basic:

""
     * <p>If the websocket connection underlying this RemoteEndpoint is
busy sending a message when a call is made
     * to send another one, for example if two threads attempt to call a
send method
     * concurrently, or if a developer attempts to send a new message
while in the
     * middle of sending an existing one, the send method called while
     * the connection is already busy may throw an {_at_link
java.lang.IllegalStateException}.

""

which states that an IllegalStateException *MAY* be thrown. Tomcat decided
to throw an exception, and another implementation can decide to handle
that case differently (blocking the current thread until the message is
completed/sent, ...); both approaches seem to be spec-compliant (from
what I see).
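
As a side note, and purely as a sketch (the method and variable names
below are made up for illustration): an application that wants the same
behaviour on every implementation can serialize its sends itself, e.g.
by waiting on the Future returned by the async send before issuing the
next one:

    // Sketch only: never have two sends in flight at once, regardless of
    // whether the container blocks or throws an IllegalStateException.
    private void sendSafely(Session session, String first, String second)
            throws Exception {
        Future<Void> pending = session.getAsyncRemote().sendText(first);
        pending.get();                              // wait until 'first' is fully written
        session.getBasicRemote().sendText(second);  // connection is no longer busy
    }

(This assumes the usual javax.websocket.Session and
java.util.concurrent.Future imports.)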

Pavel


On 13/01/15 13:12, Mark Thomas wrote:
> On 12/01/2015 18:53, Pavel Bucek wrote:
>> Hi Mark,
>>
>> seems like there might be some misinterpretation in the original thread
>> - I read the reply from Danny as "container has to send the whole
>> message before sending another one"; it does not seem to concern async
>> messages, because the container already has the message and it will be sent
>> (or @OnError invoked).
>
> The thread looks clear to me (with some edits to get to the key bits).
>
> Me: If an application sends a message using a Future or a SendHandler,
> is it permitted to send a second message before the first has
> completed?
>
> Danny: I think one message needs to finish before you try sending the
> next.
>
> Me: My original reason for suggesting this is no longer valid. That
> said, I'm happy keeping this restriction in place so that the
> application must wait for the previous message to complete before
> sending the next.
>
> Bill: +1
>
>
> So, this discussion:
> a) applied to async messages
> b) applied to the application sending the message
>
>> Your sample can be reduced to:
>>
>> session.getAsyncRemote().sendText(message);
>> session.getBasicRemote().sendText(message);
>>
>>
>> and I don't think this should end up in an exception. There must be some
>> kind of queue implemented in the container and the second call will
>> wait for the first message to be sent, but that's all.
>
> Yes, there is a queue. Yes, the above sequence could be supported.
> However, in the discussion above the EG decided that it should not be
> supported.
>
> The problem with allowing applications to send another message before
> the previous one has finished is that there is no mechanism for the
> container to push back on the application if it is sending messages too
> quickly. Requiring the previous message to finish before the next is
> sent provides (in a simple way) a mechanism for the container to push back.
>
>> There is no API for users to get the state of the current Session
>> instance, like "SENDING" or "READY_TO_SEND". When shared and used from
>> multiple threads, it would not be easily possible to reliably send a
>> message if the sending state of individual messages could cause an exception.
>
> Yes, if an application wants to send from multiple threads, the
> application has to deal with the concurrency issues.
>
>> The other thing is the original use case. I believe we should signal
>> something when a user tries to send a message in the middle of sending
>> another partial message. I will have to investigate this a little bit
>> more, but I can say that the RI does not do this at the moment.
>
> Given the above discussion, this is already handled. An ISE should be
> thrown (and Tomcat does this).
>
>> Looking at this issue from a broader perspective, it could be nice to
>> have an alternative to this, something like a "force send", since we
>> might get to a state where we want to send some message, no matter what
>> is happening in other threads. Just for the sake of completeness, here
>> is the use case:
>>
>> session.getBasicRemote().sendText(message, false); // partial
>> session.getBasicRemote().sendText(message);
>>
>> (Currently, the RI sends the second (whole) message as part of the
>> previous one; or not really, it is just interpreted that way.)
>
> Tomcat throws an ISE if an application tries that.
>
>> Following is even more evil:
>>
>> session.getBasicRemote().sendText(m, false); // partial text
>> session.getBasicRemote().sendBinary(ByteBuffer.wrap(m.getBytes()));
>>
>> (This is interpreted in the same way as the previous case; if you think
>> of the frame content and order, it makes sense that the other side
>> interprets it as one text message. The problem is that we probably need
>> to detect this on the sender side and throw an exception.)
>
> Again, Tomcat throws an ISE in this case.
>
>
> As I stated in the original thread, I'd be happy to lift this
> restriction (in a new version of the spec) but we'd need to think about
> how the container could prevent the application from placing too much
> data in the queue.
>
> Mark
>
>
>>
>>
>> Thanks and regards,
>> Pavel
>>
>>
>> On 12/01/15 11:21, Mark Thomas wrote:
>>> Looking back through the EG discussions, we discussed concurrency and
>>> agreed that one message had to complete before the next was sent [1]. No
>>> distinction was made between synchronous and asynchronous messages.
>>>
>>> The discussion in [1] happened before the split into
>>> RemoteEndpoint.[Basic|Async], and after the split the restriction
>>> described in [1] only appears in the Javadoc for RemoteEndpoint.Basic.
>>>
>>> My question is this:
>>>
>>> Does the restriction agreed in [1] apply to async messages or not?
>>>
>>> I believe that the restriction does apply. Without it, there is no
>>> mechanism for the container to signal to the application to stop sending
>>> messages if the client isn't reading them fast enough.
>>>
>>> By way of example, consider the following:
>>> @OnMessage
>>> public String onMessage(String message, Session session) {
>>>     received.add(message);
>>>     session.getAsyncRemote().sendText(MESSAGE_ONE);
>>>     return message;
>>> }
>>>
>>>
>>> My expectation is that, unless the async message completes very quickly
>>> (unlikely), an ISE will be thrown because the async message is still
>>> in progress when the sync message send is triggered by the return value.
>>>
>>> That said, if batching is enabled then it is much more likely that the
>>> async message would complete instantly and that the sync message would
>>> then be sent.
>>>
>>> For bonus points, assuming that the batch buffer is big enough for both
>>> the async and sync message, is the use of the return value to send the
>>> message a trigger to flush the batch? I can see no reason why this
>>> should be the case but the question popped up on a Tomcat list so I
>>> wanted to check here.
>>>
>>> Mark
>>>
>>>
>>> [1]
>>> https://java.net/projects/websocket-spec/lists/jsr356-experts/archive/2013-02/message/48
>>>
>>>
>