jsr356-experts@websocket-spec.java.net

[jsr356-experts] Re: [jsr356-users] Re: Clarification required on sending messages concurrently

From: Bill Wigger <wigger_at_us.ibm.com>
Date: Tue, 13 Jan 2015 16:46:03 -0500

Concerning this code fragment:
session.getAsyncRemote().sendText(message);
session.getBasicRemote().sendText(message);

From an API perspective, I don't think you can rely on this working across
all implementations.

getAsyncRemote().sendText(message) javadoc says:
"This method returns before the message is transmitted. Developers use the
returned Future object to track progress of the transmission."

So the API implementation has not necessarily sent the message out on the
wire (or completed it from a batching standpoint, if batching is in use),
and due to asynchronous threading it may not do so before getBasicRemote
().sendText(message) is invoked on the current thread, at which point an
IllegalStateException or IOException would be thrown.

The intent of the API is to track the async sendText with the Future
object; once it signals completion, more messages can be sent.
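That Future-tracking pattern can be sketched with a standalone model (plain java.util.concurrent, not a real container; the single writer thread stands in for the socket, and in real code `sendAsync` would be `session.getAsyncRemote().sendText(msg)` and `sendBasic` would be `session.getBasicRemote().sendText(msg)`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Standalone model (not a real container): the single writer thread stands
// in for the socket; wire holds the messages that have "left the box".
public class SerializedSends {
    private final ExecutorService writer = Executors.newSingleThreadExecutor();
    private final List<String> wire = new ArrayList<>();

    Future<Void> sendAsync(String msg) {       // ~Async.sendText
        return writer.submit(() -> { wire.add(msg); return null; });
    }

    void sendBasic(String msg) {               // ~Basic.sendText
        wire.add(msg);
    }

    List<String> wire() { return wire; }

    void shutdown() { writer.shutdown(); }

    public static void main(String[] args) throws Exception {
        SerializedSends s = new SerializedSends();
        Future<Void> pending = s.sendAsync("first");
        pending.get();          // wait for the async send to complete ...
        s.sendBasic("second");  // ... only then is the next send safe
        s.shutdown();
        System.out.println(s.wire());
    }
}
```

Blocking on `pending.get()` is what guarantees the basic send cannot overlap the async one; dropping that call recreates the race described above.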

Bill Wigger




From: Joakim Erdfelt <joakim_at_intalio.com>
To: jsr356-experts_at_websocket-spec.java.net
Date: 01/13/2015 01:30 PM
Subject: [jsr356-experts] Re: [jsr356-users] Re: Clarification required
            on sending messages concurrently



Some information from the Jetty camp ...

  Your sample can be reduced to:

  session.getAsyncRemote().sendText(message);
  session.getBasicRemote().sendText(message);


  and I don't think this should end up in an exception. There must be some
  kind of queue implemented in the container, and the second call will wait
  for the first message to be sent, but that's all. Consider the Session
  class javadoc:

Jetty does not throw an exception in this case.
The two messages are queued, and each is notified in its "has been sent"
case.

"has been sent" has a different meaning depending on RemoteEndpoint.Basic
vs RemoteEndpoint.Async and on batching behavior.

If batching is not enabled, then "has been sent" means the message (or
partial message) has left the Java network stack.
If batching *is* enabled, then "has been sent" for RemoteEndpoint.Basic
means the message entered the batch; RemoteEndpoint.Async is unchanged and
still notifies the Future<Void> or SendHandler when the message has left
the Java network stack.
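Those two meanings of "has been sent" can be modeled in a few lines (a standalone toy, not Jetty or RI code; the method names mirror RemoteEndpoint.Basic but are stand-ins):

```java
import java.util.ArrayList;
import java.util.List;

// Standalone toy (not Jetty/RI code): with batching on, sendText returns
// once the message enters the batch buffer; nothing reaches the "network"
// until the batch is flushed.
public class BatchModel {
    private final List<String> batch = new ArrayList<>();
    private final List<String> network = new ArrayList<>();
    private boolean batching;

    void setBatchingAllowed(boolean allowed) {
        if (!allowed) flushBatch();    // disabling batching implies a flush
        batching = allowed;
    }

    void sendText(String msg) {        // stands in for Basic.sendText
        if (batching) batch.add(msg);  // "has been sent" == entered the batch
        else network.add(msg);         // "has been sent" == left the stack
    }

    void flushBatch() {
        network.addAll(batch);
        batch.clear();
    }

    List<String> network() { return network; }
}
```

With batching enabled, `network()` stays empty until `flushBatch()` is called; with it disabled, every send lands on the "network" immediately.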



  * Session objects may be called by multiple threads. Implementations must
  * ensure the integrity of the mutable properties of the session under
  such circumstances.


  There is no API for users to get the state of the current Session
  instance, like "SENDING" or "READY_TO_SEND". When shared and used from
  multiple threads, it would not be easily possible to reliably send a
  message if the sending state of individual messages could cause an
  exception.

This is the Servlet API approach.
That approach to write backpressure is based on listener + "mother may I" +
write.
Don't be too quick to embrace this approach for write backpressure, as it
was written with streams in mind.  WebSocket being message-frame based
makes this approach awkward.
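For reference, the "listener + mother may I + write" shape looks roughly like this (a standalone toy, not the javax.servlet API; the listener callback side is elided and `credits` is an invented stand-in for socket writability):

```java
import java.util.ArrayList;
import java.util.List;

// Standalone toy of the Servlet-style write-backpressure pattern: the
// application must ask "may I write?" before every write, and a write
// while not ready is an error. The callback that re-arms writing (the
// WriteListener role) is elided here.
public class MayIWriter {
    private int credits;                       // writes the peer will accept
    private final List<String> sent = new ArrayList<>();

    MayIWriter(int credits) { this.credits = credits; }

    boolean isReady() { return credits > 0; }  // the "mother may I" step

    void write(String frame) {
        if (!isReady())
            throw new IllegalStateException("write while not ready");
        credits--;
        sent.add(frame);
    }

    List<String> sent() { return sent; }
}
```

The per-write "may I" handshake is natural for a byte stream but, as argued above, is an awkward fit when the unit of work is a whole message frame.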



  The other thing is the original use case. I believe we should signal
  something when the user tries to send a message in the middle of sending
  another partial message. I will have to investigate a little bit more
  about this, but I can say that the RI does not do this at the moment.

  Looking at this issue from a broader perspective, it would be nice to
  have an alternative to this, something like a "force send", since we
  might get into a state where we want to send some message no matter what
  is happening in other threads. Just for the sake of completeness, here is
  the use case:

      session.getBasicRemote().sendText(message, false); // partial
      session.getBasicRemote().sendText(message);

  (Currently, the RI sends the second (whole) message as part of the
  previous one; or rather, it is just interpreted that way on the
  receiving side..).

Jetty fails this with an ISE (mixing of partial and whole behavior)
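A sketch of the rule Jetty enforces here (a toy state machine, not Jetty source): once a partial text message is started, a whole-message send is illegal until a fragment with `last == true` closes it:

```java
// Toy state machine (not Jetty source) for the partial/whole rule: a
// whole-message send while a partial message is in progress is an
// IllegalStateException.
public class PartialGuard {
    private boolean partialInProgress;

    void sendText(String msg, boolean last) {  // partial-message variant
        partialInProgress = !last;
    }

    void sendText(String msg) {                // whole-message variant
        if (partialInProgress)
            throw new IllegalStateException(
                "whole message while a partial message is in progress");
    }
}
```

Sending the final fragment first (`sendText(msg, true)`) returns the guard to a state where whole-message sends are legal again.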



  Following is even more evil:

      session.getBasicRemote().sendText(m, false); // partial text
      session.getBasicRemote().sendBinary(ByteBuffer.wrap(m.getBytes()));

  (this is interpreted in the same way as the previous case; if you think
  of the frame content and order, it makes sense that the other side
  interprets it as one text message. The problem is that we probably need
  to detect this on the sender side and throw an exception..)


Jetty fails this with an ISE (cannot start new type binary when active
message is text)
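A toy guard (not Jetty source) for this second rule: an in-progress partial text message pins the frame type, so starting a binary message raises an ISE:

```java
import java.nio.ByteBuffer;

// Toy guard (not Jetty source): an in-progress partial text message pins
// the frame type; starting a binary message before the final text fragment
// is an IllegalStateException.
public class FrameTypeGuard {
    private enum Active { NONE, TEXT }
    private Active active = Active.NONE;

    void sendText(String msg, boolean last) {  // partial text fragment
        active = last ? Active.NONE : Active.TEXT;
    }

    void sendBinary(ByteBuffer data) {         // whole binary message
        if (active == Active.TEXT)
            throw new IllegalStateException(
                "cannot start new type binary when active message is text");
    }
}
```

Failing on the sender side here is kinder than letting the peer see binary bytes spliced into an unfinished text message.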





  On 12/01/15 11:21, Mark Thomas wrote:
   Looking back through the EG discussions, we discussed concurrency and
   agreed that one message had to complete before the next was sent [1]. No
   distinction was made between synchronous and asynchronous messages.

   The discussion in [1] happened before the split between
   RemoteEndpoint.[Basic|Async], and after the split the restriction
   described in [1] only appears in the Javadoc for RemoteEndpoint.Basic.

   My question is this:

   Does the restriction agreed in [1] apply to async messages or not?

   I believe that the restriction does apply. Without it, there is no
   mechanism for the container to signal to the application to stop sending
   messages if the client isn't reading them fast enough.

   By way of example, consider the following:
   @OnMessage
   public String onMessage(String message, Session session) {
        received.add(message);
        session.getAsyncRemote().sendText(MESSAGE_ONE);
        return message;
   }


   My expectation is that, unless the async message completes very quickly
   (unlikely), an ISE will be thrown because the async message is still
   in progress when the sync message send is triggered by the return value.

   That said, if batching is enabled then it is much more likely that the
   async message would complete instantly and that the sync message would
   then be sent.


Batching with RemoteEndpoint.Basic is an abomination :-)

Consider this ...

    RemoteEndpoint.Basic basic = session.getBasicRemote();
    basic.setBatchingAllowed(true);
    basic.sendText("Author:Mark Twain");
    basic.sendText(loadBookAsString("A Connecticut Yankee in King Arthurs Court.txt"));
    basic.sendText("Published:1889");
    basic.flushBatch();

In the above example, if there were an error during the send of the book
text, there would just be an IOException indicating a write failure, with
no indication that it was the book that failed, or that the "Published"
message was never sent.

You could expect an onError or even an onClose, but you would still have
no knowledge of what worked and what didn't.

Aside: The javadoc for flushBatch() should also indicate that it is a
blocking call.

Batching really should have been relegated to RemoteEndpoint.Async only.
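The attribution problem above is exactly what per-message completion callbacks solve, which is the property RemoteEndpoint.Async's SendHandler provides and batched RemoteEndpoint.Basic does not. A standalone model (not container code; `flush(failAt)` injects a failure at one index purely for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Standalone model (not container code): each queued message carries its
// own completion callback, so a failure mid-batch is attributed to the
// exact message that failed and to the messages never sent after it.
public class AttributedBatch {
    private final List<String> msgs = new ArrayList<>();
    private final List<Consumer<String>> handlers = new ArrayList<>();

    void sendText(String msg, Consumer<String> onResult) {
        msgs.add(msg);
        handlers.add(onResult);
    }

    // failAt injects a write failure at that index for illustration (-1: none)
    void flush(int failAt) {
        for (int i = 0; i < msgs.size(); i++) {
            String status = (failAt < 0 || i < failAt) ? "OK: "
                          : (i == failAt) ? "FAILED: "
                          : "NOT SENT: ";
            handlers.get(i).accept(status + msgs.get(i));
        }
        msgs.clear();
        handlers.clear();
    }
}
```

Run against the book example above, a failure on the book text would tell the application precisely that the author line was sent, the book failed, and the published line never went out.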



   For bonus points, assuming that the batch buffer is big enough for both
   the async and sync messages, is the use of the return value to send the
   message a trigger to flush the batch? I can see no reason why this
   should be the case, but the question popped up on a Tomcat list so I
   wanted to check here.

In Jetty, the return value is still added to the queue; flushing the queue
is a separate step.

If batching is allowed, the queue is flushed when it gets full, or when
flushBatch() is called.
If batching is not allowed, the queue is flushed whenever the remote
endpoint's NIO layer reports it is capable of being written to, resulting
in an NIO gathered write of the ByteBuffers representing the
post-extension frames.

There is a third case: with or without batching, if async sends come in
quickly and the queue grows too big (determined by a combination of frame
count and frame data size), then subsequent async calls are rejected with
an ISE("Too many async sends, queue full").
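That queue-bounding behavior can be sketched standalone (a toy, not Jetty source; the bound here counts frames only, whereas Jetty's, per the above, also weighs frame data size):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the behavior described above (not Jetty source): async sends
// enqueue frames; past a bound, further sends fail fast with an
// IllegalStateException instead of buffering without limit.
public class BoundedSendQueue {
    private final int maxFrames;
    private final Deque<String> queue = new ArrayDeque<>();

    BoundedSendQueue(int maxFrames) { this.maxFrames = maxFrames; }

    void sendAsync(String msg) {
        if (queue.size() >= maxFrames)
            throw new IllegalStateException("Too many async sends, queue full");
        queue.add(msg);
    }

    String pollFlushed() { return queue.poll(); }  // the flusher drains here

    int depth() { return queue.size(); }
}
```

Failing fast like this turns unbounded memory growth under a slow reader into an error the application can react to.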




   Mark


   [1]
   https://java.net/projects/websocket-spec/lists/jsr356-experts/archive/2013-02/message/48






