jsr356-experts@websocket-spec.java.net

[jsr356-experts] Re: RemoteEndpoint setAutoFlush() and flush()

From: Danny Coward <danny.coward_at_oracle.com>
Date: Tue, 11 Dec 2012 10:25:37 -0800

Hi Scott,

OK, I think I understand. So the idea is to allow implementations to
batch outgoing messages, which gives a big performance gain for
applications that send a lot of messages in a short period, and to give
developers an explicit way to take advantage of that batching when the
implementation supports it.

I think that sort of approach already fits under the async model we
have: the async send operations allow implementations to make their own
choice about when to send the message after the async send has been
called, i.e.:

sendString/sendBytes - send the message now (no batching)
sendStringByFuture() - send the message when the container decides to
(possibly batching if it chooses to)

And I think that, with the flush() method, we would already allow
containers that choose to batch to do so under the existing model,
without the extra setBatching()/setAutoFlush() idea?
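
To sketch the distinction, using the draft names we've been discussing
(nothing here is meant as final API - the Session accessor and the
Future return type are just illustrative):

    // Illustrative only: draft names from this thread, not final API.
    RemoteEndpoint remote = session.getRemote();  // 'session' is the peer's Session

    // Synchronous send: the message goes out now, no batching.
    remote.sendString("update 1");

    // Async send: the container decides when to write, and could choose
    // to batch this with other pending messages.
    Future<?> pending = remote.sendStringByFuture("update 2");

    // A flush() would then let the app force out anything the container
    // has chosen to hold back.
    remote.flush();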


- Danny



On 11/29/12 12:11 PM, Scott Ferguson wrote:
> On 11/29/12 11:34 AM, Danny Coward wrote:
>> My apologies Scott, I must have missed your original request - I've
>> logged this as issue 63.
>
> Thanks.
>
>>
>> So auto flush true would require that the implementation never keep
>> anything in a send buffer, and false would allow it?
>
> Not quite. It's more like auto-flush false means "I'm batching
> messages; don't bother sending if you don't have to." I don't think
> the wording for auto-flush true should be "never keep anything in a
> buffer", because of things like mux or other server heuristics. It's
> more like "start the process of sending."
>
> setBatching(true) might be a better name, if that's clearer.
>
> When setBatching(false) [autoFlush=true] is in effect -- the default --
> and an app calls sendString(), the message will be delivered (with
> possible buffering, delays, mux, optimizations, etc., depending on the
> implementation), but it will be delivered without further intervention
> from the app.
>
> When setBatching(true) [autoFlush=false] is in effect and an app calls
> sendString(), the message might sit in the buffer indefinitely, until
> the application calls flush().
>
> sendPartialString would be unaffected by the flag; the WS
> implementation is free to do whatever it wants with partial messages.
>
> Basically, it's a hint: setBatching(true) [autoFlush=false] means "I'm
> batching a bunch of messages, so don't bother sending the data if you
> don't need to until I call flush."
>
> Does that make sense? I don't want to over-constrain implementations
> with the autoFlush(true) option either. Maybe "batching" is the better
> name, to avoid confusion. (But even batching=true doesn't require
> buffering. Implementations can still send fragments early if they want
> to, or even ignore batching=true entirely.)
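>
> To make that concrete, roughly the usage I have in mind (a sketch only;
> names and signatures aren't final, and "updates" is just some collection
> of messages the app wants to push out as a batch):
>
>     // Illustrative sketch - draft names, not final API.
>     remote.setBatching(true);       // autoFlush=false: I'm batching
>     for (String msg : updates) {
>         remote.sendString(msg);     // may sit in the send buffer...
>     }
>     remote.flush();                 // ...until I explicitly flush
>     remote.setBatching(false);      // back to the default behavior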
>>
>> It seems like a reasonable request - do you think the autoflush
>> property is a per-peer, per-logical-endpoint, or per-container
>> setting? I'm wondering whether developers will typically want to set
>> this once per application rather than keep setting it per RemoteEndpoint.
>
> I think it's on the RemoteEndpoint, like setAutoCommit for JDBC. It's
> easy to set in @WebSocketOpen, and the application might want to start
> and stop batching mode while processing.
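>
> Something like this, for example (again just a sketch, using the draft
> annotation and accessor names from the current discussion, nothing final):
>
>     // Sketch only - draft names, not final API.
>     @WebSocketOpen
>     public void onOpen(Session session) {
>         session.getRemote().setBatching(true);  // batch for this peer
>     }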
>
> -- Scott
>
>>
>> - Danny
>>
>> On 11/28/12 3:28 PM, Scott Ferguson wrote:
>>>
>>> I'd like a setAutoFlush() and flush() on RemoteEndpoint for
>>> high-performance messaging. Auto-flush would default to true, which is
>>> the current behavior.
>>>
>>> The performance difference is on the order of 5-7 times as many
>>> messages in some early micro-benchmarks. It's a big improvement and
>>> puts us near high-speed messaging systems like ZeroMQ.
>>
>>
>> --
>> Danny Coward <http://www.oracle.com>
>> Java EE
>> Oracle Corporation
>>
>


-- 
Danny Coward <http://www.oracle.com>
Java EE
Oracle Corporation