jsr340-experts@servlet-spec.java.net

[jsr340-experts] Re: Async IO and Upgrade proposal updated

From: Remy Maucherat <rmaucher_at_redhat.com>
Date: Thu, 29 Mar 2012 11:18:30 +0200

On Thu, 2012-03-29 at 11:00 +1100, Greg Wilkins wrote:
> On 29 March 2012 02:11, Remy Maucherat <rmaucher_at_redhat.com> wrote:
> > On Thu, 2012-03-29 at 01:23 +1100, Greg Wilkins wrote:
> >> > I am not fully sure what you are suggesting here. We could use a
> >> > ByteBuffer instead and provide a write(ByteBuffer) if that
> >> > helps. Is that what you mean? But you are still buffering no
> >> > matter what IMHO.
> >>
> >> A ByteBuffer helps a little bit, because it gives the caller a
> >> position index to help remember how much has been written, but I agree
> >> it is not the solution.
> >>
> >>
> >> I see two ways to resolve this:
> >>
> >> a) remove the concept of how many bytes can be written without
> >> blocking from the API. Just have a callback that says more data
> >> can be written, and return the bytes written from every call. This
> >> still requires the application to track the partially written
> >> content, but at least it removes the requirement to have internal
> >> buffering in the implementation.
> >
> > It is probably a good thing, then, that this is not what should be
> > done... Of course, as you probably know, the only purpose of this
> > is to make b) look simpler, I suppose.
>
> I did not put this up as a straw man. This is a perfectly valid way
> to do async IO APIs and I consider it the normal way of doing async...
> at least for the last 20 years that I've been doing such stuff. I
> have no efficiency concerns about such an API, but I am just a little
> hesitant about exposing such an API to application developers. But
> if that is not seen as a problem, then this is a good way to go.
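
For reference, a minimal sketch of what (a) could look like as an
interface; the names here are hypothetical, not from any proposal:

import java.io.IOException;

public interface AsyncOutput {
    // A non-blocking write returning how many bytes were actually
    // accepted; the result may be less than len, possibly zero.
    int write(byte[] b, int off, int len) throws IOException;

    // Fires once more data can be written after a short or
    // zero-length write; the application retries from there.
    void setWriteCallback(Runnable canWriteAgain);
}

The application keeps the unwritten tail itself and retries it from
the callback, which is exactly the tracking burden mentioned above.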

Looking at this new reply makes me doubt your sincerity a bit.

> > I use blocking semantics, but any bytes that cannot be immediately
> > written are buffered by the container, while the flag that indicates
> > that writing can continue is of course flipped.
>
> So if you use blocking semantics, you defeat the purpose of having
> async IO. If you buffer/queue the bytes internally, then you defeat
> the TCP/IP back-pressure flow control mechanisms that are meant to
> stop a producer from creating data faster than it can be transported.

I keep the rest of the buffer, report it as written (so that the
application does not have to deal with it), and flip the canWrite flag
to false (so that the application knows it should stop writing until
told otherwise through a callback). When it is possible to write
without blocking (i.e. when the socket poller says so), the remaining
bytes are written, the canWrite flag is flipped back to true, and the
application gets its notification. There is NO internal backlogging.

I can give an example for better understanding:
while (os.canWriteCount() > 4096) {   // 4KB
    os.write(my4KBbuffer);
    os.flush();
}
// write callback
while (os.canWriteCount() > 4096) {
    os.write(my4KBbuffer);
    os.flush();
}
// write callback
while (os.canWriteCount() > 4096) {
    os.write(my4KBbuffer);
    os.flush();
}
The flush causes the actual write; since the Servlet layer has its own
buffer, I'm using flush() for clarity so that it is known when the
write occurs. When the write happens, a real non-blocking write is
also made on the socket. If the bytes are fully written during that
operation, the os.canWriteCount() value remains at its initial size
and no callback will occur. canWriteCount() does not have to
correspond to a real buffer size anyway; it is simply the maximum
amount of data the application should try to write and that the
container is willing to hold (like NIO 2 does) for writing later.
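
To make the container side concrete, here is a minimal sketch of the
mechanism; the poller registration and the application notification
are stubbed out as comments since those are hypothetical names, not a
real API:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class ContainerOutputSketch {
    private final SocketChannel channel;     // assumed non-blocking mode
    private ByteBuffer leftover;             // bytes the socket did not take
    private volatile boolean canWrite = true;

    ContainerOutputSketch(SocketChannel channel) {
        this.channel = channel;
    }

    // Application-facing write: always reports success; any remainder
    // is copied and kept so the application never sees a short write.
    void write(ByteBuffer data) throws IOException {
        channel.write(data);                 // single non-blocking attempt
        if (data.hasRemaining()) {
            leftover = ByteBuffer.allocate(data.remaining());
            leftover.put(data);
            leftover.flip();
            canWrite = false;                // application must stop writing
            // registerForWriteInterest();   // hypothetical: arm the poller
        }
    }

    // Called by the poller thread when the socket is writable again.
    void onWritePossible() throws IOException {
        channel.write(leftover);
        if (!leftover.hasRemaining()) {
            canWrite = true;
            // notifyWriteCallback();        // hypothetical: app's callback
        }
    }

    boolean canWrite() {
        return canWrite;
    }
}

The copy only happens on the slow path, when the socket refuses bytes;
on the fast path nothing is buffered at all.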

NIO 2 style does the exact same thing: it keeps the user data and
writes it as soon as possible without blocking. The differences (see
the sketch below) are:
- every write generates a callback
- the application has to allocate a buffer on each call, since it has
to hand its buffer over to the container
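
For comparison, a minimal sketch of the NIO 2 style using the standard
java.nio.channels API (JDK 7); the chunking logic around it is left
out:

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

class Nio2WriteSketch {
    // Each write hands a freshly allocated buffer over to the channel;
    // the handler fires once for this write with the byte count.
    static void writeChunk(AsynchronousSocketChannel channel, byte[] chunk) {
        ByteBuffer buf = ByteBuffer.wrap(chunk);
        channel.write(buf, null, new CompletionHandler<Integer, Void>() {
            @Override
            public void completed(Integer bytesWritten, Void attachment) {
                // one callback per write; issue the next write from here
            }
            @Override
            public void failed(Throwable exc, Void attachment) {
                // handle the error
            }
        });
    }
}

The buffer cannot be reused until the handler has fired, which is
where the per-call allocation comes from.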

Apparently, your issue with Rajiv's proposal is that it is possible to
implement it inefficiently, while your proposal is always inefficient.
Of course, if both were equally inefficient, then the NIO 2 API style is
simpler and should be chosen. But that's really not the case.

Lastly, and you may have missed it: it has been clarified that each of
these callbacks should have the full EE environment set up. So in
addition to the trip to the thread pool, they will have a cost, and
being able to minimize their number is good.

> > For whatever reason, you
> > apparently believe the possible cost of copying these bytes somewhere is
> > higher than a thread pool trip + container callback *on every single
> > write*.
>
> But it is the current API that is encouraging round trips to the
> thread pool because it will limit writes to the size of the internal
> implementation buffer, even if the network is able to accept more
> data. The application will only write the number of bytes indicated
> in the canWrite callback and will then have to wait for another
> dispatch cycle for the canWrite callback to say that more bytes can
> be written.
>
> I'm advocating either:
>
> len = write(buffer,0,20000);
> // deal with len < 20000 only when the network can't accept 20k
>
> or
>
> write(buffer,0,20000,handler);
>
>
> while the current API is encouraging:
>
> internalBuf.put(buffer,0,2000);
> internalBuf.flip();
> channel.write(internalBuf);
>
> // can write callback
> internalBuf.put(buffer,2000,2000);
> internalBuf.flip();
> channel.write(internalBuf);
>
> // can write callback
> internalBuf.put(buffer,4000,2000);
> internalBuf.flip();
> channel.write(internalBuf);
>
>
> i.e. it is the one that will have lots of unnecessary dispatches and
> thread pool round trips - unless the internal buffer is very large.
> etc. etc.

Only an inefficient container implementation would cause this behavior
(which is legal).

> > Personally, I also don't see any big benefit in knowing the number
> > of bytes that can be written vs a simple boolean flag,
>
> So if you see no pros of that design, why are you trying to squash a
> discussion about the cons?
>
> I'm not saying the NIO 2 style is a silver bullet. It does have its
> issues, but having used it for a bit, they are not as bad as they
> first appeared (especially on the write side), and it does have
> usability benefits. But if we don't want to use it, then I'm saying
> we should go to a more traditional async IO API with simple
> readability/writability boolean indicators rather than try to invent
> a new style of predictive write sizes. I.e. I do not think that we
> are big enough geniuses to be inventing an entirely new style of
> async API - let's just use one of the known existing styles.

Simple read/write boolean flags are probably enough, and that's what I
have right now. My problem with your argument is that using an int
doesn't really change anything efficiency-wise (the boolean is
actually still there anyway if the application prefers it!); it simply
gives extra information to the application if, as in my
implementation, it wants to avoid leftovers (the extra copy you didn't
like). So what is the problem?
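
To make that concrete, the API surface under discussion amounts to
something like this (hypothetical names, mirroring my example above):

import java.io.IOException;

public interface OutputSketch {
    // The simple flag: is it safe to write right now?
    boolean canWrite();
    // The extra information: how much data the container will accept
    // (and hold, if needed) without blocking. The boolean is implied:
    // canWrite() == (canWriteCount() > 0).
    int canWriteCount();
    void write(byte[] b) throws IOException;
    void flush() throws IOException;
}

An application that only wants the boolean simply ignores the count.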

-- 
Remy Maucherat <rmaucher_at_redhat.com>
Red Hat Inc