On 15 September 2011 12:15, Shing Wai Chan <shing.wai.chan_at_oracle.com> wrote:
> Besides having a channel,
I think the intent is to avoid having a channel exposed. I think the
intent was also to avoid the Buffer API, but it is fit for purpose.
> the main difference with below is in
> ByteBufferReference.
> How do we know what ByteBuffer to allocate in this case?
The whole point is that you don't need the IO layer to know or care
about how the buffers are allocated, or even whether they are partially
full of unprocessed data. The getBuffer() call might return a
reference to a previously allocated buffer, an idle buffer taken from
a pool, or a freshly allocated one. The key point is that the IO
layer should not hold a reference to a buffer while it is waiting to
be able to do the IO.
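To make that concrete, here is a minimal sketch of what such a reference could look like. The interface and the pooled implementation below are mine, not from any proposal; the only thing taken from the discussion is the getBuffer() method:

import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical interface: the IO layer holds one of these per connection
// instead of a ByteBuffer, and calls getBuffer() only once it knows the
// read or write can actually proceed.
interface ByteBufferReference {
    ByteBuffer getBuffer();
}

// One possible implementation backed by a shared pool: an idle connection
// costs a small object, not an 8k buffer.
class PooledBufferReference implements ByteBufferReference {
    private static final ConcurrentLinkedQueue<ByteBuffer> POOL =
            new ConcurrentLinkedQueue<>();

    @Override
    public ByteBuffer getBuffer() {
        // Reuse an idle buffer from the pool, or allocate a fresh one.
        ByteBuffer buffer = POOL.poll();
        return buffer != null ? buffer : ByteBuffer.allocate(8192);
    }

    static void release(ByteBuffer buffer) {
        buffer.clear();
        POOL.offer(buffer);
    }
}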
Imagine an HTTP server with 20,000 connections, all mostly idle. With
NIO.2, you would issue 20,000 reads, each passed a buffer and a
completion handler. You are unlikely to have a buffer smaller than 8k,
so you are looking at roughly 156MB of buffers (20,000 x 8k) just for
the idle state, which is harmful for scalability (especially if they
are kernel buffers).
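For illustration, this is roughly what the plain NIO.2 style looks like per connection: the buffer has to exist before the read is issued, even if no data arrives for minutes. The class and method names here are just placeholders for whatever the container does:

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

class Nio2StyleRead {
    // Each idle connection ties up a buffer from the moment the read is issued.
    static void readFrom(AsynchronousSocketChannel channel) {
        ByteBuffer buffer = ByteBuffer.allocate(8192); // allocated up front, per connection

        channel.read(buffer, buffer, new CompletionHandler<Integer, ByteBuffer>() {
            @Override
            public void completed(Integer bytesRead, ByteBuffer attachment) {
                // The buffer was held for the whole (possibly long) wait
                // before any data was available to fill it.
            }

            @Override
            public void failed(Throwable exc, ByteBuffer attachment) {
                // Handle the error / close the connection.
            }
        });
    }
}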
With the API style proposed by Rajiv/Remy, this preallocation of
buffers is not an issue because the API does not offer actual reads
and writes, but instead only gives notifications of the ability to do
reads and writes, and it is the called-back code that has to do the
actual IO operations. With the NIO.2 style the callbacks happen after
the IO operation completes, which is a significant simplification (as
you delegate the IO operations to the infrastructure), but the cost
currently is the preallocation of buffers. This cost might be
acceptable, but it is also simply avoidable by having the
ByteBufferReference, where getBuffer() is called by the infrastructure
only when it knows that it is able to proceed with a read or a write.
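A hedged sketch of how the completion-style API could use such a reference; the read signature below is hypothetical, not an existing NIO.2 method. The infrastructure registers interest, waits, and only calls getBuffer() once the socket is actually readable, so idle connections hold no buffer:

import java.nio.ByteBuffer;
import java.nio.channels.CompletionHandler;

// ByteBufferReference as sketched earlier in this thread.
interface ByteBufferReference {
    ByteBuffer getBuffer();
}

// Hypothetical channel API: the read takes a reference rather than a buffer,
// so nothing is allocated or pinned while the connection sits idle.
interface DeferredBufferChannel {
    <A> void read(ByteBufferReference reference,
                  A attachment,
                  CompletionHandler<Integer, ? super A> handler);
}

// What the infrastructure would do internally:
// 1. register read interest for the connection (no buffer held yet);
// 2. when the OS reports the socket readable, call reference.getBuffer()
//    to obtain a buffer just in time;
// 3. perform the read into that buffer;
// 4. invoke handler.completed(bytesRead, attachment), as NIO.2 does today.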
cheers