On 13/06/07, D. J. Hagberg (Sun) <Dj.Hagberg_at_sun.com> wrote:
> It seems to me that tying up precious WorkerThread resources waiting for
> synchronous writes to complete (where one is not bound by the
> javax.servlet specification) is a bit wasteful and limiting in
> scalability, especially if I have a client on the other end of a slow
> connection, but I don't have any hard numbers to back me up on this.
I work on the Apache Qpid project, which is an implementation of the
AMQP protocol (for message-oriented middleware). We don't use Grizzly
yet, but I have spent a lot of time looking at this area for Qpid.
What is your primary goal? Is it to maximise the number of connections
that can be serviced concurrently, or to maximise the data throughput
to clients? Also, is it important for clients to be able to read data
as they are sending it, or is it more like HTTP, where they send a
request and then receive a response?
With Qpid we were trying to maximise throughput while servicing a
reasonable number of connections (e.g. 2,000 connections rather than
20,000). It was also extremely important that we could write responses
to clients at the same time as we read from them.
Are you in control of the client code as well as the server? One thing
we had to consider was memory usage, i.e. ensuring that the outbound
queues did not grow too large because of badly written or simply slow
clients.
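As a rough sketch of what I mean (this is not our actual code, and the
names are hypothetical), a bounded per-connection outbound queue is one
way to stop a slow or badly written client from making the server
buffer an unbounded amount of data:

import java.nio.ByteBuffer;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical per-connection state: the outbound queue is bounded so a
// slow client cannot make us buffer unlimited data.
class OutboundQueue {
    private final BlockingQueue<ByteBuffer> pending;

    OutboundQueue(int maxPendingBuffers) {
        this.pending = new LinkedBlockingQueue<ByteBuffer>(maxPendingBuffers);
    }

    // Enqueue a message for writing. Returns false if the queue is full,
    // so the caller can apply back-pressure or drop the connection
    // instead of letting memory grow without bound.
    boolean offer(ByteBuffer message, long timeoutMillis) throws InterruptedException {
        return pending.offer(message, timeoutMillis, TimeUnit.MILLISECONDS);
    }

    ByteBuffer peek() { return pending.peek(); }
    ByteBuffer poll() { return pending.poll(); }
    boolean isEmpty() { return pending.isEmpty(); }
}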
> The other possibility would seem to be to *only* register an OP_WRITE
> interest when there are messages in the outbound queue that need to be
> written, triggered whenever a message is added to the outbound queue.
> But in this case, there is an expense involved with waking up the
> selector thread and registering/updating selection keys, etc.
In our design we have several I/O threads, half of which are
responsible for reading and half for writing. Connections are assigned
a thread on a round-robin basis.
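Very roughly (again, hypothetical names rather than our real classes),
the assignment looks something like this, with each connection given a
reader and a writer:

import java.nio.channels.SocketChannel;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: separate pools of reader and writer threads,
// with each new connection handed one of each on a round-robin basis.
interface IoThread {
    void addConnection(SocketChannel channel);
}

class IoThreadAssigner {
    private final List<IoThread> readers;
    private final List<IoThread> writers;
    private final AtomicInteger next = new AtomicInteger();

    IoThreadAssigner(List<IoThread> readers, List<IoThread> writers) {
        this.readers = readers;
        this.writers = writers;
    }

    void assign(SocketChannel channel) {
        int n = next.getAndIncrement();
        readers.get(n % readers.size()).addConnection(channel);
        writers.get(n % writers.size()).addConnection(channel);
    }
}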
There is certainly an expense involved in waking up the selector
thread. However, OP_WRITE interest only needs to be registered when the
kernel send buffer is full, and how often that happens depends partly
on how quickly your clients are processing data.
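For example, with plain java.nio (this is a hedged sketch, not
Grizzly's or Qpid's API), you can attempt the write directly and only
fall back to OP_WRITE when the kernel send buffer will not accept the
rest:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

// Hypothetical sketch of "write first, register OP_WRITE only when the
// kernel send buffer is full". `key` is the connection's key on the
// writer thread's selector.
class WriteHelper {

    // Try to write immediately; only register OP_WRITE if data remains.
    static void send(SocketChannel channel, SelectionKey key, ByteBuffer data)
            throws IOException {
        channel.write(data);               // may write 0..remaining bytes
        if (data.hasRemaining()) {
            // Kernel buffer is full: ask to be told when the socket is
            // writable again, and wake the selector so the change takes
            // effect before its next select() timeout.
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
            key.selector().wakeup();
        }
    }

    // Called from the selector loop when OP_WRITE fires.
    static void onWritable(SocketChannel channel, SelectionKey key, ByteBuffer data)
            throws IOException {
        channel.write(data);
        if (!data.hasRemaining()) {
            // Drained: clear OP_WRITE so the selector does not spin on a
            // socket that is almost always writable.
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
        }
    }
}

The important point is that OP_WRITE is normally off, so the selector
is not constantly firing for sockets that are trivially writable.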
In our first design, each thread was responsible for both reads and
writes, but we wanted to be able to read from and write to a socket at
the same time. Splitting the two significantly reduced the memory usage
of our app, since the build-up in the queues was far lower.
Sorry for providing more questions than answers, but I think this is
such a "delicate" area.
RG