dev@grizzly.java.net

Re: [Fwd: grizzly versus mina]

From: Robert Greig <robert.j.greig_at_gmail.com>
Date: Sat, 26 May 2007 22:29:01 +0100

> > I suppose the advantage this brings is the reduction in context
> > switching. Although it must depend on where (i.e. which thread) you are
> > interpreting bytes. With Qpid, the socket IO processors just read the
> > data and have no understanding of how many bytes are still to be read
> > to complete an individual command. When bytes are read up to maximum
> > frame size they are just passed to a per-connection queue, from where
> > worker threads pull them for decoding.
>
> The advantage is not only avoiding a context switch but also avoiding
> the enabling & disabling of interest ops.

Is registering a channel with a selector less expensive than changing
the interest ops of an already-registered channel?
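For context, the two operations being compared might look like this in plain Java NIO (a generic sketch, not code from Grizzly or Qpid):

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class InterestOpsSketch {
    /** Register a channel, toggle OP_READ off and on, return final interest ops. */
    static int registerAndToggle() throws IOException {
        try (Selector selector = Selector.open();
             SocketChannel channel = SocketChannel.open()) {
            channel.configureBlocking(false);

            // The potentially expensive path: a fresh registration, which
            // allocates a new SelectionKey inside the selector.
            SelectionKey key = channel.register(selector, SelectionKey.OP_READ);

            // The cheaper path under discussion: flip interest ops on the
            // already-registered key instead of re-registering.
            key.interestOps(key.interestOps() & ~SelectionKey.OP_READ); // disable reads
            key.interestOps(key.interestOps() | SelectionKey.OP_READ);  // re-enable reads
            return key.interestOps();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(registerAndToggle() == SelectionKey.OP_READ); // prints: true
    }
}
```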

> This is where I think we deviate from
> your implementation. I think you give the data to another thread pool
> to scan the bytes, right?

Yes, that is right.

> As we scan the bytes for the beginning and
> ending of (GIOP) messages, we slice the really large ByteBuffer into
> GIOP PDUs (protocol data units). But they are not decoded. Each GIOP
> PDU is then put on a worker thread pool, where it is processed on its
> own worker thread (actually it will further dispatch to another thread
> pool too).

Interesting. We could certainly try this approach too.
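That slicing step could be sketched roughly like this, assuming a simplified 4-byte length-prefixed framing in place of real GIOP headers (which are 12 bytes, with the message length at offset 8):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class PduSlicer {
    // Hypothetical framing: a 4-byte big-endian length prefix, then payload.
    static final int HEADER = 4;

    /** Slice complete PDUs out of buf without copying or decoding them. */
    static List<ByteBuffer> slice(ByteBuffer buf) {
        List<ByteBuffer> pdus = new ArrayList<>();
        while (buf.remaining() >= HEADER) {
            int len = buf.getInt(buf.position());          // peek at the length
            if (buf.remaining() < HEADER + len) break;     // incomplete PDU
            ByteBuffer pdu = buf.slice();                  // zero-copy view
            pdu.limit(HEADER + len);
            pdus.add(pdu);                                 // hand off undecoded
            buf.position(buf.position() + HEADER + len);   // advance past it
        }
        return pdus;                                       // leftovers stay in buf
    }

    public static void main(String[] args) {
        ByteBuffer wire = ByteBuffer.allocate(64);
        wire.putInt(3).put(new byte[] {1, 2, 3});   // one complete PDU
        wire.putInt(10).put(new byte[] {9});        // one partial PDU
        wire.flip();
        List<ByteBuffer> pdus = slice(wire);
        System.out.println(pdus.size() + " complete, "
                + wire.remaining() + " bytes left over");
        // prints: 1 complete, 5 bytes left over
    }
}
```

Each slice is a view onto the original buffer, so the scan thread never copies payload bytes before handing them to the worker pool.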

> So, I think where your implementation deviates from what we do in
> GlassFish IIOP / ORB is where we scan the bytes just read.

Yes.

One thing I measured was how often we failed to get a complete PDU from
a single read. I imagine this is heavily influenced by the network and
the particular test, but I found that in about 90% of cases we received
"complete" PDUs.
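In the minority of cases where a read ends mid-PDU, the usual NIO pattern is to compact() the buffer so the partial bytes sit at the front before the next read; a minimal illustration:

```java
import java.nio.ByteBuffer;

public class PartialRead {
    /** Consume complete PDU bytes, compact the remainder, return its size. */
    static int leftoverAfterCompact() {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.put(new byte[] {1, 2, 3, 4, 5});   // pretend this came off the wire
        buf.flip();
        buf.get(); buf.get(); buf.get();       // first three bytes were complete PDUs
        // Two bytes of a partial PDU remain: compact() moves them to the
        // front and returns the buffer to fill mode, so the next
        // channel.read(buf) appends right after them.
        buf.compact();
        return buf.position();
    }

    public static void main(String[] args) {
        System.out.println(leftoverAfterCompact()); // prints: 2
    }
}
```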

> There are so many nasty little "pot holes" to navigate through to
> make Java NIO perform really well. But, I suppose that's why we're so
> aggressively trying to make Grizzly available to anyone who'd like to
> use Java NIO and realize its potential.

Yes, a framework that has come out of a lot of careful testing is
extremely valuable. I think that with this type of framework, Java is
an excellent choice for writing network applications.

> Hope you don't mind me asking some additional questions on scatter
> writes when the day comes that I start looking at them again?

Sure, I'd be very interested if you can develop scenarios where
gathering writes give a performance win.
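For reference, a gathering write just hands an array of buffers to a single write call; a generic sketch (using a FileChannel so it is self-contained, though SocketChannel exposes the same GatheringByteChannel method):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class GatherWrite {
    /** Write a header buffer and a body buffer in one gathering write. */
    static long gatherWrite(Path target) throws IOException {
        // Header and body live in separate buffers; the channel turns the
        // array into a single writev-style operation instead of two writes.
        ByteBuffer header = ByteBuffer.wrap("HDR:".getBytes(StandardCharsets.US_ASCII));
        ByteBuffer body = ByteBuffer.wrap("payload".getBytes(StandardCharsets.US_ASCII));
        try (FileChannel ch = FileChannel.open(target, StandardOpenOption.WRITE)) {
            return ch.write(new ByteBuffer[] {header, body});
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("gather", ".bin");
        System.out.println(gatherWrite(tmp)); // prints: 11
        Files.delete(tmp);
    }
}
```

The interesting performance question raised above is whether the saved syscall outweighs the cost of managing multiple buffers per message.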

> It would be nice to have a maximum. Not all protocols define one.

Yes, we had the big advantage of defining the protocol as we created
the implementations.

> We tend to use the throughput collector with aggressive opts. More
> specifically, a combination of -XX:+AggressiveOpts and
> -XX:+UseParallelOldGC (note: parallel old GC also enables
> -XX:+UseParallelGC).

I haven't tried AggressiveOpts. I tried Googling it but didn't find
any detailed information - is there any documentation on exactly what
it does on 1.6?
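For reference, the flags quoted above would be passed on the command line like this (the class name here is just a placeholder):

```shell
# -XX:+UseParallelOldGC also enables -XX:+UseParallelGC on Java 6
java -XX:+AggressiveOpts -XX:+UseParallelOldGC MyServer
```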

> If you happen to have any benchmarks you'd feel comfortable sharing or
> contributing, we'd be glad to look at them. If they turn out to be
> good candidates for testing Java SE builds, and you'd be
> ok with us using them, I could probably get them integrated into the
> performance test suite that tests Java SE releases.

I'll certainly try to dig out the queue benchmark.

RG