Hello,
I'm a newcomer to Grizzly and am currently experimenting with message
processing. I am using the GIOP example as a basis for this
(http://grizzly.java.net/nonav/docs/docbkx2.0/html/coreframework-samples.html).
My use case is similar to the GIOP one, in that messages have a header
and a variable-length body.
When pushing large, bursty volumes of data to the application over
TCP, I began to notice that messages towards the end of a burst would
not be processed *until* a trickle of further messages arrived. It
appeared that
they were sitting in some buffer within Grizzly (I had confirmed via
tcpdump that they had actually been sent). Setting TCP_NODELAY on
client and server made no difference.
Debugging this further, I found that the buffer being passed into
handleRead(...) was large enough to contain multiple messages. In the
sample GIOP code, the buffer is split after the first message, that
message is processed immediately, and the remainder is returned for
future processing. I had incorrectly assumed that the remainder would
be pushed straight back into the FilterChainContext for processing
again. I later realised that this is not the case (for good reason -
we'd end up in an infinite loop otherwise), and that Grizzly instead
waits for more data before calling handleRead(...) again.
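If I've read the sample correctly, the tail of its handleRead(...)
boils down to the following (condensed from memory; parseMessage(...)
is just my shorthand for the sample's field-by-field parsing, it isn't
a real method in the sample):

    // If the buffer holds more than one complete message, split off
    // everything after the first one
    final Buffer remainder = sourceBufferLength > completeMessageLength
            ? sourceBuffer.split(completeMessageLength) : null;

    // Parse and pass on only the first message; the remainder is stored
    // by Grizzly until the next read event
    ctx.setMessage(parseMessage(sourceBuffer));
    return ctx.getInvokeAction(remainder);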
Anyway, the result of this is that if handleRead(...) in the example
is called with a buffer containing more than one message, only the
first will be processed until more data arrives (which may never
happen). I fixed this by changing handleRead(...) so that it loops
over the buffer, processing every complete message immediately and
leaving at most a partial trailing message for the future.
Am I barking up the wrong tree, or is this a flaw in the example?
I've attached a modified handleRead function for the GIOPMessage
example (untested).
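In outline, the change is along these lines (a sketch of the idea
only; it reuses the sample's HEADER_SIZE constant, uses the same
parseMessage(...) shorthand as above, and assumes the next filter is
updated to accept a java.util.List<GIOPMessage> rather than a single
message):

    @Override
    public NextAction handleRead(final FilterChainContext ctx) throws IOException {
        Buffer sourceBuffer = ctx.getMessage();
        final List<GIOPMessage> messages = new ArrayList<GIOPMessage>();

        while (sourceBuffer != null) {
            final int available = sourceBuffer.remaining();

            // Not even a complete header yet - wait for more data
            if (available < HEADER_SIZE) {
                break;
            }

            // Body length is the last int of the header
            final int bodyLength =
                    sourceBuffer.getInt(sourceBuffer.position() + HEADER_SIZE - 4);
            final int completeMessageLength = HEADER_SIZE + bodyLength;

            // Header present but body incomplete - wait for more data
            if (available < completeMessageLength) {
                break;
            }

            // Split off everything after this message, parse it, and loop
            final Buffer remainder = available > completeMessageLength
                    ? sourceBuffer.split(completeMessageLength) : null;
            messages.add(parseMessage(sourceBuffer));
            sourceBuffer = remainder;
        }

        if (messages.isEmpty()) {
            // Nothing complete yet - store the partial message for next time
            return ctx.getStopAction(sourceBuffer);
        }

        // Pass every complete message downstream in one go, keeping at most
        // a single partial message for the next read
        ctx.setMessage(messages);
        return ctx.getInvokeAction(sourceBuffer);
    }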
Thanks,
Sam