Hi,
Oleksiy Stashok wrote:
> Hi,
>
> I know the answer to just half of the question; I'll let Jean-Francois
> reply to the second one :)
>>
>
>> After looking at the code, I found that received data is read into a
>> WorkerThread-related ByteBuffer, and the buffer is cleared before the
>> next read. In other words, users who use Grizzly to develop network
>> programs need to manage the received data themselves.
> That's not completely true.
> In Grizzly we don't associate a ByteBuffer with a channel BY DEFAULT,
> which lets Grizzly scale better under bigger loads (it consumes less
> memory). But when it's required - for example, during message parsing -
> we do associate ByteBuffers with concrete channels, to be able to store
> half of a message and reuse it next time, when the other half arrives.
> But again, there is no need to perform that manipulation by hand -
> ParserProtocolFilter does it itself.
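The pattern described above - retaining the unconsumed tail of a read per channel so a half-received message can be completed on the next read - can be sketched roughly as below. This is an illustrative standalone sketch of the technique, not Grizzly's actual ParserProtocolFilter API; the class and method names are invented for the example.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a line-based parser that keeps the incomplete
// trailing fragment of each read and prepends it to the next read,
// the way a per-channel parser filter would.
public class PartialMessageParser {
    private ByteBuffer leftover = ByteBuffer.allocate(0);

    // Feed newly read bytes; returns the complete '\n'-terminated
    // messages, retaining any incomplete trailing fragment internally.
    public List<String> feed(ByteBuffer incoming) {
        ByteBuffer combined =
            ByteBuffer.allocate(leftover.remaining() + incoming.remaining());
        combined.put(leftover).put(incoming).flip();

        List<String> messages = new ArrayList<>();
        int start = combined.position();
        for (int i = start; i < combined.limit(); i++) {
            if (combined.get(i) == '\n') {
                byte[] msg = new byte[i - start];
                combined.position(start);
                combined.get(msg);
                combined.get(); // skip the '\n'
                start = combined.position();
                messages.add(new String(msg, StandardCharsets.UTF_8));
            }
        }
        leftover = combined.slice(); // keep the incomplete tail
        return messages;
    }

    public static void main(String[] args) {
        PartialMessageParser p = new PartialMessageParser();
        // First read ends mid-message; the fragment "wor" is retained.
        List<String> first =
            p.feed(ByteBuffer.wrap("hello\nwor".getBytes(StandardCharsets.UTF_8)));
        // Second read completes the message.
        List<String> second =
            p.feed(ByteBuffer.wrap("ld\n".getBytes(StandardCharsets.UTF_8)));
        System.out.println(first);  // [hello]
        System.out.println(second); // [world]
    }
}
```

In a real server you would keep one such parser instance per channel (e.g. as a channel attachment), which is exactly the per-channel state the filter manages for you.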
>
>
>> My question is whether Grizzly provides a feature for managing a
>> channel-specific buffer like the one MINA provides.
>>
>> The paper 20070712_Grizzly_Architecture.pdf
>> <http://weblogs.java.net/blog/jfarcand/archive/20070712_Grizzly_Architecture.pdf>
>> showed that Grizzly's performance was better than MINA's.
>> Do you guys know which version of each framework the benchmark
>> used, and what accounted for the advantage?
> I'll let Jean-Francois answer this.
The benchmark we did a long time ago was to run AsyncWeb on both Grizzly
and MINA and measure the throughput and scalability. Now, the MINA team
has made a lot of changes since that time, and Julien (who is on this
list) showed results where it seems MINA performs better than Grizzly,
at least for an echo test. I haven't had a chance to look at what the
benchmark is doing, so it could be a benchmark problem, or the MINA
folks may have improved performance, which I suspect is true :-) :-)
I still think Grizzly uses a couple of tricks that make it faster than
MINA. But I need to work on a benchmark to demonstrate that, which I
will do eventually this summer.
Cheers,
-- jeanfrancois
>
> Thanks.
>
> WBR,
> Alexey.
>
>>
>>
>> Thank you,
>> -Oscar
>>
>