Hi,
Sorry, I'm not following... Let's talk about code, it's easier :-)
What I'm wondering is why the following exist:
1) TCPNIOUtils.java, lines 230-246
(https://github.com/GrizzlyNIO/grizzly-mirror/blob/2.3.x/modules/grizzly/src/main/java/org/glassfish/grizzly/nio/transport/TCPNIOUtils.java#L230)
If a non-direct MemoryManager has been chosen, I'm not sure why that
choice is overridden and a direct buffer is used anyway as an
intermediate step (see the sketch below).
2) DirectByteBufferRecord
(https://github.com/GrizzlyNIO/grizzly-mirror/blob/2.3.x/modules/grizzly/src/main/java/org/glassfish/grizzly/nio/DirectByteBufferRecord.java#L54)
This allocates direct buffers and also caches them per-thread, yet it
isn't a MemoryManager implementation; it's something different.
Is this just old/legacy code?
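To make sure we're talking about the same thing, here's a rough sketch of
the pattern I mean (simplified, not the actual Grizzly code;
allocateFromMemoryManager is just a placeholder for whatever the configured
MemoryManager would do):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class IntermediateDirectReadSketch {

    // Per-thread cached direct buffer, similar in spirit to DirectByteBufferRecord.
    private static final ThreadLocal<ByteBuffer> DIRECT_CACHE =
            ThreadLocal.withInitial(() -> ByteBuffer.allocateDirect(64 * 1024));

    static ByteBuffer read(SocketChannel channel) throws IOException {
        final ByteBuffer direct = DIRECT_CACHE.get();
        direct.clear();

        // 1) Read into the thread-local direct buffer, so the JDK doesn't
        //    have to copy through a temporary direct buffer of its own.
        final int bytesRead = channel.read(direct);
        if (bytesRead <= 0) {
            return null;
        }
        direct.flip();

        // 2) The exact size is now known, so allocate exactly that much from
        //    the (possibly heap-based) MemoryManager and copy once.
        final ByteBuffer target = allocateFromMemoryManager(bytesRead);
        target.put(direct);
        target.flip();
        return target;
    }

    // Placeholder for the configured MemoryManager; the extra hop through a
    // direct buffer before reaching it is what my question is about.
    private static ByteBuffer allocateFromMemoryManager(int size) {
        return ByteBuffer.allocate(size);
    }
}

It's step 2, the second copy into the MemoryManager buffer, that looks
redundant to me when a heap-based MemoryManager has been configured.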
Dan
On Mon, Dec 8, 2014 at 6:03 PM, Oleksiy Stashok
<oleksiy.stashok_at_oracle.com> wrote:
> Hi Daniel,
>
>
> On 08.12.14 09:32, Daniel Feist wrote:
>
>> I see there is a system property I can use to limit the maximum size
>> of these direct buffers and thus avoid the OutOfMemoryErrors, but
>> I'm wondering why the MemoryManager is explicitly being bypassed here
>> rather than simply being used. This also means there are two
>> allocations and reads per request rather than just one. Can anyone
>> shed some light?
>
> Well, if you pass a HeapByteBuffer to a SocketChannel, it'll do the same
> thing underneath: allocate (or take a pooled) direct ByteBuffer and use it
> for reading.
> So we basically do the same thing in our code by passing a direct ByteBuffer
> to the SocketChannel, so the SocketChannel itself will not allocate a direct
> ByteBuffer.
>
> This approach gives us one advantage: once we have read into the direct
> ByteBuffer, we know the exact number of bytes we need to allocate from the
> MemoryManager (no guessing).
>
> Hope this helps.
>
> WBR,
> Alexey.
>
> PS: Please give PooledMemoryManager a shot; it can work with direct and
> heap buffers, and it performed well in our tests.
>
>