Hi,
I'm deliberately using the HeapMemoryManager with Grizzly (for now) to
avoid any potential memory issues from off-heap memory.
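For context, the transport is configured along these lines (a
simplified sketch of my setup; the port is a placeholder):

    import org.glassfish.grizzly.memory.HeapMemoryManager;
    import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
    import org.glassfish.grizzly.nio.transport.TCPNIOTransportBuilder;

    // Force all Grizzly buffer allocations onto the Java heap.
    TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance()
            .setMemoryManager(new HeapMemoryManager())
            .build();
    transport.bind(8080);   // placeholder port
    transport.start();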
That said, when running some performance tests I'm seeing
OutOfMemoryErrors caused by DirectByteBuffer allocations in
org.glassfish.grizzly.nio.transport.TCPNIOUtils.allocateAndReadBuffer.
(The connection reports a read buffer size of 12 MB, so with any
concurrency this quickly becomes a problem.)
I see there's a system property I can use to cap the size of these
direct buffers and thereby avoid the OutOfMemoryErrors, but I'm
wondering why the MemoryManager is explicitly bypassed here rather
than simply being used. It also means every read incurs two
allocations (and an extra copy) instead of one, as sketched below.
Can anyone shed some light?
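For illustration, the single-allocation path I was expecting looks
roughly like this (a sketch against the Grizzly API, not the actual
TCPNIOUtils code; memoryManager, channel, and bufferSize are
placeholders):

    // What I expected: one allocation from the configured
    // MemoryManager and a read straight into it.
    Buffer buffer = memoryManager.allocate(bufferSize);
    int bytesRead = channel.read(buffer.toByteBuffer());

    // What appears to happen instead: a temporary DirectByteBuffer is
    // allocated, the channel reads into that, and the contents are
    // then copied into a second buffer from the MemoryManager.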
thanks,
Dan