This is my understanding of how DefaultMemoryManager works: the first time a
worker thread calls TCPNIOStreamReader.read0, the thread's
DefaultMemoryManager.BufferInfo is null, so the allocation falls back to the
shared pool. When that buffer is later trimmed after the read
(DefaultMemoryManager.TrimAwareWrapper.trim), it is stored in the worker
thread's BufferInfo as its thread-local pool. The space remaining after the
trim is then available for further allocations from that worker thread
without falling back to the shared pool again.
However, any further read will try to allocate a read buffer of the same size
as the last allocation, which will always be larger than the space remaining
after the trim. So in essence, after the first allocation DefaultMemoryManager
will allocate every subsequent read buffer out of the shared pool. Is this the
correct behavior?
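
To make the flow I'm describing concrete, here is a minimal self-contained
sketch. It is my own simplification, not the real Grizzly classes: the names
ThreadLocalPoolSketch and allocateFromSharedPool are made up and only stand in
for DefaultMemoryManager / ByteBufferViewManager and the BufferInfo +
TrimAwareWrapper machinery.

    import java.nio.ByteBuffer;

    // Sketch of the behavior described above; NOT the real Grizzly classes.
    public class ThreadLocalPoolSketch {

        // Stand-in for the shared pool (ByteBufferViewManager in Grizzly).
        static ByteBuffer allocateFromSharedPool(int size) {
            return ByteBuffer.allocate(size);
        }

        // Per-thread remainder left after the last trim
        // (null before the thread's first read, like BufferInfo being null).
        static final ThreadLocal<ByteBuffer> localPool = new ThreadLocal<>();

        // Allocate 'size' bytes, preferring the thread-local remainder.
        static ByteBuffer allocate(int size) {
            ByteBuffer pool = localPool.get();
            if (pool == null || pool.remaining() < size) {
                // First allocation on this thread, or the remainder is too
                // small: fall back to the shared pool (the case above).
                return allocateFromSharedPool(size);
            }
            // Carve the request out of the thread-local remainder.
            ByteBuffer slice = pool.slice();
            slice.limit(size);
            pool.position(pool.position() + size);
            return slice;
        }

        // After a read that filled 'bytesRead' bytes, keep the unused tail
        // as the thread-local pool (roughly what trim() does).
        static void trim(ByteBuffer buffer, int bytesRead) {
            buffer.position(bytesRead);
            localPool.set(buffer.slice());  // unused tail becomes the local pool
            buffer.flip();                  // caller keeps only [0, bytesRead)
        }

        public static void main(String[] args) {
            final int READ_SIZE = 8192;

            ByteBuffer first = allocate(READ_SIZE);  // local pool null -> shared pool
            trim(first, 100);                        // small read; ~8092 bytes stay local

            // The next read asks for READ_SIZE again; the local remainder
            // (8092) is smaller, so this allocation hits the shared pool too.
            ByteBuffer second = allocate(READ_SIZE);
            System.out.println("local remainder: " + localPool.get().remaining());
            System.out.println("second capacity: " + second.capacity());
        }
    }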
For our case, it seems it would be more efficient to make the thread-local
pool buffer at least twice the size of the read buffer. That would allow
multiple read buffers to be allocated out of the thread-local pool when
reading small request messages.
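
Again purely for illustration, continuing the made-up ThreadLocalPoolSketch
above (not any real Grizzly configuration knob), the 2x sizing would look
like this:

    // Hypothetical addition to ThreadLocalPoolSketch: pre-size the per-thread
    // backing buffer at 2 * READ_SIZE so two read-sized allocations fit
    // before any shared-pool fallback. The 2x figure is just my proposal.
    static void demoDoubleSizedPool() {
        final int READ_SIZE = 8192;
        localPool.set(ByteBuffer.allocate(2 * READ_SIZE));  // proposed sizing
        allocate(READ_SIZE);   // first read buffer, carved locally
        allocate(READ_SIZE);   // second still fits; no shared-pool fallback
        System.out.println("local space left: " + localPool.get().remaining());  // prints 0
    }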
Thanks
Bo
On 5/13/09 4:23 PM, "Oleksiy Stashok" <Oleksiy.Stashok_at_Sun.COM> wrote:
>> Another question: Is it correct that the DefaultMemoryManager's thread local
>> pool buffer size is always the first allocate request size? Thus, in the
>> case of read buffers, it can only allocate one non-empty read buffer at a
>> time before falling back to the ByteBufferViewManager (shared pool) right?
> Can you pls. provide more details here?