Hi,
>
> By the simple Echo example, do you mean
> org.glassfish.grizzly.samples.echo.EchoFilter? If so, I can see why that
> works and why I'm running into this issue:
>
> The EchoFilter uses the StreamWriter.writeStream(StreamReader) method on
> handleRead. This method uses the StreamReader.getBuffer() and
> StreamReader.finishBuffer() methods and disposes of the echoed-back
> buffers itself.
Right.
> In my case, I've implemented an InputStream and OutputStream interface
> (attached) on top of StreamReader and StreamWriter for simplicity. This is
> then used by the ASN.1 decoder and encoder (also attached). My filter code
> reads as many ASN.1 elements (LDAP messages) out of the ASN.1 decoder as
> possible:
>
> // Decode all complete elements from the last read
> while (asn1Reader.elementAvailable())
> {
>     processLDAPMessage(LDAPReader.readMessage(asn1Reader));
> }
ok.
>
>
> Since our clients are sending very small LDAP request messages
> synchronously, there is only one message and it will fit in one read
> buffer. I've checked and verified that the ASN.1 decoder read all bytes
> out of the read buffer.
>
> StreamReader does not dispose of an exhausted current read buffer when the
> read size is exactly the remaining size. It only disposes of it on any
> additional reads after that. Is this right?
Absolutely. Now I got it.
Looks like we need to fix this in StreamReader.
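To make the disposal issue concrete, here is a minimal, self-contained sketch of the behavior described above (the class, field, and method names are illustrative stand-ins, not Grizzly's actual StreamReader code). The buggy path checks for exhaustion only at the start of the *next* read, so a read that consumes exactly the remaining bytes leaves the empty buffer referenced; the proposed fix releases it as soon as it is fully consumed:

```java
import java.nio.ByteBuffer;

public class ExhaustedBufferDemo {

    // Simulates one read that consumes exactly the buffer's remaining bytes,
    // then reports whether the reader is still holding the exhausted buffer.
    // disposeWhenExhausted=false models the current StreamReader behavior;
    // disposeWhenExhausted=true models the proposed fix.
    static boolean holdsEmptyBufferAfterExactRead(boolean disposeWhenExhausted) {
        ByteBuffer current = ByteBuffer.allocate(8); // stands in for the read buffer
        byte[] dst = new byte[8];
        current.get(dst); // read size == remaining size: buffer now exhausted

        if (disposeWhenExhausted && !current.hasRemaining()) {
            current = null; // fix: release the buffer immediately after the read
        }
        // Without the fix, the empty buffer stays referenced until a later read.
        return current != null;
    }

    public static void main(String[] args) {
        System.out.println("current behavior holds empty buffer: "
                + holdsEmptyBufferAfterExactRead(false));
        System.out.println("fixed behavior holds empty buffer: "
                + holdsEmptyBufferAfterExactRead(true));
    }
}
```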
> On second thought, I should think about implementing an LDAP/ASN.1 parser
> that works directly with Grizzly buffers.
It's possible, though StreamReader was specifically designed to avoid
that, at least for most scenarios.
> Another question: is it correct that the DefaultMemoryManager's
> thread-local pool buffer size is always the size of the first allocation
> request? Thus, in the case of read buffers, it can only allocate one
> non-empty read buffer at a time before falling back to the
> ByteBufferViewManager (shared pool), right?
Can you pls. provide more details here?
Thank you!
WBR,
Alexey.
>
>
> Thanks again for all your help
> Bo
>
>
> On 5/13/09 2:27 PM, "Oleksiy Stashok" <Oleksiy.Stashok_at_Sun.COM> wrote:
>
>> Hi Bo,
>>
>> can you pls. share the sources, especially your filter code?
>> Because if the DefaultMemoryManager always allocates buffers from the
>> shared pool - that's bad.
>> The simple Echo example, which I use for perf. tests, doesn't have this
>> issue. But for sure it could be something I missed.
>>
>> Thank you.
>>
>> WBR,
>> Alexey.
>>
>> On May 13, 2009, at 12:42 , Bo Li wrote:
>>
>>> Hello Grizzly Gurus
>>>
>>> After quickly prototyping an OpenDS front-end connection handler to use
>>> Grizzly, I'm encountering some performance issues. All I have is a
>>> default TCPNIOTransport with the TransportFilter and a custom
>>> LDAPFilter. The LDAPFilter uses the readByte and readByteArray methods
>>> of StreamReader to decode the request.
>>>
>>> With 16 clients doing I/O on persistent connections, I see many Grizzly
>>> worker threads blocked on synchronously allocating read buffers from the
>>> ByteBufferViewManager (the stack traces are at the end of this email).
>>> I was surprised, as our LDAP messages are very small and always fit in
>>> one read buffer. I decreased the read buffer size from the
>>> transport-wide default of 64K to 4K and saw a performance improvement.
>>> However, the DefaultMemoryManager was still allocating out of the
>>> ByteBufferViewManager.
>>>
>>> I thought the DefaultMemoryManager should be able to handle 99% of all
>>> allocations out of the thread-local pool if I empty out the read
>>> buffer(s) on every handleRead. However, it seems like the
>>> TCPNIOStreamReader hangs on to the empty buffer
>>> (TCPNIOStreamReader.current) even after handleRead is finished. Thus,
>>> the next time TCPNIOTransportFilter.handleRead is called again for the
>>> next client request, it calls TCPNIOStreamReader.read0, which tries to
>>> allocate a new read buffer... even though the current one is empty.
>>> When the DefaultMemoryManager tries to satisfy the allocation request
>>> out of the thread-local pool, there is not enough remaining from the
>>> previous read buffer allocation, and it falls back to the synchronized
>>> ByteBufferViewManager. In fact, the DefaultMemoryManager will always
>>> fail this allocation because the thread-local pool is only ever as big
>>> as the read buffer size. What's left after the trim from the previous
>>> read is always smaller than the read buffer size.
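The arithmetic behind this fallback can be sketched with a tiny simulation (the numbers and method names here are illustrative assumptions, not Grizzly's actual implementation): if the thread-local pool is only ever as large as one read buffer, and a trim returns less than a full buffer, every subsequent full-size allocation must go to the shared pool:

```java
public class ThreadLocalPoolDemo {
    // Bytes still available in the simulated thread-local pool.
    static int poolRemaining;
    // How many allocations had to fall back to the synchronized shared pool.
    static int sharedPoolAllocations;

    // The pool's capacity is fixed by the first allocation request.
    static void init(int firstRequestSize) {
        poolRemaining = firstRequestSize;
        sharedPoolAllocations = 0;
    }

    // Serve from the local pool if it has enough room; otherwise fall back.
    static void allocate(int size) {
        if (size <= poolRemaining) {
            poolRemaining -= size;
        } else {
            sharedPoolAllocations++; // models the ByteBufferViewManager path
        }
    }

    // A trim returns the unused tail of the previous buffer to the local pool.
    static void trim(int unusedBytes) {
        poolRemaining += unusedBytes;
    }

    public static void main(String[] args) {
        int readBufferSize = 4096;       // illustrative read buffer size
        init(readBufferSize);
        allocate(readBufferSize);        // first read buffer: served locally
        trim(1000);                      // message used 3096 bytes; 1000 returned
        allocate(readBufferSize);        // 1000 < 4096: must hit the shared pool
        System.out.println("shared-pool allocations: " + sharedPoolAllocations);
    }
}
```

Because the trimmed remainder (here 1000 bytes) is always strictly smaller than the full read buffer size, the second and every later full-size request misses the local pool, matching the blocked-thread behavior in the traces below.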
>>>
>>> I guess I could avoid this by handling the read buffer management
>>> myself, using the readBuffer, getBuffer, and finishBuffer methods in
>>> StreamReader... although that is not ideal. Anyway, is this behavior by
>>> design, or just a "feature" of a milestone build?
>>>
>>> Thanks again
>>> Bo
>>>
>>> "Grizzly-WorkerThread(4)" prio=3 tid=0x0000000004c80000 nid=0x52 waiting for
>>> monitor entry [0xfffffd77bf2c3000..0xfffffd77bf2c3b20]
>>>    java.lang.Thread.State: BLOCKED (on object monitor)
>>>     at org.glassfish.grizzly.memory.ByteBufferViewManager.allocate(ByteBufferViewManager.java:98)
>>>     - waiting to lock <0xfffffd786a796300> (a org.glassfish.grizzly.memory.DefaultMemoryManager)
>>>     at org.glassfish.grizzly.memory.DefaultMemoryManager.allocate(DefaultMemoryManager.java:138)
>>>     at org.glassfish.grizzly.memory.DefaultMemoryManager.allocate(DefaultMemoryManager.java:50)
>>>     at org.glassfish.grizzly.streams.AbstractStreamReader.newBuffer(AbstractStreamReader.java:638)
>>>     at org.glassfish.grizzly.nio.transport.TCPNIOStreamReader.read0(TCPNIOStreamReader.java:171)
>>>     at org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleRead(TCPNIOTransportFilter.java:72)
>>>     at org.glassfish.grizzly.filterchain.TransportFilter.handleRead(TransportFilter.java:67)
>>>     at org.glassfish.grizzly.filterchain.DefaultFilterChain$3.execute(DefaultFilterChain.java:88)
>>>     at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChain(DefaultFilterChain.java:238)
>>>     at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:177)
>>>     at org.glassfish.grizzly.filterchain.AbstractFilterChain.process(AbstractFilterChain.java:148)
>>>     at org.glassfish.grizzly.ProcessorRunnable.run(ProcessorRunnable.java:232)
>>>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>>>     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>     at java.lang.Thread.run(Thread.java:619)
>>>
>>> "Grizzly-WorkerThread(3)" prio=3 tid=0x00000000022b0c00 nid=0x51 waiting for
>>> monitor entry [0xfffffd77bf3c4000..0xfffffd77bf3c4aa0]
>>>    java.lang.Thread.State: BLOCKED (on object monitor)
>>>     at org.glassfish.grizzly.memory.ByteBufferViewManager.allocate(ByteBufferViewManager.java:98)
>>>     - waiting to lock <0xfffffd786a796300> (a org.glassfish.grizzly.memory.DefaultMemoryManager)
>>>     at org.glassfish.grizzly.memory.DefaultMemoryManager.allocate(DefaultMemoryManager.java:138)
>>>     at org.glassfish.grizzly.memory.DefaultMemoryManager.allocate(DefaultMemoryManager.java:50)
>>>     at org.glassfish.grizzly.streams.AbstractStreamReader.newBuffer(AbstractStreamReader.java:638)
>>>     at org.glassfish.grizzly.nio.transport.TCPNIOStreamReader.read0(TCPNIOStreamReader.java:171)
>>>     at org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleRead(TCPNIOTransportFilter.java:72)
>>>     at org.glassfish.grizzly.filterchain.TransportFilter.handleRead(TransportFilter.java:67)
>>>     at org.glassfish.grizzly.filterchain.DefaultFilterChain$3.execute(DefaultFilterChain.java:88)
>>>     at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChain(DefaultFilterChain.java:238)
>>>     at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:177)
>>>     at org.glassfish.grizzly.filterchain.AbstractFilterChain.process(AbstractFilterChain.java:148)
>>>     at org.glassfish.grizzly.ProcessorRunnable.run(ProcessorRunnable.java:232)
>>>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>>>     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>     at java.lang.Thread.run(Thread.java:619)
>>>
>>> "Grizzly-WorkerThread(1)" prio=3 tid=0x0000000001ca5800 nid=0x4f runnable
>>> [0xfffffd77bf5c6000..0xfffffd77bf5c69a0]
>>>    java.lang.Thread.State: RUNNABLE
>>>     at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
>>>     at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
>>>     at org.glassfish.grizzly.memory.ByteBufferManager.allocate0(ByteBufferManager.java:81)
>>>     at org.glassfish.grizzly.memory.ByteBufferViewManager.allocate(ByteBufferViewManager.java:99)
>>>     - locked <0xfffffd786a796300> (a org.glassfish.grizzly.memory.DefaultMemoryManager)
>>>     at org.glassfish.grizzly.memory.DefaultMemoryManager.allocate(DefaultMemoryManager.java:138)
>>>     at org.glassfish.grizzly.memory.DefaultMemoryManager.allocate(DefaultMemoryManager.java:50)
>>>     at org.glassfish.grizzly.streams.AbstractStreamReader.newBuffer(AbstractStreamReader.java:638)
>>>     at org.glassfish.grizzly.nio.transport.TCPNIOStreamReader.read0(TCPNIOStreamReader.java:171)
>>>     at org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleRead(TCPNIOTransportFilter.java:72)
>>>     at org.glassfish.grizzly.filterchain.TransportFilter.handleRead(TransportFilter.java:67)
>>>     at org.glassfish.grizzly.filterchain.DefaultFilterChain$3.execute(DefaultFilterChain.java:88)
>>>     at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChain(DefaultFilterChain.java:238)
>>>     at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:177)
>>>     at org.glassfish.grizzly.filterchain.AbstractFilterChain.process(AbstractFilterChain.java:148)
>>>     at org.glassfish.grizzly.ProcessorRunnable.run(ProcessorRunnable.java:232)
>>>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>>>     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>     at java.lang.Thread.run(Thread.java:619)
>>>
>>> "Grizzly-WorkerThread(0)" prio=3 tid=0x0000000001ca4800 nid=0x4e waiting for
>>> monitor entry [0xfffffd77bfbcc000..0xfffffd77bfbcc920]
>>>    java.lang.Thread.State: BLOCKED (on object monitor)
>>>     at org.glassfish.grizzly.memory.ByteBufferViewManager.allocate(ByteBufferViewManager.java:98)
>>>     - waiting to lock <0xfffffd786a796300> (a org.glassfish.grizzly.memory.DefaultMemoryManager)
>>>     at org.glassfish.grizzly.memory.DefaultMemoryManager.allocate(DefaultMemoryManager.java:138)
>>>     at org.glassfish.grizzly.memory.DefaultMemoryManager.allocate(DefaultMemoryManager.java:50)
>>>     at org.glassfish.grizzly.streams.AbstractStreamReader.newBuffer(AbstractStreamReader.java:638)
>>>     at org.glassfish.grizzly.nio.transport.TCPNIOStreamReader.read0(TCPNIOStreamReader.java:171)
>>>     at org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleRead(TCPNIOTransportFilter.java:72)
>>>     at org.glassfish.grizzly.filterchain.TransportFilter.handleRead(TransportFilter.java:67)
>>>     at org.glassfish.grizzly.filterchain.DefaultFilterChain$3.execute(DefaultFilterChain.java:88)
>>>     at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChain(DefaultFilterChain.java:238)
>>>     at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:177)
>>>     at org.glassfish.grizzly.filterchain.AbstractFilterChain.process(AbstractFilterChain.java:148)
>>>     at org.glassfish.grizzly.ProcessorRunnable.run(ProcessorRunnable.java:232)
>>>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>>>     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>     at java.lang.Thread.run(Thread.java:619)
>>>
>>>
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: users-unsubscribe_at_grizzly.dev.java.net
>>> For additional commands, e-mail: users-help_at_grizzly.dev.java.net
>>>
>>
>>
>
> <StreamReaderInputStream.java>
> <StreamWriterOutputStream.java>
> <ASN1InputStreamReader.java>
> <ASN1OutputStreamWriter.java>
> <LDAPReader.java>