Hi Bo,
>>>>> 1. Some apparent leaks in Grizzly (more details below)
>>>>>
>>>>> 2. The encoder fix addresses the memory leak in the
>>>>> TrimAwareBuffer and allows it to be disposed. However, this is
>>>>> not very useful to us. We use a custom thread for writing the
>>>>> data, and that thread uses the encoder. The custom thread never
>>>>> uses the cache, so each request ends up creating a new buffer.
>>>>> Fixing the encoder therefore doesn't buy us much.
>>>> Can your custom thread extend DefaultWorkerThread? If not, we
>>>> will probably need to think about some API changes in the Grizzly
>>>> worker threads.
>>>
>>> That's not possible, as our SDK could be used by any application.
>>> End users control the threads they use to write LDAP requests,
>>> and it would be nasty to expose Grizzly at that level.
>>> Should I use the SlabMemoryAllocator? It works with non-Grizzly
>>> threads.
>> I haven't tested it in ages; at some point we will need to
>> revisit it.
>>
>>> Would it be better to just use AttributeStorage to cache the
>>> ThreadLocalPool and remove the dependency on extending
>>> DefaultWorkerThread?
>> Well, that's just an implementation detail. The real question is
>> whether we really want any thread to be able to cache buffers. Not
>> sure... IMO it should not be enabled by default, but we could add
>> a flag to the DefaultMemoryManager like supportCustomThreadCache()
>> or something like that.
> I found another way around it by using ThreadCaches to store small
> encoding buffers. Seems to work well.
Agreed, for the write phase that should work well.
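For reference, the per-thread caching pattern would look roughly like
this (a minimal sketch; the index constant, helper methods and sizes
are illustrative rather than your actual SDK code, and it assumes the
generic-cache path in ThreadCache is fixed as noted below):

    // Cache one small encoding buffer per thread and reuse it instead
    // of allocating a fresh one for every request.
    private static final ThreadCache.CachedTypeIndex<byte[]> ENCODE_BUFFER_IDX =
            ThreadCache.obtainIndex(byte[].class, 1);

    byte[] obtainEncodeBuffer(final int size) {
        final byte[] cached = ThreadCache.takeFromCache(ENCODE_BUFFER_IDX);
        if (cached != null && cached.length >= size) {
            return cached;         // reuse the thread-local buffer
        }
        return new byte[size];     // fall back to a fresh allocation
    }

    void releaseEncodeBuffer(final byte[] buffer) {
        ThreadCache.putToCache(ENCODE_BUFFER_IDX, buffer);
    }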
>>> I could also just dispose of it in the transformer when it's all
>>> used up. However, this poses a problem for partial reads. If every
>>> PDU of the protocol straddles two reads, dispose will never be
>>> called even though some of the bytes have already been read.
>>> How should this case be handled?
>> Can you pls. elaborate?
>
> Well, imagine each PDU is 700 bytes. However, when sending the PDUs,
> the server breaks them up into 400-byte chunks. On the first
> invocation of the decoder transformer, an IncompletedResult will be
> returned with the 400 read bytes as the remainder. On the next read
> of 400 bytes, this buffer is appended to the previous remainder
> buffer (as a ByteBuffersBuffer). On the second invocation of the
> decoder transformer, a CompletedResult will be returned with 100
> bytes left in the buffer as the remainder. Since the PDU size is not
> an exact multiple of the chunk size, there is always a remainder and
> the buffer will never be disposed. This causes the ByteBuffersBuffer
> to grow without bound, as each additional read chunk is appended
> without the already-decoded bytes being removed. This will
> eventually cause an OOME when reading a large stream of data.
>
> Is this assessment correct? ActiveDirectory does something like this
> and I saw the buffer grow in the debugger. How should we handle this
> case?
When you return a remainder from your Decoder, you may explicitly
call buffer.disposeUnused() (I have just renamed it, so you will need
to update from svn) to release the Buffer's unused space.
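In the decoder it would look roughly like this (a sketch only: the
message type and decodePdu() are hypothetical placeholders, and the
createCompletedResult usage mirrors the StringEncoder snippet later
in this thread):

    // After decoding one complete PDU, keep any unread bytes as the
    // remainder and release the space that has already been consumed.
    final LdapMessage message = decodePdu(input);      // hypothetical parsing
    final Buffer remainder = input.hasRemaining() ? input : null;
    if (remainder != null) {
        remainder.disposeUnused(); // free the already-decoded region
    }
    return TransformationResult.<Buffer, LdapMessage>createCompletedResult(
            message, remainder, false);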
Hope this will help.
WBR,
Alexey.
>
> Thanks again
> Bo
>
>>
>> Thanks.
>>
>> WBR,
>> Alexey.
>>
>>>
>>> Thanks
>>> Bo
>>>
>>>>
>>>>
>>>>> Some apparent leaks
>>>>> ---------------------------
>>>>>
>>>>> a. TransportFactory.java
>>>>>
>>>>> /**
>>>>>  * Initialize default factory settings.
>>>>>  */
>>>>> public void initialize() {
>>>>>     defaultAttributeBuilder = Grizzly.DEFAULT_ATTRIBUTE_BUILDER;
>>>>>     defaultMemoryManager = new DefaultMemoryManager();
>>>>>     ((DefaultMemoryManager) defaultMemoryManager).setMonitoring(true);
>>>>>     // defaultWorkerThreadPool = GrizzlyExecutorService.createInstance();
>>>>>     // ---> Nobody uses this, so why create it? It creates 5
>>>>>     // threads which don't seem to be doing anything.
>>>>> }
>>>> I was thinking about a way for different transports to share the
>>>> same thread pool. I'll update this. Thanks!
>>>>
>>>>
>>>>> b. You have a bug in ThreadCache.java:
>>>>>
>>>>>
>>>>> public static <E> E takeFromCache(CachedTypeIndex<E> index) {
>>>>>     final Thread currentThread = Thread.currentThread();
>>>>>     if (currentThread instanceof DefaultWorkerThread) {
>>>>>         return ((DefaultWorkerThread) currentThread).takeFromCache(index);
>>>>>     } else {
>>>>>         final ObjectCache genericCache = genericCacheAttr.get();
>>>>>         if (genericCache != null) {
>>>>>             genericCache.get(index); // should return here.
>>>>>         }
>>>>>
>>>>>         return null;
>>>>>     }
>>>>> }
>>>>>
>>>>> If that's not true, then there is no point in keeping the
>>>>> disposed TrimAwareBuffer in the object cache (for custom
>>>>> threads) when you don't reuse it.
>>>> It's a typo, I think - will fix it. Thanks!
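For reference, the corrected else-branch would simply return the
cached value, per the "should return here" comment above:

    final ObjectCache genericCache = genericCacheAttr.get();
    if (genericCache != null) {
        return genericCache.get(index); // return the cached object
    }
    return null;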
>>>>
>>>>
>>>>> c. Perhaps clearing the buffer during dispose may help
>>>>> GC:
>>>>>
>>>>> ByteBufferManager.java
>>>>>
>>>>> public void releaseByteBuffer(ByteBuffer byteBuffer) {
>>>>>     byteBuffer.clear();
>>>>> }
>>>> Not sure. clear() just resets the position/limit of the
>>>> ByteBuffer; it doesn't release or null out the underlying memory.
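A quick illustration of why clear() alone wouldn't help the GC (plain
java.nio behavior):

    // clear() only resets bookkeeping; the backing array stays
    // referenced and its contents are untouched.
    ByteBuffer bb = ByteBuffer.allocate(1024);
    bb.put((byte) 42);
    bb.clear();                 // position = 0, limit = capacity
    assert bb.get(0) == 42;     // data (and backing memory) still there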
>>>>
>>>>
>>>>> d. I just happened to see that you did some optimization in the
>>>>> DefaultMemoryManager, so I may come back to that later.
>>>> Sure, I'd appreciate that!
>>>>
>>>> BTW, you can send these comments to the dev_at_grizzly mailing
>>>> list. It will give Grizzly 2.0 some visibility.
>>>>
>>>> Thank you!
>>>>
>>>> WBR,
>>>> Alexey.
>>>>
>>>>>
>>>>>
>>>>> - Kunal
>>>>>
>>>>> On 3/9/2010 10:57 AM, Oleksiy Stashok wrote:
>>>>>>
>>>>>>>>> What is the difference between calling allowBufferDispose
>>>>>>>>> and dispose?
>>>>>>>> dispose() blindly disposes the buffer.
>>>>>>>> tryDispose() disposes the buffer only if it was marked with
>>>>>>>> allowBufferDispose(true).
>>>>>>>>
>>>>>>>> So allowBufferDispose is used for a non-forced buffer dispose.
>>>>>>>>
>>>>>>> So if I call allowBufferDispose, who will actually dispose of
>>>>>>> the buffer in this case?
>>>>>> Whoever calls buffer.tryDispose().
>>>>>> I've defined one place in Grizzly which calls it - the writer,
>>>>>> when it completes writing some buffer to a channel.
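To make the contract concrete (a rough sketch of the usage pattern,
not actual Grizzly internals):

    // Non-forced dispose: the producer marks the buffer as disposable,
    // and the final consumer releases it when it is done with it.
    final Buffer output = memoryManager.allocate(1024);
    // ... fill output with encoded data ...
    output.allowBufferDispose(true); // producer: "you may release this"

    // later, in the component that finishes with the buffer
    // (e.g. the writer, after flushing it to the channel):
    output.tryDispose();             // disposes only if allowed; no-op otherwise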
>>>>>>
>>>>>>
>>>>>>> Looking at the String Encoder/Decoder classes confuses me even
>>>>>>> more now. :-[
>>>>>>>
>>>>>>> StringEncoder: you allocate a buffer, encode the message
>>>>>>> (String), and then call allowBufferDispose on it *before*
>>>>>>> passing the buffer in the completed result:
>>>>>>> final Buffer output =
>>>>>>>         obtainMemoryManager(storage).allocate(byteRepresentation.length + 2);
>>>>>>>
>>>>>>> // write data to output...
>>>>>>>
>>>>>>> output.allowBufferDispose(true);
>>>>>>>
>>>>>>> final TransformationResult<String, Buffer> result =
>>>>>>>         TransformationResult.<String, Buffer>createCompletedResult(
>>>>>>>                 output, null, false);
>>>>>>>
>>>>>>> StringDecoder: you are given a buffer from which you decode
>>>>>>> the message (String), but you never dispose of the buffer. I
>>>>>>> thought you said that the passed-in Buffer should be disposed,
>>>>>>> or marked for disposing, if the decoder empties the buffer?
>>>>>>> I am clearly totally confused! :-)
>>>>>> I see. Let me explain this...
>>>>>> I think the entire confusion comes from the fact that in
>>>>>> Grizzly (for now) it's possible to use Transformers outside the
>>>>>> Filters, like:
>>>>>>
>>>>>> public static void main(...) {
>>>>>>     Buffer encodedString = memoryManager.allocate(...);
>>>>>>
>>>>>>     // ... fill encodedString with the data ...
>>>>>>
>>>>>>     StringDecoder decoder = new StringDecoder();
>>>>>>     TransformationResult result = decoder.transform(state, encodedString);
>>>>>>     // ... check the result, do something with it ...
>>>>>>
>>>>>>     // ... reuse the encodedString buffer ...
>>>>>> }
>>>>>>
>>>>>> As you see, if I use the decoder outside the filter, I can
>>>>>> provide some buffer and then decide to reuse it; if the
>>>>>> Transformer were to dispose the buffer, that would be an
>>>>>> unexpected situation for me.
>>>>>>
>>>>>> But I'll probably be able to find a solution at the
>>>>>> AbstractCodecFilter level, so that the Transformer will not
>>>>>> dispose a buffer, but the AbstractCodecFilter will.
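Something along these lines, perhaps (hypothetical filter code, not
the real AbstractCodecFilter; the Status check assumes the usual
TransformationResult conventions):

    // The filter, not the Transformer, releases the input buffer once
    // decoding has fully consumed it; standalone users of the
    // Transformer keep ownership of their own buffers.
    final TransformationResult<Buffer, String> result =
            decoder.transform(state, input);
    if (result.getStatus() == TransformationResult.Status.COMPLETED
            && !input.hasRemaining()) {
        input.tryDispose(); // only disposes if the owner allowed it
    }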
>>>>>>
>>>>>> If you have better ideas - please let me know.
>>>>>>
>>>>>> WBR,
>>>>>> Alexey.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Matt
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>