For my purpose I want to start with a smaller buffer and grow it only if
needed, but I don't want it to grow without limit: I need to set a MAX in
case the client goes on a rampage and sends junk. One easy patch would be
to use the MAX length from the start, but most of the time that extra
capacity would be useless and would just waste memory.
The case that made me need this is the following. Suppose the start buffer
is 300 and the max is 1000:
#1 - I receive a query of 250: no problem.
#2 - I receive a query of 250, but the client is faster than the server and
sends another query of 250 before the first is consumed, so the buffer has
to grow to 600 (assuming it doubles).
#3 - the client sends garbage in a loop, so the connection must be closed
once the 1000 limit is reached.
The nicest thing would be an API for this, something like
createBuffer(300, 1000, 2), meaning (start, max, increase factor), which
throws an exception when the max is reached. Oh, and the buffer needs to be
stored in the context. What do you think? A rough sketch of what I have in
mind is below.
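Here is a minimal sketch of the kind of helper I mean (my own names;
GrowableBuffer, createBuffer and underlying are all hypothetical, not an
existing Grizzly API):

import java.nio.ByteBuffer;

// Hypothetical helper: starts small, grows by a factor, refuses to pass MAX.
public final class GrowableBuffer {

    private final int max;
    private final int factor;
    private ByteBuffer buffer;

    public static GrowableBuffer createBuffer(int start, int max, int factor) {
        return new GrowableBuffer(start, max, factor);
    }

    private GrowableBuffer(int start, int max, int factor) {
        this.max = max;
        this.factor = factor;
        this.buffer = ByteBuffer.allocate(start);
    }

    // The underlying ByteBuffer, e.g. to store it in the context.
    public ByteBuffer underlying() {
        return buffer;
    }

    // Append data, growing by 'factor' as needed, never past 'max'.
    public void put(byte[] data) {
        if (buffer.remaining() < data.length) {
            int newCapacity = buffer.capacity();
            while (newCapacity - buffer.position() < data.length) {
                newCapacity *= factor;
                if (newCapacity > max) {
                    throw new IllegalStateException(
                            "BUFFER max length (" + max + ") was reached");
                }
            }
            ByteBuffer bigger = ByteBuffer.allocate(newCapacity);
            buffer.flip();        // old buffer: switch to read mode
            bigger.put(buffer);   // copy the bytes already received
            buffer = bigger;
        }
        buffer.put(data);
    }
}

With createBuffer(300, 1000, 2) this gives the scenario above: 300 grows to
600 on the second query, and the garbage loop hits the exception instead of
growing past 1000.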
And for part 2, do you have suggestions?
public void sendToClient(StringBuffer sb) {
    ByteBuffer writeBuffer =
            ByteBuffer.allocateDirect(sb.toString().getBytes().length);
    writeBuffer.put(sb.toString().getBytes());
    writeBuffer.flip();
    ......
}
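One direction I was considering (just a sketch, I am not sure it is the
right trade-off for NIO writes): convert the string only once and wrap the
resulting array instead of allocating a new direct buffer on every call,
since allocateDirect is expensive and the bytes get copied anyway:

import java.nio.ByteBuffer;

public class Client {

    public void sendToClient(StringBuffer sb) {
        // Convert once instead of calling toString().getBytes() twice.
        byte[] data = sb.toString().getBytes();

        // wrap() avoids both the second copy and the per-call direct
        // allocation; the wrapped buffer is already "flipped"
        // (position 0, limit = data.length), so no flip() is needed.
        ByteBuffer writeBuffer = ByteBuffer.wrap(data);
        // ...... same write code as before
    }
}

Whether a plain heap buffer is really better here than a reused or pooled
direct buffer for the actual socket write is exactly what I'd like your
opinion on.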
2008/11/18 Oleksiy Stashok <Oleksiy.Stashok_at_sun.com>
>
>> I think it's more appropriate in this mailing list.
>>> I have two snippets of code. I'm looking for a way to optimize the
>>> ByteBuffer reallocation if possible. *Part 1 in ProtocolParser*
>>> // Check if buffer is full
>>> if (processingBuffer.remaining() == processingBuffer.capacity()) {
>>>     // If full - reallocate, but check if the max length is attained
>>>     if (processingBuffer.capacity() + processingBuffer.remaining() < LIMITBB) {
>>>         ByteBuffer newBB = ByteBufferFactory.allocateView(
>>>                 processingBuffer.capacity() * 2,
>>>                 processingBuffer.isDirect());
>>>         newBB.put(processingBuffer);
>>>         processingBuffer = newBB;
>>>         WorkerThread workerThread = (WorkerThread) Thread.currentThread();
>>>         workerThread.setByteBuffer(processingBuffer);
>>>
>>
>> Why not use a ConcurrentLinkedQueue to store unused ByteBuffer instances?
>> That way you don't keep going back to the heap/native memory; instead you
>> can re-use an instance, or allocate a larger byte buffer up front so you
>> don't have to re-allocate.
>>
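If I understand the pooling idea correctly, it would be roughly the
following (my own sketch, the names are made up; it also assumes
constant-size buffers, which is exactly the limitation Alexey points out
below):

import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Rough sketch of a buffer pool backed by a ConcurrentLinkedQueue.
public final class ByteBufferPool {

    private final ConcurrentLinkedQueue<ByteBuffer> pool =
            new ConcurrentLinkedQueue<ByteBuffer>();
    private final int bufferSize;

    public ByteBufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    // Reuse a released buffer if one is available, otherwise allocate.
    public ByteBuffer acquire() {
        ByteBuffer bb = pool.poll();
        if (bb == null) {
            bb = ByteBuffer.allocate(bufferSize);
        }
        bb.clear();
        return bb;
    }

    // Hand a buffer back so the next caller can reuse it.
    public void release(ByteBuffer bb) {
        pool.offer(bb);
    }
}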
> IMHO it's quite tricky to implement your own memory management. It's one
> thing if our implementation uses constant-size ByteBuffers, but if not...
> it could mean that a ByteBuffer we pool is never reused. We would also
> need additional code to control the ConcurrentLinkedQueue size, calculate
> the optimal size for newly allocated ByteBuffers, etc. IMHO it's not very
> easy to implement.
> In Grizzly 2.0 we will have a smart Buffer management implementation, and
> I'm very curious about the benefit it will bring.
>
> Thanks.
>
> WBR,
> Alexey.
>
>
>
>>
>>
>> A+
>>
>> -- Jeanfrancois
>>
>>>     } else {
>>>         s_logger.info("BUFFER max length was reached");
>>>         processingBuffer.clear();
>>>         maxBufferReached = true;
>>>         return maxBufferReached;
>>>     }
>>> }
>>> *Part 2 sending data to the clients*
>>> public class Client ....
>>>     public void sendToClient(StringBuffer sb) {
>>>         ByteBuffer writeBuffer =
>>>                 ByteBuffer.allocateDirect(sb.toString().getBytes().length);
>>>         writeBuffer.put(sb.toString().getBytes());
>>>         writeBuffer.flip();
>>>         ......
>>>     }
>>> }
>>>
>>