users@grizzly.java.net

Re: who can help me? How can a Grizzly-based server receive 2MB of data from a client?

From: Jeanfrancois Arcand <Jeanfrancois.Arcand_at_Sun.COM>
Date: Wed, 06 Feb 2008 11:59:07 -0500

Hi,

windshome wrote:
> Thanks, my problem has been resolved.
>
> I re-create the ByteBuffer in WorkerThreadImpl when the buffer's position
> equals its capacity.
>
> When I send 3MB or bigger request data, an OutOfMemoryError occurs. Should
> I make the 4MB ByteBuffer even larger?

Well, growing it indefinitely is dangerous IMO. Is there no way you can
stream the bytes, or store them outside the BB? Do you really have to load
all the bytes into memory before doing something with them?

>
> My idea was to extend the ByteBuffer, but I have not found a method for
> that. So I allocate a new heap view and copy the old buffer's data into
> it. Is there some way to do this with a direct buffer that avoids the
> re-allocation?

No, there is not. If you need to do that, you should slice a large heap
ByteBuffer (so bytes are moved inside it, not copied). Take a look at:

https://grizzly.dev.java.net/nonav/xref/com/sun/grizzly/util/ByteBufferFactory.html
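The slice approach can be sketched with plain java.nio (a minimal
illustration of the share-don't-copy behaviour; it does not use
ByteBufferFactory itself):

```java
import java.nio.ByteBuffer;

public class SliceDemo {
    public static void main(String[] args) {
        // One large backing heap buffer, shared by many views.
        ByteBuffer parent = ByteBuffer.allocate(64 * 1024);

        // Carve an 8K window out of the parent; slice() shares the
        // backing array, so no bytes are copied.
        parent.position(0).limit(8 * 1024);
        ByteBuffer view = parent.slice();

        view.put((byte) 42);               // a write through the view...
        System.out.println(parent.get(0)); // ...is visible in the parent: 42
    }
}
```

Each view has its own position/limit but no storage of its own, which is
the point of slicing one large buffer instead of allocating per-thread.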


>
> Another problem: I haven't found where I can store my session information.
> My server/client protocol is:
>
> 1. Client opens a socket.
> 2. Client sends a hello request.
> 3. Server receives the hello, generates a 16-byte random value, and sends
> it back to the client as a challenge response.
> 4. Client receives the challenge response, builds the real business
> request with the server's challenge embedded in it, makes a digital
> signature over all the request data, and sends it all on the same socket.
> 5. Server receives the real request, verifies the digital signature, and
> checks the challenge against the session.
>
> All these steps must happen on the same socket; when the socket closes, we
> must generate a new challenge.
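Steps 3-5 of this handshake can be sketched with the standard java.security
API (the algorithm names and payload layout below are illustrative
assumptions, not taken from your actual protocol):

```java
import java.security.*;

public class ChallengeDemo {
    public static void main(String[] args) throws Exception {
        // Step 3: server generates a 16-byte random challenge per socket.
        byte[] challenge = new byte[16];
        new SecureRandom().nextBytes(challenge);

        // Step 4: client appends the challenge to its request and signs
        // everything with its private key.
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        byte[] request = "real business request".getBytes("UTF-8");
        byte[] payload = new byte[request.length + challenge.length];
        System.arraycopy(request, 0, payload, 0, request.length);
        System.arraycopy(challenge, 0, payload, request.length, challenge.length);

        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(kp.getPrivate());
        signer.update(payload);
        byte[] sig = signer.sign();

        // Step 5: server verifies the signature over request + challenge.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(payload);
        System.out.println(verifier.verify(sig)); // true
    }
}
```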
>
>
> For this communication protocol I must store the challenge in some kind of
> session (a session tied to a socket, not an HTTP session), but I cannot
> find anywhere to store it. The Context changes between the two executions
> of the protocol chain, and the worker thread changes as well.
>
>
> In Apache MINA I found IoSession; is there something I can use as a session?

Use the Context setAttribute/getAttribute (same idea as IoSession) and pick
the appropriate AttributeHolder:

>     /**
>      * Return <code>AttributeHolder</code>, which corresponds to the
>      * given <code>AttributeScope</code>
>      *
>      * @param scope - <code>AttributeScope</code>
>      * @return - <code>AttributeHolder</code> instance, which contains
>      *           <code>AttributeScope</code> attributes
>      */
>     public AttributeHolder getAttributeHolderByScope(AttributeScope scope) {
>         AttributeHolder holder = null;
>         switch (scope) {
>             case REQUEST:
>                 holder = this;
>                 break;
>             case SELECTOR:
>                 holder = selectorHandler;
>                 break;
>             case CONTROLLER:
>                 holder = controller;
>                 break;
>         }
>
>         return holder;
>     }

See:

https://grizzly.dev.java.net/nonav/xref/com/sun/grizzly/Context.html

https://grizzly.dev.java.net/nonav/xref/com/sun/grizzly/TCPSelectorHandler.html

Mainly, based on the scope in which you want to store the element, you can
store it at the Context, Selector, or Controller level.
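To make the scope choice concrete, here is a self-contained sketch that
mimics the pattern with stand-in classes (illustrative types only, not the
real Grizzly Context/AttributeHolder API):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-ins mirroring the Context/AttributeHolder pattern quoted above.
public class ScopeDemo {
    enum AttributeScope { REQUEST, SELECTOR, CONTROLLER }

    static class AttributeHolder {
        private final Map<String, Object> attrs = new HashMap<>();
        void setAttribute(String k, Object v) { attrs.put(k, v); }
        Object getAttribute(String k) { return attrs.get(k); }
    }

    static class Context extends AttributeHolder {
        final AttributeHolder selectorHandler = new AttributeHolder();
        final AttributeHolder controller = new AttributeHolder();

        AttributeHolder getAttributeHolderByScope(AttributeScope scope) {
            switch (scope) {
                case REQUEST:    return this;            // per-request
                case SELECTOR:   return selectorHandler; // per-connection
                case CONTROLLER: return controller;      // global
            }
            return null;
        }
    }

    public static void main(String[] args) {
        Context ctx = new Context();
        byte[] challenge = new byte[16]; // per-connection challenge

        // Store the challenge at SELECTOR scope so it outlives a single
        // request but stays tied to the connection.
        ctx.getAttributeHolderByScope(AttributeScope.SELECTOR)
           .setAttribute("challenge", challenge);

        Object back = ctx.getAttributeHolderByScope(AttributeScope.SELECTOR)
                         .getAttribute("challenge");
        System.out.println(back == challenge); // true
    }
}
```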


Thanks

-- Jeanfrancois


>
>
>
> Harsha Godugu wrote:
>> windshome wrote:
>>> When a message arrives, we read it from a socket channel, but if we
>>> think not all the data has been read yet, we return false, register a
>>> read event with the socket's selector, and then read from the
>>> SocketChannel using the buffer from the old worker thread.
>>>
>> correct.
>>> In my ProtocolParser I do not clear the buffer until all the data has
>>> been read completely, so when the buffer's position equals its limit, a
>>> read from the channel returns size 0.
>>>
>> aha... here comes the question: what do we do when you are expecting
>> more data and there is no room left (in the ByteBuffer, BB) to fill?
>> This can happen all the time when the application handles a huge number
>> of bytes, like 2+MB of data. Every protocol is different. I note that
>> your application uses a custom protocol, so this custom protocol should
>> have a way to handle chunking / fragmentation: it should tell the
>> server/client how many bytes are still coming, and when it's done
>> sending.
>> It's in the protocol.
>>
>> Say, for example, Grizzly's buffer size is 8K, and we are going to
>> handle, say, 2MB from one end to the other (client/server). Once you
>> read the first 8K bytes, you need to process them, store them, do
>> whatever, to empty the underlying buffer for the next read. This is
>> where the parser comes into the picture. Ask the following question:
>> what would be the optimal size of the buffer to parse one complete
>> message? Based on that *optimal size* (chosen during design), you can
>> set the BB size in Grizzly accordingly, before starting the Controller.
>> The goal is that as soon as we have read the "n bytes" that constitute
>> one single message, we give those bytes to a parser for further action.
>> In some applications that action could be masking the read bytes, or
>> altering every eleventh byte, etc.
>> As an extreme case, if your optimal BB size were 1MB, that would be a
>> bad design. Why? When doing I/O you want to do it as fast as you can,
>> considering network bandwidth etc.; the application will run into OOMs
>> and bail out very fast. Too small a size will occasionally leave the BB
>> full while the parser cannot parse completely, because it still needs
>> that one last byte which has yet to be read. In those cases you might
>> consider recreating a new BB (similar to realloc in C) for those last
>> bits.
>>
>> So, in a nutshell, it all depends on the protocol, the parser, and the
>> ByteBuffer size.
>>
>> Good luck :-)
>>
>> -Harsha
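The accumulate-until-complete idea Harsha describes can be sketched as
follows; the Accumulator class and its length-prefixed framing are
hypothetical, and in real Grizzly code this logic would live in the
ProtocolParser:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// Drain each (small) read buffer into external storage and report a
// complete message only once the 4-byte length prefix is satisfied.
public class Accumulator {
    private final ByteArrayOutputStream stash = new ByteArrayOutputStream();
    private int expected = -1; // -1 until the length prefix has arrived

    /** Drain one filled buffer; returns the message if now complete. */
    byte[] feed(ByteBuffer bb) {
        bb.flip();
        byte[] chunk = new byte[bb.remaining()];
        bb.get(chunk);
        bb.clear();                        // buffer is free for the next read
        stash.write(chunk, 0, chunk.length);

        byte[] all = stash.toByteArray();
        if (expected < 0 && all.length >= 4) {
            // Big-endian length prefix; mask to avoid sign extension.
            expected = ((all[0] & 0xFF) << 24) | ((all[1] & 0xFF) << 16)
                     | ((all[2] & 0xFF) << 8)  |  (all[3] & 0xFF);
        }
        if (expected >= 0 && all.length >= 4 + expected) {
            byte[] msg = new byte[expected];
            System.arraycopy(all, 4, msg, 0, expected);
            return msg;
        }
        return null;                       // still expecting more data
    }

    public static void main(String[] args) {
        Accumulator acc = new Accumulator();
        ByteBuffer bb = ByteBuffer.allocate(8); // deliberately tiny buffer
        byte[] frame = { 0, 0, 0, 6, 'h', 'e', 'l', 'l', 'o', '!' };
        byte[] out = null;
        for (int i = 0; i < frame.length; ) {
            while (bb.hasRemaining() && i < frame.length) bb.put(frame[i++]);
            out = acc.feed(bb);            // second call completes the message
        }
        System.out.println(new String(out)); // hello!
    }
}
```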
>>
>>> If I save the data in my context and clear the buffer, then return
>>> false after clearing it, will that work correctly?
>>>
>>> Harsha Godugu wrote:
>>>
>>>> windshome wrote:
>>>>
>>>>
>>>>> Oh, I think you have not understood me. My problem is: when the
>>>>> ByteBuffer is full but not all of my data has been read (my data size
>>>>> is 2MB), how can I read all the data?
>>>>>
>>>>>
>>>> Hi: please note that the ByteBuffer used for I/O here in Grizzly has a
>>>> constant threshold. That means that as the application sends/receives
>>>> data (on either the server or the client side), the amount of data in
>>>> the buffer at any moment can vary from zero up to that threshold. The
>>>> threshold has a limit: it can be 8K, 16K, or 256K at most, based on
>>>> system limits and application needs; it cannot be 8MB or some number of
>>>> gigabytes. The idea is that the ByteBuffer we use in Grizzly (for I/O)
>>>> is really a BUFFER, i.e., a temporary place for reading from and
>>>> writing to a channel. That means the application needs to store the
>>>> flushed bytes somewhere else. So, as soon as you read, say, 8K bytes
>>>> and then expect more, you need to store the bytes already read
>>>> somewhere else (in some other object, a file, shared memory, etc.) and
>>>> then keep reading the rest of the data until your protocol handler
>>>> tells you there are no more messages to parse!
>>>>
>>>> hths
>>>> -Harsha
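When draining the buffer between reads, ByteBuffer.compact() is the usual
tool: it keeps any unconsumed tail bytes and leaves the buffer ready for
the next channel read. A minimal demonstration:

```java
import java.nio.ByteBuffer;

public class CompactDemo {
    public static void main(String[] args) {
        ByteBuffer bb = ByteBuffer.allocate(8);
        bb.put(new byte[] { 1, 2, 3, 4, 5, 6 }); // 6 bytes read off the wire

        bb.flip();
        bb.get(); bb.get(); bb.get(); bb.get();  // consume a 4-byte message

        // compact() shifts the 2 unconsumed bytes to the front and leaves
        // the buffer in write mode, ready for the next channel.read(bb).
        bb.compact();
        System.out.println(bb.position());       // 2: room for 6 more bytes
    }
}
```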
>>>>
>>>>
>>>>> Jeanfrancois Arcand-2 wrote:
>>>>>
>>>>>
>>>>>
>>>>>> Salut,
>>>>>>
>>>>>> windshome wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>> I process the data to make a digital signature, and return the
>>>>>>> signed data to the client.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>> Interesting :-) The number of ByteBuffers created is, by default,
>>>>>> equal to the number of active threads in Grizzly. Right now our
>>>>>> default thread pool doesn't purge inactive threads, so it may or may
>>>>>> not be a problem if you need a lot of threads. You might want to
>>>>>> replace the default thread pool with one from java.util.concurrent.*
>>>>>> that can purge inactive threads and their associated byte buffers.
>>>>>>
>>>>>> Are you able to determine the size of your expected traffic? If your
>>>>>> VM is properly tuned (I will let Charlie give some hints in case you
>>>>>> are interested), it shouldn't be a problem, assuming you don't need
>>>>>> 1000 threads :-)
>>>>>>
>>>>>> A+
>>>>>>
>>>>>> -- Jeanfrancois
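A sketch of such a replacement pool using java.util.concurrent (the sizes
and timeout below are illustrative; wiring it into Grizzly's Controller is
not shown):

```java
import java.util.concurrent.*;

public class PurgingPoolDemo {
    public static void main(String[] args) throws Exception {
        // A pool that purges threads idle for more than 60s; with
        // allowCoreThreadTimeOut even the core threads (and whatever
        // per-thread state they hold, e.g. a ByteBuffer) are released.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 20, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        pool.allowCoreThreadTimeOut(true);

        Future<Integer> f = pool.submit(() -> 6 * 7);
        System.out.println(f.get()); // 42
        pool.shutdown();
    }
}
```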
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Jeanfrancois Arcand-2 wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> sorry for the delay...
>>>>>>>>
>>>>>>>> windshome wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> I wrote a server based on Grizzly, and a client which sends data
>>>>>>>>> to the server and receives the response. Sending 50 bytes or 2K of
>>>>>>>>> data to the server works, but when I send 20K or 2MB of data, the
>>>>>>>>> server does not respond.
>>>>>>>>> Looking at the Grizzly code, I found that the ByteBuffer of a
>>>>>>>>> worker thread has a capacity of 8192; when I changed it to 81920,
>>>>>>>>> the 20K data could be received by the server.
>>>>>>>>>
>>>>>>>>> If my server sets the initial ByteBuffer size to 81920, I think it
>>>>>>>>> would use too much memory. Can someone tell me a way to size the
>>>>>>>>> buffer dynamically, so it can receive a few bytes of data or
>>>>>>>>> several MB of data?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>> What are you doing with the bytes? Are you saving them somewhere on
>>>>>>>> disk or in a db (freeing your memory), or must you keep them in
>>>>>>>> memory?
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>> -- Jeanfrancois
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> My server's protocol parser code:
>>>>>>>>>
>>>>>>>>> final ProtocolFilter parserProtocolFilter = new ParserProtocolFilter() {
>>>>>>>>>     public ProtocolParser newProtocolParser() {
>>>>>>>>>         return new ProtocolParser() {
>>>>>>>>>             private boolean isExpectingMoreData = false;
>>>>>>>>>             private ByteBuffer byteBuffer;
>>>>>>>>>             private Request message;
>>>>>>>>>
>>>>>>>>>             public boolean hasMoreBytesToParse() {
>>>>>>>>>                 return false;
>>>>>>>>>             }
>>>>>>>>>
>>>>>>>>>             public boolean isExpectingMoreData() {
>>>>>>>>>                 return isExpectingMoreData;
>>>>>>>>>             }
>>>>>>>>>
>>>>>>>>>             public Object getNextMessage() {
>>>>>>>>>                 return message;
>>>>>>>>>             }
>>>>>>>>>
>>>>>>>>>             public boolean hasNextMessage() {
>>>>>>>>>                 ByteBuffer dup = byteBuffer.duplicate();
>>>>>>>>>                 System.out.println("byteBuffer.position()=" + byteBuffer.position());
>>>>>>>>>                 if (byteBuffer.position() == 0) {
>>>>>>>>>                     isExpectingMoreData = true;
>>>>>>>>>                     return false;
>>>>>>>>>                 }
>>>>>>>>>                 dup.flip();
>>>>>>>>>
>>>>>>>>>                 if (dup.remaining() < 4) {
>>>>>>>>>                     isExpectingMoreData = true;
>>>>>>>>>                     return false;
>>>>>>>>>                 }
>>>>>>>>>
>>>>>>>>>                 byte[] bs = new byte[4];
>>>>>>>>>                 dup.get(bs);
>>>>>>>>>                 // Mask each byte to avoid sign extension when
>>>>>>>>>                 // decoding the big-endian length prefix.
>>>>>>>>>                 int len = ((bs[0] & 0xFF) << 24) | ((bs[1] & 0xFF) << 16)
>>>>>>>>>                         | ((bs[2] & 0xFF) << 8) | (bs[3] & 0xFF);
>>>>>>>>>
>>>>>>>>>                 if (dup.remaining() < len) {
>>>>>>>>>                     isExpectingMoreData = true;
>>>>>>>>>                     return false;
>>>>>>>>>                 }
>>>>>>>>>                 byte[] data = new byte[len];
>>>>>>>>>                 dup.get(data);
>>>>>>>>>
>>>>>>>>>                 try {
>>>>>>>>>                     message = new ByteRequest(data);
>>>>>>>>>                 } catch (Exception e) {
>>>>>>>>>                     e.printStackTrace();
>>>>>>>>>                     message = null;
>>>>>>>>>                     return false;
>>>>>>>>>                 }
>>>>>>>>>                 return true;
>>>>>>>>>             }
>>>>>>>>>
>>>>>>>>>             public void startBuffer(ByteBuffer bb) {
>>>>>>>>>                 byteBuffer = bb;
>>>>>>>>>             }
>>>>>>>>>
>>>>>>>>>             public boolean releaseBuffer() {
>>>>>>>>>                 byteBuffer = null;
>>>>>>>>>                 message = null;
>>>>>>>>>                 return false;
>>>>>>>>>             }
>>>>>>>>>         };
>>>>>>>>>     }
>>>>>>>>> };
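One bug worth calling out in the parser above: decoding the length prefix
with unmasked shifts sign-extends negative bytes, so any length whose low
bytes exceed 0x7F decodes wrong. A small demonstration
(ByteBuffer.wrap(bs).getInt() is the simpler alternative):

```java
public class SignExtensionDemo {
    public static void main(String[] args) {
        byte[] bs = { 0, 0, 1, (byte) 0x90 }; // big-endian 400

        // Unmasked shift-and-add: bs[3] sign-extends to 0xFFFFFF90,
        // so the "400" prefix decodes as 144.
        int bad = (bs[0] << 24) + (bs[1] << 16) + (bs[2] << 8) + bs[3];

        // Masking each byte (or using ByteBuffer.getInt()) is correct.
        int good = ((bs[0] & 0xFF) << 24) | ((bs[1] & 0xFF) << 16)
                 | ((bs[2] & 0xFF) << 8)  |  (bs[3] & 0xFF);

        System.out.println(bad + " " + good); // 144 400
    }
}
```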
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>> ---------------------------------------------------------------------
>>>>>>>> To unsubscribe, e-mail: users-unsubscribe_at_grizzly.dev.java.net
>>>>>>>> For additional commands, e-mail: users-help_at_grizzly.dev.java.net
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>>
>