Hi Alexey,
It was very helpful.
I will email again after I roughly complete the memcached client.
Thanks!
Regards,
Bongjae Chang
On 1/3/12 7:24 PM, "Oleksiy Stashok" <oleksiy.stashok_at_oracle.com> wrote:
>Hi Bongjae,
>
>>
>> I am trying to support a Memcached (http://memcached.org/) client based
>> on Grizzly.
>Great :)
>
>> While I considered several client connection models, I had a question.
>>
>> If the NIOConnection is shared among multiple threads, is it possible
>> for a client thread to receive the proper reply corresponding to its
>> request?
>If correlation is not defined by the higher-level protocol, then no.
>
>>
>> If a user sends request1 to a server, the server should return
>> response1, and the client that sent request1 should receive response1
>> properly (request/response should be paired).
>> When multiple threads write packets over a shared connection in
>> non-blocking mode, Grizzly will use the write queue, and the server may
>> send responses in order.
>>
>> But I am not sure that the client's threads will receive their
>> responses in the matching order.
>Grizzly core doesn't make any assumptions about how requests and
>responses are correlated.
>
>> For this model, I think packets should contain a sequence number or
>> id for retrieving the proper response.
>Right, it should be solved at a higher level, if the protocol wants to
>support multiplexing.
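As a sketch of that idea (all names here are hypothetical, not Grizzly or
memcached APIs): each request frame can carry a sequence id that the server
echoes back in the response, similar in spirit to the "opaque" field of
memcached's binary protocol. A minimal framing might look like:

```java
import java.nio.ByteBuffer;

// Hypothetical wire format: [seqId: 8 bytes][length: 4 bytes][payload].
// The server would copy seqId unchanged into the matching response frame.
final class Frame {

    static ByteBuffer encode(long seqId, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(12 + payload.length);
        buf.putLong(seqId);          // correlation id, echoed by the server
        buf.putInt(payload.length);  // payload length
        buf.put(payload);
        buf.flip();
        return buf;
    }

    // Peek the correlation id without consuming the buffer, so the
    // read-side filter can route the frame to the waiting requester.
    static long decodeSeqId(ByteBuffer buf) {
        return buf.getLong(0);
    }
}
```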
>
>> In addition, it seems that Grizzly doesn't have a convenient API for
>> retrieving the reply, except for the write-result future, like the following.
>>
>> For example:
>> ---
>> future = connection.write(message);
>> response = future.get(...); // WriteResult#getMessage() returns the
>> written packet, not the read packet.
>> ---
>> Any thoughts?
>Sure, the write(...) method API lets you check whether a specific write
>has completed or not.
>What you're looking for is higher-level and protocol-specific, because in
>general a protocol could be:
>1) one-way messages (1:0), where no response is expected at all;
>2) request/response (1:1), as in normal (non-pipelined) HTTP and
>probably memcached;
>3) request/multiple-responses (1:many);
>etc.
>
>So it's a task for the higher-level protocol implementation to build the
>API you're looking for.
>For example, the Grizzly provider for the async HTTP client, which Ryan
>was/is working on, implements this kind of API.
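One common shape for such a higher-level API, sketched here with hypothetical
names (RequestResponseClient, send, onResponse are illustrations, not Grizzly
classes): user threads register a pending future keyed by sequence id, and the
single read-side handler completes the matching future when a response frame
arrives on the shared connection.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical higher-level client over one shared connection.
final class RequestResponseClient {

    private final AtomicLong seq = new AtomicLong();
    private final Map<Long, CompletableFuture<String>> pending =
            new ConcurrentHashMap<>();

    // Called by any user thread. The id would be written into the packet
    // header alongside the request, e.g. via connection.write(frame).
    CompletableFuture<String> send(String request) {
        long id = seq.incrementAndGet();
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(id, future);
        // connection.write(frame(id, request));  // shared NIOConnection
        return future;
    }

    // Called by the connection's read filter when a response frame arrives;
    // the id comes from the echoed correlation field in the frame header.
    void onResponse(long id, String response) {
        CompletableFuture<String> future = pending.remove(id);
        if (future != null) {
            future.complete(response);
        }
    }
}
```

With this shape, each caller simply blocks on (or chains off) its own future,
regardless of the order in which responses arrive on the wire.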
>
>>
>> Of course, I can consider another model if sharing the connection among
>> multiple threads is not easily supported in Grizzly. For example, if a
>> client sends packets concurrently to the same server, the client could
>> open a separate connection per request or obtain connections from a
>> connection pool.
>Again, we need to understand whether the higher-level protocol (memcached)
>supports this, in other words whether it provides some request/response
>correlation mechanism.
>
>> I think this is a trade-off (using more connection resources for
>> scalability vs. reusing only one connection for efficiency).
>I absolutely agree. Usually, if the higher-level protocol doesn't provide
>any built-in correlation support, you can implement a connection cache
>with exclusive connection access. HTTP clients usually do that: they
>maintain a TCP connection cache, and when the user wants to open an HTTP
>connection, we first try to obtain a TCP connection from the cache; if no
>connection is available, we create a new one. Once the HTTP client is
>done with an HTTP connection, we try to return the TCP connection to the
>cache so it can be reused by another HTTP client connection.
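The cache-with-exclusive-access pattern described above can be sketched
roughly as follows (ConnectionPool, borrow, release are hypothetical names;
a real pool would also handle timeouts, validation, and closing surplus
connections):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Hypothetical pool giving each borrower exclusive use of one connection,
// so request/response pairing needs no correlation ids on the wire.
final class ConnectionPool<C> {

    private final BlockingQueue<C> idle;
    private final Supplier<C> factory;

    ConnectionPool(int capacity, Supplier<C> factory) {
        this.idle = new ArrayBlockingQueue<>(capacity);
        this.factory = factory;
    }

    // Try the cache first; create a new connection only when none is idle.
    C borrow() {
        C connection = idle.poll();
        return (connection != null) ? connection : factory.get();
    }

    // Once the caller is done, return the connection for reuse.
    // If the queue is full, a real pool would close the connection instead.
    void release(C connection) {
        idle.offer(connection);
    }
}
```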
>
>Hope this helps.
>
>WBR,
>Alexey.
>
>> I wonder whether I am understanding this correctly and would like to
>> get your advice.
>> If I am missing anything, or if Grizzly already has proper APIs for
>> this, please let me know.
>
>
>>
>> Thanks.
>>
>> Regards,
>> Bongjae Chang
>