dev@grizzly.java.net

Re: Initial commit of grizzly-memcached

From: Bongjae Chang <bongjae.chang_at_gmail.com>
Date: Mon, 30 Jan 2012 01:03:08 +0900

Hi Alexey,

Thank you for reviewing the sources.

I completely agree with your comments.

About 1)
I addressed the defect with an AtomicBoolean (shutdown flag). It is still not
perfect, because the methods don't use any locks or synchronized blocks, but I
expect that the shutdown method and the addCache method (which is called by
the cache's Builder#build()) will rarely be executed concurrently.
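Here is a minimal sketch of the pattern (the class and member names are
simplified stand-ins, not the actual GrizzlyMemcachedCacheManager code):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicBoolean;

    public class CacheManagerSketch {
        private final AtomicBoolean shutdownFlag = new AtomicBoolean(false);
        private final Map<String, Runnable> caches =
                new ConcurrentHashMap<String, Runnable>();

        // called from the cache's Builder#build(); rejected once shutdown started
        public void addCache(final String name, final Runnable stopAction) {
            if (shutdownFlag.get()) {
                throw new IllegalStateException("cache manager has been shut down");
            }
            caches.put(name, stopAction);
            // a small window remains between the flag check and the put,
            // which is the imperfection mentioned above
        }

        // only the first caller performs the real shutdown
        public void shutdown() {
            if (!shutdownFlag.compareAndSet(false, true)) {
                return;
            }
            for (final Runnable stopAction : caches.values()) {
                stopAction.run();
            }
            caches.clear();
        }
    }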

About 2)
Fixed them.
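Roughly speaking, the change replaces the String attribute name with a typed
Grizzly Attribute, along these lines (a simplified sketch; the attribute name
and the Object pool type are only placeholders, not the actual fields):

    import org.glassfish.grizzly.Connection;
    import org.glassfish.grizzly.Grizzly;
    import org.glassfish.grizzly.attributes.Attribute;

    public class ConnectionPoolAttributeSketch {
        // created once; lookups then go through an index instead of a String key
        private static final Attribute<Object> CONNECTION_POOL_ATTRIBUTE =
                Grizzly.DEFAULT_ATTRIBUTE_BUILDER.createAttribute(
                        "GrizzlyMemcachedCache.connection.pool");

        static void attachPool(final Connection<?> connection, final Object pool) {
            CONNECTION_POOL_ATTRIBUTE.set(connection, pool);
        }

        static Object lookupPool(final Connection<?> connection) {
            return CONNECTION_POOL_ATTRIBUTE.get(connection);
        }
    }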

About 3)
I missed an important point. I also fixed it. Thank you for reminding me
about that!
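The gist here is that LinkedTransferQueue.size() is O(n), so instead of taking
a snapshot the eviction logic can rely on a counter that is maintained on every
borrow/return. A minimal sketch of that pattern (the names below are
illustrative, not necessarily the actual BaseObjectPool members):

    import java.util.concurrent.LinkedTransferQueue;
    import java.util.concurrent.atomic.AtomicInteger;

    public class PoolSizeHintSketch<T> {
        private final LinkedTransferQueue<T> queue = new LinkedTransferQueue<T>();
        // kept in sync on every borrow/return, so size queries are O(1)
        private final AtomicInteger poolSizeHint = new AtomicInteger();

        public T borrowObject() {
            final T object = queue.poll();
            if (object != null) {
                poolSizeHint.decrementAndGet();
            }
            return object;
        }

        public void returnObject(final T object) {
            queue.offer(object);
            poolSizeHint.incrementAndGet();
        }

        // the eviction task can use the hint instead of the heavy queue.size()
        public int idleObjects() {
            return poolSizeHint.get();
        }
    }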

About 4)
Right. Currently we can't write to the same connection from multiple threads.
A cache operation (get(), set(), getMulti(), etc.) is always executed on its
own connection, which is managed by the connection pool (BaseObjectPool). But
if we could share one connection among many operations from multiple threads,
we could optimize further.
For example, the Spymemcached and Xmemcached clients automatically coalesce
consecutive get operations into a single getMulti operation:

(write queue)
get(1) , get(2) , get(3) , set(4) , get(5) ===optimization===>
getMulti(1,2,3) -> set(4) -> get(5)
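
The optimization itself is simple. Here is a rough, self-contained sketch of
the coalescing step (this is only my illustration, not the actual Spymemcached
or Xmemcached code):

    import java.util.ArrayList;
    import java.util.List;

    public class GetCoalescingSketch {

        // input items look like "get 1" or "set 4"; output is the optimized order
        static List<String> optimize(final List<String> queued) {
            final List<String> wire = new ArrayList<String>();
            final List<String> gets = new ArrayList<String>();
            for (final String op : queued) {
                if (op.startsWith("get ")) {
                    gets.add(op.substring(4));   // collect consecutive get keys
                } else {
                    flushGets(wire, gets);       // emit the collected gets first
                    wire.add(op);                // then the non-get operation
                }
            }
            flushGets(wire, gets);               // trailing gets, if any
            return wire;
        }

        // a single pending get stays a get; two or more become one getMulti
        static void flushGets(final List<String> wire, final List<String> gets) {
            if (gets.isEmpty()) {
                return;
            }
            if (gets.size() == 1) {
                wire.add("get " + gets.get(0));
            } else {
                final StringBuilder keys = new StringBuilder();
                for (int i = 0; i < gets.size(); i++) {
                    if (i > 0) {
                        keys.append(',');
                    }
                    keys.append(gets.get(i));
                }
                wire.add("getMulti " + keys);
            }
            gets.clear();
        }
    }

With the write queue above, optimize() turns [get 1, get 2, get 3, set 4, get 5]
into [getMulti 1,2,3, set 4, get 5].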

Maybe we will be able to improve this after the 2.2.2 release, as you
suggested. I will file this issue in JIRA later.

Currently I am benchmarking grizzly-memcached against other clients (not
finished yet).
With 32-byte values the results looked good.
But with 128-byte values I ran into a problem: Spymemcached's TPS was very
high even though the memcached server's CPU usage and network traffic were
very low (I can't explain the result yet and I am investigating it now).

After I finish the benchmark, I will share the complete results.

Alexey wrote:
> In your handleWrite method, I noticed an optimization flag to switch from
single to composite buffer usage. Just curious, do you have any numbers to
share on which approach works better in which conditions?

In simple local tests, the single buffer was better than the composite buffer
in BasicBenchmark#testBenchmarkingGetMultiCommand().
It seemed that the single buffer had an advantage with a single thread or a
few threads, but I am not sure yet. :)
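
To show what the flag switches between, here is a rough sketch (simplified,
not the actual handleWrite code; the method shapes are only for illustration).
The single mode allocates one buffer and copies every fragment into it, while
the composite mode chains the fragments without copying their bytes:

    import org.glassfish.grizzly.Buffer;
    import org.glassfish.grizzly.memory.Buffers;
    import org.glassfish.grizzly.memory.MemoryManager;

    public class BufferModeSketch {

        // single-buffer mode: one allocation, then copy each fragment into it
        static Buffer buildSingle(final MemoryManager mm, final Buffer[] fragments,
                                  final int totalSize) {
            final Buffer result = mm.allocate(totalSize);
            for (final Buffer fragment : fragments) {
                result.put(fragment);   // copies the fragment's bytes
            }
            result.flip();
            return result;
        }

        // composite-buffer mode: chain the fragments without copying bytes
        static Buffer buildComposite(final MemoryManager mm, final Buffer[] fragments) {
            Buffer result = fragments[0];
            for (int i = 1; i < fragments.length; i++) {
                result = Buffers.appendBuffers(mm, result, fragments[i]);
            }
            return result;
        }
    }

My guess is that the single mode pays for the copy but writes one contiguous
region, while the composite mode avoids the copy at the cost of more
bookkeeping per fragment, which may be why the results differ between the
single-thread and multi-thread runs below.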

Here are the results (TPS) with 500 keys. (Each test was repeated 1~4 times.)

<other clients have no set-multi operation>
<spymemcached and xmemcached use only one connection across threads, so for
fairness I also tested them with a connection per thread (spymultimemcached,
xmultimemcached)>

  
* thread 1, value 32 bytes, loop 1000
(get)
grizzlymemcached: 10346, 10676, 10772
spymemcached: 9138, 9312, 9485
spymultimemcached: -
javamemcached: 14349, 14288, 14271
xmemcached: 9214, 8778, 8838
xmultimemcached: -

(get-multi)
grizzlymemcached(single): 230414, 230308, 228728
grizzlymemcached(composite): 180897, 181620
spymemcached: 210970, 211237, 212765
javamemcached: 200722, 200964, 200160
xmemcached: 206100, 207727, 206015

(set)
grizzlymemcached: 10192, 10320, 10281
spymemcached: 9453, 9465, 9573
spymultimemcached: -
javamemcached: 14123, 14135, 14072
xmemcached: 8867, 8616, 8610
xmultimemcached: -

(set-multi)
grizzlymemcached(single): 221631, 221926, 220848
grizzlymemcached(composite): 195771, 195848


* thread 50, value 32 bytes, loop 1000
(get-multi)
grizzlymemcached(single): 1233654, 1248190, 1239464, 1243100
grizzlymemcached(composite): 1553856, 1544926
spymemcached: 1346184, 1346765, 1309243, 1349309
spymultimemcached: 1389352, 1405283, 1413547, 1390511
javamemcached: 1123292, 1109631, 1130301
xmemcached: 208937, 209557
xmultimemcached: 214734, 208309

(set-multi)
grizzlymemcached(single): 802310, 805542
grizzlymemcached(composite): 863796, 863140


* thread 200, value 32 bytes, loop 250
(get-multi)
grizzlymemcached(single): 1841078, 1832038, 1837019
grizzlymemcached(composite): 1827618, 1820962
spymemcached: 1318495, 1324292, 1334329, 1320864
spymultimemcached: 1323801, 1316066, 1315858
javamemcached: 1147367, 1137035, 1147104
xmemcached: 208960, 209404
xmultimemcached: 208965

(set-multi)
grizzlymemcached(single): 854613, 854788
grizzlymemcached(composite): 853184


* thread 400, value 32 bytes, loop 125
(get-multi)
grizzlymemcached(single): 1920122, 1891360
grizzlymemcached(composite): 1877581, 1873922, 1916443
spymemcached: 1296075
spymultimemcached: 1203427
javamemcached: 1090702
xmemcached: 208815
xmultimemcached: 210036

(set-multi)
grizzlymemcached(single): 846482, 848954
grizzlymemcached(composite): 847773

Thanks!

Regards,
Bongjae Chang

From: Oleksiy Stashok <oleksiy.stashok_at_oracle.com>
Reply-To: <dev_at_grizzly.java.net>
Date: Fri, 27 Jan 2012 17:13:52 +0100
To: <dev_at_grizzly.java.net>
Subject: Re: Initial commit of grizzly-memcached

    
 Hi Bongjae,
 
 wow, you did a lot of work, I'm really impressed!
 
 Here are some notes I have:
 
----------------------------------------------------------------------------
----------------------------------------------------------------------------
 1) GrizzlyMemcachedCacheManager may have a state flag to forbid its usage
during/after shutdown.
 For example, if you look at the shutdown method, it looks like under certain
conditions, when one thread calls shutdown() and another calls addCache(),
it's possible that a newly created Cache will stay in the caches map and never
get closed.
 
 2) GrizzlyMemcachedCache.CONNECTION_POOL_ATTRIBUTE_NAME
 Was there a specific reason to use String-based attributes? IMO Grizzly
org.glassfish.grizzly.Attribute-based access might be faster. (Maybe it was
due to the AttributeBuilder bug you found?)
 
 3) EvictionTask
 IMO it could be optimized; maybe we shouldn't take a snapshot and use the
queue.size() method (which is pretty heavy for LTQ). It might be enough to use
pool.queue directly and the poolSizeHint value as the pool size?
 
 4) MemcachedClientFilter.handleWrite(...)
 When we write to the same connection from multiple threads at the same time,
we can break request-response correlation... and unfortunately this issue can
not be fixed at the moment.
 The problem is that you have a request queue to which requests are added
during MemcachedClientFilter.handleWrite(...) execution. Then, when processing
a response, you peek a request from the queue to form the request-response
pair.
 But if the write() method is called from different threads, the order in
which we add requests to the request queue may differ from the order in which
these requests are sent on the wire. To fix this issue, we would have to add
requests to the queue right before the specific request is sent on the wire.
This is not doable for now :(
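 Schematically (a simplified sketch, not the real filter code):

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    class CorrelationSketch {
        // one queue per connection: requests remembered in the order they are written
        private final Queue<String> requestQueue = new ConcurrentLinkedQueue<String>();

        void handleWrite(final String request) {
            requestQueue.offer(request);   // remembered for later correlation
            // ... the bytes are actually flushed later, possibly after another
            // thread's request that was offered after this one
        }

        void handleRead(final String response) {
            // assumed to belong to the oldest outstanding request
            final String request = requestQueue.poll();
            System.out.println(request + " <-> " + response);
        }
    }

 If two threads interleave between offer() and the actual flush, the wire order
no longer matches the queue order, and the pairing above becomes wrong.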
 
 I have a fix for this in my local repo, but I don't want to commit it for the
2.2.2 release because it includes API changes. I'm going to commit the fix
right after the 2.2.2 release.
 Let's just keep this issue in mind (or better, file it in JIRA :) ) and fix
it in Grizzly 2.3?
----------------------------------------------------------------------------
----------------------------------------------------------------------------
 
 In your handleWrite method, I noticed an optimization flag to switch from
single to composite buffer usage. Just curious, do you have any numbers to
share on which approach works better in which conditions?
 
 Thanks a lot!!!
 
 WBR,
 Alexey.
 
 
 
 On 01/18/2012 02:20 PM, Bongjae Chang wrote:
> Hi Alexey,
>
> I committed them. And I will continue to update, benchmark and optimize them.
>
> If you can review them and find bugs or improvement points, please let me know
> or fix them. (I think it may not be easy for you to review them now because
> they don't have documentation and javadoc yet. I will update them as soon as
> possible.) :-)
>
> Thanks!
>
> Regards,
> Bongjae Chang
>
> From: Oleksiy Stashok <oleksiy.stashok_at_oracle.com>
> Reply-To: <dev_at_grizzly.java.net>
> Date: Wed, 18 Jan 2012 13:37:16 +0100
> To: <dev_at_grizzly.java.net>
> Subject: Re: Initial commit of grizzly-memcached
>
> Hi Bongjae,
>
> On 01/18/2012 12:22 PM, Bongjae Chang wrote:
>> Hi,
>>
>> I am trying to support a memcached client based on Grizzly.
>>
>> Now I have tested only the basic commands, but there is more work to do,
>> such as writing javadoc, benchmarking against other clients, optimizing it,
>> etc.
>>
>> So, could I commit them first in grizzly/extras/memcached, like
>> grizzly/extras/thrift?
>>
> Sure, go ahead.
>
> Thank you very much!
>
> WBR,
> Alexey.
>
>> Thanks.
>>
>> Regards,
>> Bongjae Chang