On 01/27/2011 01:34 AM, Sri Narayanan T wrote:
> Hi Shreedhar,
> Thanks for the reply.
>
> The use case involves sharing a simple HashMap<String,String> with key
> length = 100 chars and value length = 200 chars.
> The key-value pairs are immutable - meaning one put, n reads, and a
> remove is all that happens for each entry.
>
> System throughput for the map
> Number of entries = 50 K
> cluster size = 12 blades (instances)
> operations per second for the cluster:
> - Write op: 30 op/s
> - Remove op: 30 op/s
> - Read op: 777 op/s
>
> Read, write, and remove can happen from any instance in the cluster.
> So the average number of ops per instance will be (total ops / number
> of instances in the cluster): write = 30/12 = 2.5 op/s; remove =
> 30/12 = 2.5 op/s; and read = 777/12 = 64.75 op/s respectively.
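The access pattern and per-instance arithmetic above can be sanity-checked with a small sketch. This is illustrative only (the class name and variables are mine, not Shoal API); it just replays the figures quoted in the thread:

```java
import java.util.concurrent.ConcurrentHashMap;

public class DscLoadSketch {
    public static void main(String[] args) {
        // Cluster-wide throughput figures quoted in the thread.
        int instances = 12;
        double writes = 30.0, removes = 30.0, reads = 777.0;

        System.out.printf("per-instance write:  %.2f op/s%n", writes / instances);
        System.out.printf("per-instance remove: %.2f op/s%n", removes / instances);
        System.out.printf("per-instance read:   %.2f op/s%n", reads / instances);

        // The described lifecycle of each entry: one put, n reads, one remove.
        ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
        cache.put("key-1", "value-1");   // single write
        String v = cache.get("key-1");   // one of n reads
        cache.remove("key-1");           // single remove
        System.out.println("read back: " + v);
    }
}
```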
Hi Narayanan,
When you say "Read, write and remove can happen from any instance," is
it for the same key? More specifically:
1. If key K1 is created in instance inst1, will the same key K1 be
read from any other instance?
2. Will concurrent threads be writing using the same key on
multiple instances? I.e., if K1 is written by thread t1 in instance1,
will the same key be used for writing concurrently from another
instance?
Based on your answers, we can see whether the shoal-cache module can
meet your requirements.
In GlassFish, the shoal-cache module is used to hold HTTP and EJB
session data. We expect the containers to be fronted by a sticky
load balancer to avoid the conditions mentioned above.
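The sticky condition described here (each key only ever written from one instance) amounts to a deterministic key-to-instance mapping. A minimal sketch of that idea follows; the class and method names are illustrative, not Shoal or GlassFish API:

```java
// Illustrative only: route every operation on a key to one "owner"
// instance so no two instances ever write the same key concurrently.
public class StickyRouter {
    private final int clusterSize;

    public StickyRouter(int clusterSize) {
        this.clusterSize = clusterSize;
    }

    // Deterministic: the same key always maps to the same instance,
    // which is what a sticky load balancer effectively guarantees.
    public int ownerOf(String key) {
        return Math.floorMod(key.hashCode(), clusterSize);
    }

    public static void main(String[] args) {
        StickyRouter router = new StickyRouter(12);
        // Every caller, on every instance, computes the same owner,
        // so all writes for "K1" are serialized on one instance.
        System.out.println("owner of K1: " + router.ownerOf("K1"));
        System.out.println("stable: "
            + (router.ownerOf("K1") == router.ownerOf("K1")));
    }
}
```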
Thanks,
--Mahesh
>
> Will the default Shoal DSC tolerate such a load?
> What are the alternative designs?
>
> Sri
>
> ------------------------------------------------------------------------
> *From:* Shreedhar Ganapathy [mailto:shreedhar.ganapathy_at_oracle.com]
> *Sent:* Wednesday, January 26, 2011 5:20 AM
> *To:* Sri Narayanan T
> *Cc:* users_at_shoal.java.net
> *Subject:* Re: Shoal DSC - JMS
>
>
> Hi Narayanan
> I am sorry for the late response to your email.
> I have added my comments below.
>
> On 1/24/11 3:33 AM, Sri Narayanan T wrote:
>> Hi Shreedhar,
>> Going through Shoal's default DSC implementation, I see it is just
>> a HashMap kept in sync using Shoal GMS messaging.
> Yes, that is correct.
>> Also , you have advised to use the default DSC for light throughput
>> scenarios.
> Yes.
>> I was wondering if this is because the underlying Shoal GMS-based,
>> inherently point-to-point messaging model would suffer under high
>> traffic / heavy volume.
> Part of the reason for suggesting lightweight use only is that the
> DSC is a shared-cache implementation and by its nature will not scale
> to a large number of cache entries and cache operations. Work on it
> was originally started, but our focus was more on the clustering
> aspects (see below for more on that).
>> I would like to modify the default implementation to use JMS
>> publish/subscribe for data sharing. Let me know how feasible this
>> is, and also whether it would have any impact on the rest of the
>> clustering system. Assuming it is feasible, may I know why this was
>> not using JMS in the first place? It would have performed well even
>> for heavy traffic then.
>> I presume it was left for the user to figure out :-)
>
> Our focus was on the clustering framework part of Shoal to support
> reasonably significant deployments for that use case - we have had
> some significant ones over the last couple of years, including
> Ericsson. We did not want a third-party product dependency such as an
> MQ built into a Shoal component until we had the cycles to define a
> pluggability framework for it, nor did we have cycles to implement
> messaging over the JMS protocol.
>
> I would like to understand your use case better to respond to whether
> a JMS implementation of the DSC is feasible or even needed.
> For instance, would it help your requirements if you had an additional
> Shoal library on top of Shoal GMS that allowed you to use this library
> as a backing store cache?
>
> The Shoal team has been working on building, and is close to
> releasing, a BackingStore implementation called Shoal Cache that uses
> GMS underneath as a clustering framework and for its messaging with
> other cluster members. The BackingStore library acts on API calls
> made by the employing code to manage its own lifecycle and the
> lifecycle of the cache entries.
>
> Given a group of cache members, for each request to save data, a
> given cache member selects a replica member from among the group of
> members. So request 1 could get saved to cache 2, request 2 saved to
> cache 4, request 3 to cache 1, and so on.
> As a result, the cache scalably and consistently distributes data
> objects among replica members, never doing a share-all as the DSC
> does, and is thus capable of scaling. When a request comes in for a
> piece of data, the same consistent distribution algorithm can be used
> to retrieve it from the right cache member. And the cache removes
> idle entries after a configurable threshold of time.
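The replica-selection scheme described above is commonly implemented with consistent hashing: every member computes the same replica for a given key, so whoever saved the data and whoever later reads it agree on where it lives. Below is a minimal sketch of that general technique; it is my own illustration, not the actual Shoal Cache algorithm or API:

```java
import java.util.Map;
import java.util.TreeMap;

// Minimal consistent-hash ring. Members are placed on a ring by hash;
// a key is assigned to the first member at or after its own hash.
public class ReplicaSelector {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    public ReplicaSelector(String... members) {
        for (String m : members) {
            ring.put(hash(m), m);
        }
    }

    private static int hash(String s) {
        return s.hashCode() & 0x7fffffff; // non-negative ring position
    }

    // Walk clockwise from the key's position to the next member,
    // wrapping around to the start of the ring if necessary.
    public String replicaFor(String key) {
        Map.Entry<Integer, String> e = ring.ceilingEntry(hash(key));
        return e != null ? e.getValue() : ring.firstEntry().getValue();
    }

    public static void main(String[] args) {
        ReplicaSelector sel =
            new ReplicaSelector("cache1", "cache2", "cache3", "cache4");
        // Save and later retrieval agree, because both sides compute
        // the same replica from the same key.
        String saveTarget = sel.replicaFor("request-1");
        String readTarget = sel.replicaFor("request-1");
        System.out.println("replica: " + saveTarget
            + ", consistent: " + saveTarget.equals(readTarget));
    }
}
```

The deterministic mapping is what lets the cache avoid the DSC's share-all replication: each entry lives on one replica member rather than on every member.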
>
> Would the above address your use case? Please let us know more, and
> we can share pointers to this upcoming release as part of Shoal.
>
> Thanks
> Shreedhar
>
>> *Sri Narayanan ThangaNadar*
>> Software Engineer | Ericsson India Pvt Ltd
>> GSC India,
>> Chennai, India.
>> Mobile: +91 9003128546
>> sri.narayanan.t_at_ericsson.com
>> www.ericsson.com