users@grizzly.java.net

Re: Grizzly production ready? ConcurrentHandler & scenarios

From: Jeanfrancois Arcand <Jeanfrancois.Arcand_at_Sun.COM>
Date: Wed, 18 Mar 2009 11:32:49 -0400

Salut,

felixx wrote:
> Hi,
> Hmm, now I realize that losing events (long poll) may happen. I was under
> the impression that the ConcurrentHandler's embedded queue would keep the
> posted events until the next poll starts. However, it looks like this won't
> work because the handler gets removed from the context after the response
> is committed.

Yes, if you want a "delivery guarantee" mechanism (to make sure no
messages are lost), you must implement a NotificationHandler:

http://weblogs.java.net/blog/jfarcand/archive/2007/03/new_adventures_1.html
https://grizzly.dev.java.net/nonav/apidocs/com/sun/grizzly/comet/NotificationHandler.html


which will take care of that for you.

> We do have a problem. Because streaming doesn't always work (a buffering
> proxy is one case), I need to be able to fall back on long polling, but
> losing events is not acceptable. For long polling there should be a way to
> keep a handler with an internal queue in the context but temporarily marked
> as 'resumed' (so the posted events can be enqueued but no i/o will take
> place). When the next poll starts, the handler should be moved to the 'i/o
> enabled' state and any queued events written (as a batch).
> This is a big problem for me and, though I don't know how yet, I must find
> a hack/workaround ... unless you have a better suggestion.

I really think the NotificationHandler approach can be a good solution, as
inside it you can keep some kind of storage where you correlate the
"resumed" CometHandler with its messages.

We (Sun) offer a similar mechanism in the GlassFish Enterprise suite (but
you have to pay). Technically it is a NotificationHandler :-)

>
> The '200 threads' pool would be for pushing/writing the response. Is this
> number configurable?

Yes, you can set the ExecutorService yourself on a CometContext... but
which GF or Grizzly version are you planning to use?
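
The pool itself is plain java.util.concurrent; here is a sketch of a bounded
200-thread configuration (how it is handed to the CometContext differs
between versions, hence the question above):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PushPool {

    // Fixed pool of 200 threads with a bounded queue, so a burst of
    // notifications queues up instead of spawning unbounded threads.
    public static ExecutorService create() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                200, 200, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(10000));
        pool.allowCoreThreadTimeOut(true); // let idle threads retire
        return pool;
    }
}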

> In the 2nd app, the client is Flash and an event is a Flash movie frame
> that can be from 0.5k to 100k. It's a way of simulating a progressive
> download (to avoid RTMP).

A+

-- Jeanfrancois

>
>
> Jeanfrancois Arcand-2 wrote:
>> Salut,
>>
>> felixx wrote:
>>> Hi and thanks for your comments and for 1.9.9
>>> I want you guys to have a big hit at JavaOne so I'll take your word that
>>> the
>>> news will be good.
>>> Regarding the number of threads used to execute the handlers, I just feel
>>> that this way we play it safer, so a bunch of lazy clients/weird network
>>> conditions will not affect everybody. Also I would like the latency to be
>>> almost the same for everybody.
>>> I have 2 scenarios/apps:
>>> 1. small payload, under 1k; will try streaming first (I guess this is
>>> lighter for the server) for each client and fall back to long polling if
>>> the client/network doesn't behave;
>> That looks promising...but note that as soon as you long poll you may
>> start losing messages in between reconnections.
>>
>>
>>> 2. only streaming, payload can be big, 50-100K
>>> There are too many variables so I'll stop here. Short story: do you think
>>> having a bigger thread pool would help? 100-200 threads are not so
>>> expensive in terms of memory & context switching, and if it helps to play
>>> it safer, why not?
>> Do you mean the thread pool for accepting requests or the one for pushing?
>> Take a look at:
>>
>> http://cometdaily.com/2008/04/01/new-technology-new-problems-delayed-pushes-operations/
>>
>> I agree 200 is not expensive, but depending on what your application
>> does it may not be optimal. Are you writing large chunks?
>>
>> A+
>>
>> --Jeanfrancois
>>
>>>
>>> Jeanfrancois Arcand-2 wrote:
>>>> Salut,
>>>>
>>>> felixx wrote:
>>>>> Salut et merci, I have to say you almost convinced me :-)
>>>>> 1. Can you share some results please, just to get an idea, not details.
>>>>> What
>>>>> I'm trying to achieve is not so big. Let's say 5000 long-poll connected
>>>>> clients, 2000 sent messages/s (i.e. 1 msg sent to 2000 clients).
>>>> I need to ask my co-speaker as we were planning to do a big hit during
>>>> JavaOne :-). Probably yes :-)
>>>>
>>>>
>>>>> Is it true that their solution is close to what is considered to be
>>>>> added
>>>>> in
>>>>> servlet 3.0?
>>>> No, Servlet 3.0 is a complete re-think of Comet. Nothing from
>>>> Grizzly/Jetty/Tomcat has been re-used; the new async features build on
>>>> top of the preliminary work done on those containers. The main features
>>>> are the ability to suspend/resume the response and to dispatch a
>>>> suspended response to another resource. This is a subset of what
>>>> Grizzly Comet supports (but I'm pretty sure a lot of frameworks will
>>>> build on top of the 3.0 API).
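
For reference, a sketch of that suspend-and-dispatch model using the Servlet
3.0 async API as it eventually shipped (the servlet and the dispatch path are
made up, and none of this is Grizzly-specific):

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Needs asyncSupported=true on the servlet registration (annotation or web.xml).
public class SuspendServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        // Suspend: the container thread is released, the response stays open.
        AsyncContext ctx = req.startAsync(req, res);
        ctx.setTimeout(30000);

        // Later, from any thread, either resume in place:
        //   ctx.getResponse().getWriter().println("event");
        //   ctx.complete();
        // or dispatch the suspended response to another resource:
        //   ctx.dispatch("/render");
    }
}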
>>>>
>>>>
>>>>> If detecting that the client is no longer present or able to receive
>>>>> messages is based on catching an IOException when writing the response,
>>>>> then, based on what I've seen using Tomcat, I'm not sure how well this
>>>>> will work. To be really sure I would rather have a client timeout
>>>>> mechanism on the server to detect disconnected clients (check when the
>>>>> last ping event was successfully sent).
>>>> Right, this is how you need to do it right now (except with Grizzly,
>>>> which can detect a client close). With the others, you need a thread
>>>> that pings the connection to see if it is still open or not.
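
A sketch of such a ping thread (HeartbeatReaper, the clientId key and the
30-second period are made up; the PrintWriter stands in for whatever
response object you keep per suspended client):

import java.io.PrintWriter;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Server-side ping: every 30s write a small keep-alive to each suspended
// response; a failed write marks the client dead so its handler can be
// resumed/removed.
public class HeartbeatReaper {

    private final Map<String, PrintWriter> clients =
            new ConcurrentHashMap<String, PrintWriter>();
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    public void register(String clientId, PrintWriter out) {
        clients.put(clientId, out);
    }

    public void start() {
        timer.scheduleAtFixedRate(new Runnable() {
            public void run() {
                for (Map.Entry<String, PrintWriter> e : clients.entrySet()) {
                    PrintWriter out = e.getValue();
                    out.print("\n");          // harmless ping byte
                    out.flush();
                    if (out.checkError()) {   // write failed: client is gone
                        clients.remove(e.getKey());
                        // ...resume/remove the matching CometHandler here
                    }
                }
            }
        }, 30, 30, TimeUnit.SECONDS);
    }
}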
>>>>
>>>>
>>>>> 2. I guess I was misled by seeing in the examples the handler being
>>>>> created
>>>>> in doGet and by the doc saying: " ... Resume the Comet request and
>>>>> remove
>>>>> it
>>>>> from the active CometHandler list. Once resumed, a CometHandler must
>>>>> never
>>>>> manipulate the HttpServletRequest or HttpServletResponse as those
>>>>> object
>>>>> will be recycled and may be re-used to serve another request. If you
>>>>> cache
>>>>> them for later reuse by another thread there is a possibility to
>>>>> introduce
>>>>> corrupted responses next time a request is made. ... "
>>>>> So I should be able to cache it in the client's session, attach the new
>>>>> Response in the next poll and then add it back to the context?
>>>> Yes.
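
A sketch of that doGet() flow (the topic name is a placeholder, creating and
caching your own CometHandler implementation on the first request is omitted,
and the CometContext is assumed to have been registered in init()):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

import com.sun.grizzly.comet.CometContext;
import com.sun.grizzly.comet.CometEngine;
import com.sun.grizzly.comet.CometHandler;

public class LongPollServlet extends HttpServlet {

    private static final String TOPIC = "/chat"; // placeholder topic name

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        // Context assumed registered in init() via CometEngine.register(TOPIC).
        CometContext context = CometEngine.getEngine().getCometContext(TOPIC);

        HttpSession session = req.getSession(true);
        CometHandler handler = (CometHandler) session.getAttribute("handler");
        if (handler == null) {
            // First request: create your own CometHandler implementation and
            // cache it in the session (not shown); the sketch just bails out.
            res.sendError(HttpServletResponse.SC_BAD_REQUEST);
            return;
        }

        // Next poll: attach the *fresh* response, never the recycled one,
        // then suspend this request by re-adding the handler.
        PrintWriter writer = res.getWriter();
        handler.attach(writer);
        context.addCometHandler(handler);
    }
}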
>>>>
>>>>> I'll have to check the code to understand how it works. I think it
>>>>> would be useful to be able to add the handler to the context using a
>>>>> custom ID. I'm wondering what happens if I add the same handler
>>>>> instance twice.
>>>> The Handler will be invoked many times. But point taken...I will improve
>>>> the javadoc on that front.
>>>>
>>>>> 3. I think this is explained in 2 now.
>>>>>
>>>>> 4. I need to understand how async http write would work. But anyway,
>>>>> just
>>>>> to
>>>>> use a brutal approach, do you see a problem in having a pool of
>>>>> 200-300
>>>>> dispatching threads just to be sure?
>>>> It really depends on what the application writes (the payload). Can you
>>>> elaborate?
>>>>
>>>> Thanks!
>>>>
>>>> -- Jeanfrancois
>>>>
>>>>
>>>>>
>>>>> Jeanfrancois Arcand-2 wrote:
>>>>>> Salut,
>>>>>>
>>>>>> felixx wrote:
>>>>>>> Hi all,
>>>>>>> I'm trying to decide on a Comet server side solution. My first
>>>>>>> preference
>>>>>>> would be to go with GF V3 and Grizzly. However, based on the existing
>>>>>>> issues
>>>>>>> list
>>>>>>> (475, 478, 479) I do have some concerns regarding the
>>>>>>> stability/maturity
>>>>>>> of
>>>>>> 475: bad test (just recently added, OS dependent failure)
>>>>>> 478: a JavaScript problem (and I'm quite bad with JavaScript)
>>>>>> 479: Maybe an issue with cometd in GF v3.
>>>>>>
>>>>>> So nothing related to Comet (server side) :-). We lacked internal
>>>>>> testing for a while (the situation is fixed now :-)). Grizzly Comet is
>>>>>> stable, though it is true that a minor 1.9.x release might introduce
>>>>>> minor regressions.
>>>>>>
>>>>>> If you're looking for an immature but nice Comet framework, look at
>>>>>> Atmosphere.dev.java.net (shameless plug, as I'm releasing 0.1 today
>>>>>> :-))
>>>>>>
>>>>>>
>>>>>>> the Comet implementation. Here are some questions please:
>>>>>>> 1. Do you guys know of / have you used Grizzly Comet in a real
>>>>>>> production environment? I know small examples with a few clients will
>>>>>>> work, but how about having 5000 clients and 2000 messages/s? Has
>>>>>>> anyone tested/benchmarked such a scenario?
>>>>>> That will be the topic of my JavaOne session this year :-) ...yes, we
>>>>>> have done some tests with that, but it really depends on the
>>>>>> application itself, long poll vs http streaming, etc. We have two
>>>>>> automated benchmarks inside the workspace:
>>>>>>
>>>>>> https://grizzly.dev.java.net/nonav/xref-test/com/sun/grizzly/comet/CometUnitTest.html
>>>>>>
>>>>>> which can always be looked at. Of course they aren't real benchmarks
>>>>>> with a real application.
>>>>>>
>>>>>>
>>>>>>> It would be unacceptable for us to have the app hanging or being
>>>>>>> unreliable (see the above issues). I've seen some references about GF
>>>>>>> performance, but I'm referring strictly to the Comet side. It's not
>>>>>>> easy to say, and I may be wrong, but it looks like Jetty has been used
>>>>>>> in such scenarios, worked fine and seems more mature in this area.
>>>>>> Difficult to convince you here, as Jetty was the first one on the
>>>>>> market, and at the time GlassFish was just moving from its "zero" to
>>>>>> its "hero" role :-). But the way Jetty handles suspend/resume cannot
>>>>>> scale, as every time you suspend it throws an exception and your
>>>>>> Servlet ends up being invoked twice. I like Jetty, but the way
>>>>>> suspend/resume has been implemented cannot scale IMO, and the logic
>>>>>> needed inside a Servlet (double invocation) makes the programming
>>>>>> model complicated.
>>>>>>
>>>>>> Not only that, but with Jetty you don't have any support for detecting
>>>>>> when clients close the connection (which can cause memory leaks in
>>>>>> some applications because of wasted resources), something Grizzly
>>>>>> supports.
>>>>>>
>>>>>> Also, the way information/messages are pushed in Jetty has to be
>>>>>> implemented by the application itself, whereas in Grizzly you can
>>>>>> easily filter/throttle/push/cluster using CometContext.notify() and a
>>>>>> NotificationHandler. This is less work for the application developer.
>>>>>>
>>>>>>
>>>>>>> 2. Maybe I'm missing something here, but is there a way to avoid
>>>>>>> recreating a CometHandler every time the same client reconnects for a
>>>>>>> long poll?
>>>>>> You don't have to re-create it. Just re-add it to the list of
>>>>>> suspended responses (CometContext.addCometHandler()).
>>>>>>
>>>>>>> I would need a ConcurrentHandler type (with the internal event
>>>>>>> queue), and it seems expensive to have thousands of handlers created
>>>>>>> for each poll cycle and then dropped when an event has occurred. Why
>>>>>>> not just be able to replace the embedded Response and keep the
>>>>>>> existing handler as long as the client is present?
>>>>>> CometHandler.attach() is for that. But you still need to invoke
>>>>>> resumeCometHandler() followed by addCometHandler().
>>>>>>
>>>>>>
>>>>>>> 3. Here's a scenario I've seen using a different 'long polling'
>>>>>>> solution. Due to a network condition the server is not able to push
>>>>>>> the events and these accumulate in the handler's queue (the server is
>>>>>>> not always able to see right away that the client cannot receive the
>>>>>>> events). The client has a mechanism to detect whether ping events are
>>>>>>> received and, if not, will try to connect again. Now you end up having
>>>>>>> 2 handlers for the same client: the old one with some events in the
>>>>>>> queue, also blocking a thread, and the fresh one. How can we handle
>>>>>>> this type of situation? In a different system I used, I would just
>>>>>>> detect the situation, replace the Response and start pushing the
>>>>>>> content of the queue.
>>>>>> Why do you think creating CometHandlers is expensive? Even if they are
>>>>>> (based on your application), you can always use CometHandler.attach()
>>>>>> to recycle them. Did I misunderstand you?
>>>>>>
>>>>>>
>>>>>>> 4. Even with a ConcurrentHandler you may end up having a lot of
>>>>>>> threads blocked because of slow clients. It looks like, if you expect
>>>>>>> 5000 clients, it's safer to use let's say 400 threads to execute the
>>>>>>> handlers. Any comments on this?
>>>>>> It's up to your application to either:
>>>>>>
>>>>>> + enable async http write in Grizzly (so no blocking thread) inside the
>>>>>> application (Grizzly internals will handle the "blocking" for you
>>>>>> instead), or
>>>>>> + define a "strategy" inside the NotificationHandler to detect such
>>>>>> slow clients and either discard them or park them inside a "queue" for
>>>>>> a later write (a sketch follows below).
>>>>>>
>>>>>> Note that the same issue will arise with Jetty :-)
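
A sketch of such a strategy (SlowClientPolicy and its threshold are made up,
not Grizzly API): count undelivered messages per client and give up on anyone
who falls too far behind instead of letting them pin a thread.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SlowClientPolicy {

    private static final int MAX_BACKLOG = 256;   // tune per payload size

    private final ConcurrentMap<String, AtomicInteger> backlog =
            new ConcurrentHashMap<String, AtomicInteger>();

    /** Call when a message is parked instead of written; true = keep client. */
    public boolean onParked(String clientId) {
        AtomicInteger n = backlog.get(clientId);
        if (n == null) {
            n = new AtomicInteger();
            AtomicInteger prev = backlog.putIfAbsent(clientId, n);
            if (prev != null) n = prev;
        }
        return n.incrementAndGet() <= MAX_BACKLOG; // false => resume & drop
    }

    /** Call after a successful write/drain for the client. */
    public void onDelivered(String clientId) {
        backlog.remove(clientId);
    }
}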
>>>>>>
>>>>>>
>>>>>>> A lot of questions, I know. Again, GF & Grizzly look very promising,
>>>>>>> great work so far; I'm just not sure, and I need your help to convince
>>>>>>> myself, that it is ready for a real-life application.
>>>>>> You are welcome. We have multiple implementations inside and outside
>>>>>> Sun that build on top of Comet. One of them is our internal Sun IM
>>>>>> product...so there is a possibility of ~30,000 users using it at the
>>>>>> same time :-)
>>>>>>
>>>>>> Feel free to continue the discussion if I was unclear or if I missed
>>>>>> the point.
>>>>>>
>>>>>> A+
>>>>>>
>>>>>> -- Jeanfrancois
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Thanks!