On Thu, 2012-07-12 at 16:23 -0700, Danny Coward wrote:
> Hi folks,
>
> I think this will probably get to be a discussion, so let me start the
> ball rolling:-
>
> As the API stands, where each Endpoint handles the lifecycle events of
> its multiple peers, there is a choice as to whether to say that the
> implementation is allowed to call Endpoint methods concurrently or
> sequentially. The latter might be more convenient for developers; and
> may not 'cost' too much in scalability since one could argue that
> connect/disconnect events are not so common as, say, messages going
> back and forth.
>
> And if the developer uses a separate MessageListener instance per Peer
> (as the API suggests), then the onMessage methods on each instance are
> only ever called sequentially (assuming I'm correct that the websocket
> protocol is designed with only one message at a time per socket).
>
> Would that approach, which essentially means that only one container
> thread at a time hits the connect/disconnect and onMessage callbacks,
> be an acceptable balance of developer convenience and
> scalability?
>
> Or should we think of specifying something more like the servlet
> model, where connect/disconnect and onMessage callbacks could be
> called concurrently by multiple threads. More of a burden on
> developers, and would probably need some rejigging of the API.
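For what it's worth, here is a minimal sketch of the per-peer listener
pattern described above. It uses the javax.websocket shapes (Endpoint,
Session, MessageHandler.Whole) rather than the exact draft API under
discussion, so the names are illustrative only; the point is that each
connection gets its own handler instance, and the container only ever
hands it one message at a time:

import java.io.IOException;

import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.MessageHandler;
import javax.websocket.Session;

public class EchoEndpoint extends Endpoint {
    @Override
    public void onOpen(final Session session, EndpointConfig config) {
        // One handler instance per peer/session: messages on this
        // connection are delivered to this instance sequentially.
        session.addMessageHandler(new MessageHandler.Whole<String>() {
            @Override
            public void onMessage(String text) {
                // No application-level locking needed if only one
                // container thread at a time enters this callback.
                try {
                    session.getBasicRemote().sendText(text);
                } catch (IOException e) {
                    // error handling elided for the sketch
                }
            }
        });
    }
}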
One container thread running at a time for a given stream, for me.
Otherwise, things get complex fast for applications: in most cases they
would have to do heavy synchronization, which is quite likely more
expensive.
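To make that syncing point concrete, here is a sketch of what a handler
with per-connection state would end up looking like if the container
were allowed to invoke it from several threads at once for the same
stream (again, illustrative javax.websocket names):

import javax.websocket.MessageHandler;

public class CountingHandler implements MessageHandler.Whole<String> {
    // Per-connection state; if several container threads could enter
    // onMessage at once for the same connection, every access would
    // need to be guarded like this.
    private final Object lock = new Object();
    private int messageCount;

    @Override
    public void onMessage(String text) {
        synchronized (lock) {
            messageCount++;
            // ... process the message while holding the lock ...
        }
    }
}

With delivery serialized per stream, none of that locking is needed.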
--
Remy Maucherat <rmaucher_at_redhat.com>
Red Hat Inc