[jax-rs-spec users] [jsr339-experts] Re: Async API and Threads again

From: Markus KARG <>
Date: Sun, 14 Oct 2012 17:42:13 +0200

> Hi,
> bear with me - I am still trying to wrap my head around async API
> issues.
> Having thought about it for a while I am thinking right now that using
> the Async API actually only makes sense if the async response is parked
> for a while. The important aspect being having eventually more
> responses than threads dealing with them.
> In other words, it IMHO makes no sense to have an async response be
> handled by another thread (e.g. via @Asynchronous) right away because
> that would also consume one thread per response.
> I fail to see the difference between using more 'backend' threads in
> order to have less http handling threads. One could equally well just
> increase the http pool max size.
> Or yet in other words: If one does not park more than one response in a
> single thread the number of used threads will simply O(n) increase with
> the number of requests. (Which to prevent is the very reason of the
> Async API AFAIU).
> Or is there anything in any of the containers around that makes an Http
> handling thread any different from another thread?

The biggest breakthrough in server scalability (hence in server design) was
the (rather late) insight that there must always be a decoupling of request
and thread (and some years before, of request and task). Before this
milestone, a blocking request (roughly) always blocked a LAN-card, hence,
the server could only process as many clients as it had LAN-cards. In
contrast, a really sophisticated and modern http server has isolated thread
groups: application worker threads (these may block, so the pool scales with
the number of assumed or measured blocking I/Os, typically starting at the
number of installed CPU cores + 1 and auto-scaling within a configurable
range), pure http listener threads (these never block thanks to non-blocking
channel groups [...nelGroup.html], so the pool can be as small as the number
of installed host adapters), and work dispatchers (these never block either),
which, simply spoken, transfer requests into a work queue and responses out
of a work queue; that queue forms the anonymous connection between the http
listeners ("the LAN cards") and the worker threads ("the CPUs"). As soon as a
JAX-RS method blocks its executing thread, the consequences depend on the
type of thread it is bound to. If the JAX-RS container assumes the method to
be non-blocking (= annotated @Stateless, which implies non-blocking methods
by definition), it can simply bind it to a http thread, which makes the queue
and dispatcher obsolete and runs really fast, as there is no thread switch
and no queue put/get to do. But blocking this thread will simply block that
LAN-card completely! If the method is marked as @Asynchronous, this tells the
server that it could potentially block, so for safety it cannot run in a http
listener thread: a dispatcher thread will put it in the work queue, the next
free worker thread will pick it up at some time and put the response back in
the queue, one of the dispatchers will move the response to the http
listener's queue... which all eats LOTS of time but ensures that
the LAN-cards can be served with 100% capacity at any time, even under
heaviest load (and, as you have only very few LAN-cards, and LAN is totally
slow compared to intra-JVM-work, this is the most essential thing to ensure
at any time).
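To make the thread-group separation concrete, here is a minimal plain-Java
sketch (all names hypothetical; a BlockingQueue stands in for the
server-internal work queue, and a fixed pool of cores + 1 threads stands in
for the worker group):

```java
import java.util.concurrent.*;

// Sketch of the listener / dispatcher / worker decoupling described above.
// The "listener" side never blocks on application work: it only enqueues
// requests. A separately sized worker pool (cores + 1, allowed to block)
// drains the queue, so a blocked worker never stalls a listener.
public class WorkQueueSketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> workQueue = new LinkedBlockingQueue<>();
        BlockingQueue<String> responseQueue = new LinkedBlockingQueue<>();

        int workerCount = Runtime.getRuntime().availableProcessors() + 1;
        ExecutorService workers = Executors.newFixedThreadPool(workerCount);
        for (int i = 0; i < workerCount; i++) {
            workers.submit(() -> {
                try {
                    while (true) {
                        String request = workQueue.take(); // wait for work
                        Thread.sleep(5);                   // simulated blocking I/O
                        responseQueue.put("response:" + request);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();    // orderly shutdown
                }
            });
        }

        // "Listener" side: enqueue 8 requests without blocking on the work...
        for (int i = 0; i < 8; i++) {
            workQueue.put("req" + i);
        }
        // ...and collect the 8 responses as they arrive (order may vary).
        for (int i = 0; i < 8; i++) {
            responseQueue.take();
        }
        System.out.println("handled 8 requests");
        workers.shutdownNow();
    }
}
```

The point of the sketch is only the decoupling: the enqueue loop never waits
on the simulated blocking I/O, exactly as the http listeners never wait on
the application workers.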

So you see, it makes a huge difference in performance and responsiveness
(hence scalability) whether or not one uses @Asynchronous, even in a
non-pub/sub scenario -- at least for full-scale services under heavy load (I
think everything besides full-scale, heavy-load services should be out of
scope for defining an API which is intended to be part of Java EE 7, as
trivialities like these typically will not be implemented using JAX-RS).
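The suspend/resume idea behind JAX-RS 2.0's @Suspended AsyncResponse can be
mimicked in plain Java for illustration (hypothetical stand-ins: a
CompletableFuture plays the suspended response, and the executor plays the
@Asynchronous worker pool):

```java
import java.util.concurrent.*;

// Sketch: the "http thread" suspends the response and returns immediately,
// so it is free to serve the next request; a worker thread performs the
// blocking work and resumes (completes) the response later.
public class SuspendResumeSketch {
    static final ExecutorService WORKERS = Executors.newFixedThreadPool(2);

    // Analogue of a resource method taking a suspended response:
    // returns at once, completing the future from a worker thread.
    static CompletableFuture<String> handle(String request) {
        CompletableFuture<String> response = new CompletableFuture<>();
        WORKERS.submit(() -> {
            try {
                Thread.sleep(20);                  // simulated blocking backend call
                response.complete("done:" + request);
            } catch (InterruptedException e) {
                response.completeExceptionally(e);
            }
        });
        return response;                           // "http thread" is free again here
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<String> r = handle("req1");
        System.out.println("response pending: " + !r.isDone());
        System.out.println(r.get());               // blocks only the main thread here
        WORKERS.shutdown();
    }
}
```

In a real container the same shape appears as a resource method that takes
the suspended response as a parameter and resumes it from the worker pool;
the essential property is that the calling thread returns before the
blocking work is done.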

> Bottom line of all this being: I think the mandatory advice on using
> the Async API would be: store more than one response in a single thread
> until response processing can be started. We should see a collection-
> type variable holding async responses. Otherwise the effect of the
> Async API would be zero at best (given its own overhead)
> Agreed?

Not at all, since pub/sub is only one possible scenario where async is
useful. IMHO scalability is the one that will be needed by far more users
than pub/sub. As a @Stateless bean MUST NOT block, hence MUST NOT wait for
threads (as this foils the server's internal auto-scaling facilities
completely), it cannot simply use the sample code found in the spec for
non-EJB deployments, but it MUST USE @Asynchronous. The effect of NOT using
async in all blocking scenarios is a dramatic scalability breakdown due to
the above-described thread group separation. The overhead you see is only a
problem in case people have non-blocking scenarios. But honestly, who is
dumb enough to use async in non-blocking (hence fast-return) JAX-RS methods
simply because it looks like a cool new feature? If *such* users are the
readers of the spec, then we should obviously add a warning: "You SHOULD
use @Asynchronous if you are using any kind of blocking operations,
including but not limited to JPA, JDBC, and the non-async JAX-RS Client."