jsr339-experts@jax-rs-spec.java.net

[jsr339-experts] Re: JAX-RS 2.0 API for asynchronous server-side request processing

From: Marek Potociar <marek.potociar_at_oracle.com>
Date: Wed, 27 Jul 2011 21:40:34 +0200

On 07/25/2011 08:04 PM, Markus KARG wrote:
> Marek,
>
>>> The target of decoupling can be reached rather easily:
>>> @GET Future<MyEntity> getAsync() { /* Use Executor to return Future instance */ }
>>> In case the return type is "Future<?>" the JAX-RS runtime can safely
>>> assume that the method will return without providing a physical result.
>>> The thread can return, and as soon as the Future is provided it can
>>> pick up work again and send back the response. No need for any
>>> additional complexity or annotations.
>> Does it mean that some JAX-RS thread has to be blocked while waiting
>> for the response? Also, if we decide to go with
>> your simplified proposal, how would you later evolve it into something
>> suitable for Pub/Sub support?
>
> The calling thread is not getting blocked, as the method obviously returns immediately with a Future in hand. The JAX-RS engine sees that it is a Future and knows: "Hey, the result will be provided later, so let's put that one on a stack for now and continue with some different work." In fact, the identification of the Future allows the same thread to do other work instead of getting blocked. It can be used to answer a different request, for example, or for any other asynchronous work that is queued by the engine (like asynchronous logging in the background, or re-balancing work queues). The creation of the Future is done by an ExecutorService, hopefully one provided by the container, so the container admin has control over the number of threads, priorities, etc. Certainly we could also allow private executors (Java SE, created manually), but a shared ExecutorService is more efficient.

My point is that there has to be a (set of) thread(s) that periodically checks whether the futures are complete. These
threads are effectively blocked, actively waiting and consuming resources.

Also, it would IMHO be difficult to prove that a shared, JAX-RS-provider-managed ES is going to be more efficient than
a custom ES in the open set of use cases we face.
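For reference, here is a minimal sketch of the pattern as I understand it. MyEntity is taken from your example; the executor is created manually with the plain Java SE factory, and none of this is an existing JAX-RS API, it only spells out the idea:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("orders")
public class OrdersResource {

    // Manually created executor; a container-provided one would be preferable.
    private static final ExecutorService EXECUTOR = Executors.newFixedThreadPool(10);

    @GET
    public Future<MyEntity> getAsync() {
        // The resource method returns immediately; the runtime is expected to
        // send the response once the Future completes.
        return EXECUTOR.submit(new Callable<MyEntity>() {
            public MyEntity call() throws Exception {
                return loadEntitySlowly(); // long-running work off the request thread
            }
        });
    }

    private MyEntity loadEntitySlowly() {
        return new MyEntity(); // placeholder for the actual long-running operation
    }
}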

>
> Pub/Sub support is currently not on our agenda

I disagree.

> and I actually doubt it is RESTful.

Unless you provide some additional reasoning, the argument is kind of cheap :)

> Anyways, if we decide to do Pub/Sub then a programmer could use something rather simple such as a @Singleton EJB queue for messages, into which any @POST method can post (no additional magic needed). The distribution is just as simple: someone asking for the next message (in the COMET sense) invokes @GET, which returns Future<Message>. The response is kept on hold, and the thread is free to do something different meanwhile. When the singleton receives a post, it looks up all the waiting Futures and fulfils them. Waiting does not necessarily mean "active spinning" or "being idle"; it can be done in a way that posts work to executors as soon as messages are there to be distributed. The singleton just needs a reference to the response-sending engine to put the filled Futures on a send queue (what you called "AsyncContext", if I remember correctly).

Not all JAX-RS users run their apps in a full Java EE environment. I would prefer we don't try to introduce features that
work only in some environments.
>
> Am I overlooking something?

The EJB dependency of such a solution.
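Just to make that dependency concrete, the approach as I read it would look roughly like the sketch below. Message, MessageBoard and PendingMessage are names I made up, and the FutureTask subclass is only one possible way to obtain a Future that can be fulfilled later:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.FutureTask;
import javax.ejb.Singleton;

// A Future that a waiting @GET method can return and that the singleton can
// later fulfil when a message is posted.
class PendingMessage extends FutureTask<Message> {
    PendingMessage() {
        super(new Runnable() { public void run() { } }, null);
    }
    void fulfil(Message m) {
        set(m); // completes the Future, which should trigger sending the response
    }
}

@Singleton
public class MessageBoard {

    private final Queue<PendingMessage> waiting = new ConcurrentLinkedQueue<PendingMessage>();

    // Called from a @GET resource method, which returns the Future and frees its thread.
    public PendingMessage nextMessage() {
        PendingMessage pending = new PendingMessage();
        waiting.add(pending);
        return pending;
    }

    // Called from a @POST resource method; resumes every waiting request at once.
    public void post(Message message) {
        PendingMessage pending;
        while ((pending = waiting.poll()) != null) {
            pending.fulfil(message);
        }
    }
}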

>
>>> If we want to give the server more control on the threads, we could
>>> add something like "getExecutorService()" to the context to prevent
>>> people from creating their own executors.
>>
>> IMHO, this may certainly be useful but I can also imagine people want
>> to use their own ES in many async cases. Would
>> that be possible?
>
> Yes, certainly. My proposal says that the JAX-RS engine identifies whether to go async or not just by "if instanceof Future". Where the ExecutorService comes from is of no interest for this. We can inject one provided by the container, but a user can also just use the Java SE Executors factory or whatever he likes to provide a Future. Future just serves as an async indicator to prevent another annotation.

Let me think more about using the future as the async indicator and how it would work in a bigger picture.
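To have something concrete in front of us while I do, the two variants might look like the sketch below. Injecting an ExecutorService via @Context is purely hypothetical at this point, and Report is just a placeholder entity:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;

@Path("reports")
public class ReportsResource {

    // Hypothetical: a container/provider-managed executor injected via @Context.
    @Context
    private ExecutorService managedExecutor;

    // User-managed alternative, created with the plain Java SE factory.
    private static final ExecutorService OWN_EXECUTOR = Executors.newCachedThreadPool();

    @GET
    @Path("managed")
    public Future<Report> managed() {
        return managedExecutor.submit(new Callable<Report>() {
            public Report call() { return buildReport(); }
        });
    }

    @GET
    @Path("own")
    public Future<Report> own() {
        return OWN_EXECUTOR.submit(new Callable<Report>() {
            public Report call() { return buildReport(); }
        });
    }

    private Report buildReport() {
        return new Report(); // stand-in for the real work
    }
}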
>
>>> So, the question is: Do we need anything more complex like this? I
>>> actually would say "no" in the first draft, as e.g. COMET is not a
>>> target of JAX-RS 2.0 according to the project description on JCP.org (I
>>> would accept that complexity if COMET would be a target).
>>
>> Not all pub/sub scenarios need to use Comet. A scenario in which one or
>> more requests are waiting for another request from
>> another client to resume them seems quite common.
>
> Seems you more care for Pub/Sub than for async. ;-)

Actually, I do care for both. A feature of resuming multiple responses based on a single event seems interesting enough
to explore in more detail.

>
> What scenario do you have in mind that is not possible with my proposal?

It's not a question of being possible or not. If we just provide a way of resuming a single request, then everything is
possible. That does not, however, mean that it would also be easy. I am shooting for making it easy.

>
>>> Another simple API would be:
>>>
>>> @GET @Async MyEntity getAsync() { return new MyEntity(); }
>>>
>>> This one could be just sugar that makes JAX-RS queue up the method
>>> execution, implicitly doing the same as above (use a singleton JAX-RS
>>> executor in standalone cases, or a container's executor service in Java
>>> EE cases, to produce a Future<?> by putting work on a queue). The
>>> question with this one would be: Why not put *all* requests on a
>>> queue? We could omit @Async then, the user has no need to even think
>>> about it, and JAX-RS could internally decide by its own algorithm how
>>> to handle all responses in the queue. The API then would be exactly the
>>> same as for non-async calls.
>>
>> I was thinking about the solution above too, but it seems to fail to
>> decouple the request processing from the
>> framework threads. Since there is no notion of an executor service as a
>> resource in Java EE yet, I'm not sure how we could
>> effectively tell the framework to use a user-configured and managed ES
>> via annotations.
>
> Why is that not decoupling? As soon as the caller thread sees @Async, it can just put the request into a queue and work on something else, while a different thread can pick the request from the queue and execute the method body. Perfectly decoupled (just like SwingWorker does).

What different thread? Is it still managed by the framework? If yes, then it's not decoupled from the framework threads;
in fact, it's then just an implementation detail.

Btw, the idea of introducing queues reminds me of SEDA. It would be a truly interesting RFE, but I'm not sure we should
go that far right now.
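Just to illustrate the kind of staging I have in mind, a bare-bones sketch with plain java.util.concurrent; all names are mine and nothing here is proposed API:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One stage of a SEDA-style pipeline: callers enqueue work and return
// immediately; a small, separately sized pool of workers drains the queue.
public class RequestStage {

    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();

    public RequestStage(int workers) {
        for (int i = 0; i < workers; i++) {
            Thread worker = new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            queue.take().run(); // pick up queued work and execute it
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    // Called by the request thread; it never blocks on the actual work.
    public void submit(Runnable work) {
        queue.add(work);
    }
}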

>
> Well, in fact each Java EE server must have a configurable worker pool, as it actually provides a link to it for a JCA RA (in some rather strange form of a work queue). So I think it will be possible for us to have that link, too (we just need to talk to the Java EE guys and define it).

I did talk to the EE guys. The problem is that right now the worker pool is not standardized and is typically configured
per container, not per app or per tenant (a new concept we need to keep in mind). I did receive some hints that some
worker mechanism will be introduced as a result of the JSR-236 effort (which takes over the tasks of the discontinued
JSR-237), but I have yet to receive more details and confirmation from Naresh (the 236 spec lead).

>
>>
>>>
>>> So to keep things simple, I want to ask that we first reduce to these
>>> two simple proposals. If we see a need to support something more
>>> complex covered by our mission statement, we should then think over how
>>> to add that on top of this simple API. I actually don't like discussing
>>> a rather complex API from top to bottom which already covers things
>>> not clearly part of our mission, which makes the API more complex to use
>>> than actually needed.
>>
>> Again, this is not a proposal yet. This paper tries to provide the
>> background for what we need to consider as part of
>> the proposal. But I would discourage splitting the async server-
>> side topic into two. We would keep a list of
>> requirements in the topic and decide which requirements will be covered
>> and which not.
>>
>> At the same time, I fail to see how the topics, including pub/sub,
>> covered in the paper go beyond our mission statement.
>> From the JSR:
>> "This JSR will specify a simple asynchronous request processing model
>> such that a response can be returned asynchronous
>> to the request. ..."
>
> "...SIMPLE..." !
>
> Pub/Sub is anything but simple. If it would be, the JMS spec would be much shorter. ;-)

All right, forget pub/sub - just think about the possibility of resuming multiple requests at once, based on some scheme
or based on the same context used for suspending, etc. I used the term pub/sub to draw a parallel and point out a typical
use case. Sorry for the confusion.

>
> The simplest way to asynchronously return is to use Future as an indicator and, whenever it is found, put that request into an executor service. That would fulfil our mission. So Pub/Sub IS far beyond our mission from what I read in that charter.
>

Resuming multiple requests on a single event certainly does *not* contradict or reach beyond our mission statement.

>>
>>>
>>> For the non-blocking support I think this would be very valuable and
>>> we should add it to the API.
>>
>> What are the use cases that the async client API cannot sufficiently
>> cover?
>> To get a better picture, can you provide a discussion with some
>> examples of how we could support this feature? Feel free to
>> add a new page to the wiki and reference it from the existing page...
>
> Unfortunately I don't have the time to set up a complete wiki page, so let me give a short introduction.
>
> For example, the first GET returns an XML file with an Order that contains a list of, let's say, 100 items, each item having its own XML data file plus a PNG photo. The user expects to get all the data as fast as possible. All items and photos come from different servers, so by using "parallel" retrieval it could in theory be very fast. Certainly this could be done by using several threads, but with more threads there will be more overhead. If instead only a few threads are used, together with a non-blocking API, those few threads could work on all of those many GETs "in parallel", with much less overhead. Especially if the client machine is some embedded device, this can make a substantial difference. That's why I think that a non-blocking client might be more beneficial to the user than an async client: if there were no async support, the user could just use his own executor around the client API; but if there is no non-blocking client, the user cannot work around that.

>
I still fail to see your point.

What you described can be done even today by the combination of e.g. Jersey + Grizzly (or GlassFish, which uses Grizzly
for I/O). Still, the I/O connection management is not part of the JAX-RS framework's job; it is typically done by a
lower layer used by the JAX-RS framework.
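To make the comparison concrete, the executor-around-the-client workaround you mention could look roughly like the sketch below, with plain java.net.URL fetching standing in for the client API and made-up URLs; the size of the thread pool is exactly the overhead a non-blocking client would avoid:

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFetch {

    public static void main(String[] args) throws Exception {
        // A small, bounded pool fetches the 100 items without one thread per request.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<byte[]>> results = new ArrayList<Future<byte[]>>();

        for (int i = 1; i <= 100; i++) {
            final URL itemUrl = new URL("http://example.com/orders/42/items/" + i);
            results.add(pool.submit(new Callable<byte[]>() {
                public byte[] call() throws Exception {
                    InputStream in = itemUrl.openStream();
                    try {
                        return readFully(in);
                    } finally {
                        in.close();
                    }
                }
            }));
        }

        for (Future<byte[]> result : results) {
            byte[] data = result.get(); // blocks until this particular item has arrived
            System.out.println("got " + data.length + " bytes");
        }
        pool.shutdown();
    }

    private static byte[] readFully(InputStream in) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }
}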

Marek
>>
>> Thanks,
>> Marek
>>
>>>
>>> Regards
>>> Markus
>>>
>>>> -----Original Message-----
>>>> From: Marek Potociar [mailto:marek.potociar_at_oracle.com]
>>>> Sent: Wednesday, July 20, 2011 20:50
>>>> To: jsr339-experts_at_jax-rs-spec.java.net
>>>> Subject: [jsr339-experts] JAX-RS 2.0 API for asynchronous server-side
>>>> request processing
>>>>
>>>> Hello experts,
>>>>
>>>> As announced last week, we have moved the asynchronous server-side
>>>> processing API in front of some other topics in the schedule. I have
>>>> written up a wiki page[1] discussing the problem domain in which I am
>>>> also including some motivating examples, initial requirements and open
>>>> topics to start the expert discussion.
>>>>
>>>> Please read through the wiki page[1] and send your thoughts to the EG
>>>> list. Also, feel free to add new sections, reference links, requirements
>>>> or open topics to the document, just make sure to mark the additions
>>>> with your name.
>>>>
>>>> Let's shoot for finishing the first round of collecting the initial EG
>>>> feedback on Friday (July 29) next week.
>>>>
>>>> Thank you,
>>>> Marek
>>>>
>>>> [1] http://java.net/projects/jax-rs-spec/pages/AsyncServerProcessingModel
>>>
>