users@jersey.java.net

Re: [Jersey] Async processing & https support

From: Paul Sandoz <Paul.Sandoz_at_Sun.COM>
Date: Mon, 03 Aug 2009 11:33:25 +0200

On Aug 3, 2009, at 11:15 AM, tarjei wrote:

> Hi,
> On 08/03/2009 10:38 AM, Sam Zang wrote:
>>>> I would like to know if there is async processing support
>>>> in Jersey.
>>> What type of support do you want?
>>> Comet support using Jersey is being defined in the Atmosphere
>>> project:
>>> http://atmosphere.dev.java.net
>>> There are also some improvements to the client side we can make for
>>> supporting async. requests (currently only Future is supported) via
>>> callbacks. More on this in a bit.
>>
>> I am thinking of the simpler case. My application does
>>
>> @Path("/mypath")
>> public class MyResource {
>>
>>     @POST
>>     @Consumes("application/json")
>>     @Produces("application/json")
>>     @Path("/checkcashing")
>>     public CheckCashingResponse checkcashing(CheckCashingRequest req)
>>     {
>>         ...
>>     }
>> }
>>
>> The POST request may need to use remote resource
>> so it could take some time.
> Maybe you are looking for something like:
>
> client: POST /someresource (post contains some kind of id)
> server: 100: Continue
> (client waits x ms , server spawns a future that does the work)
> client: POST /someresource (post contains some kind of id)
> server: 100: Continue
> (server is not done yet)
> client: POST /someresource (post contains some kind of id)
> server: 200: Ok + reply
>
> ?
>
>> I am looking for a way to suspend the request and
>> resume it when the processing is done.
>
> When should the request return? What is async about this? Remember
> that resources are not singletons but are created for every request.
>

For certain cases you want to suspend the server-side request, call
it r, so that the thread can be used for processing other requests.

When the information needed to produce a response for r becomes
available, we can resume the server-side request and return the
response.

It is all about making more efficient use of server-side resources.
For example, imagine you have a thread pool of 10 threads for
processing requests. If 10 clients make calls for "long processing"
transactions, then any further client is blocked until one of those
10 requests completes. If it is possible to suspend and resume
connections, better scalability can be achieved.
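
To make the suspend/resume idea concrete, here is a rough sketch in
the @Suspended/AsyncResponse style that JAX-RS 2.0 standardized. It
is purely an illustration, not an API Jersey provides in the version
discussed in this thread; the CheckCashingRequest/CheckCashingResponse
types, the worker pool and callRemoteService are placeholders carried
over from your example.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("/mypath")
public class MyResource {

    // Placeholder beans standing in for the types in the example above.
    public static class CheckCashingRequest { }
    public static class CheckCashingResponse { }

    // Worker pool that performs the slow remote work off the request thread.
    private static final ExecutorService WORKERS = Executors.newFixedThreadPool(10);

    @POST
    @Consumes("application/json")
    @Produces("application/json")
    @Path("/checkcashing")
    public void checkcashing(final CheckCashingRequest req,
                             @Suspended final AsyncResponse r) {
        // The container suspends the request, r; the current thread returns
        // immediately and is free to process other requests.
        WORKERS.submit(new Runnable() {
            public void run() {
                try {
                    // The slow call to the remote resource happens here.
                    CheckCashingResponse result = callRemoteService(req);
                    r.resume(result);   // resume r with the response entity
                } catch (Exception e) {
                    r.resume(e);        // resume r with an error
                }
            }
        });
    }

    // Placeholder for the slow remote call; the real work is elided.
    private CheckCashingResponse callRemoteService(CheckCashingRequest req) {
        return new CheckCashingResponse();
    }
}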


As you indicate, there are other ways to achieve this that involve
the client. One can return a 202 Accepted response with a URI to a
resource from which the information can be retrieved when it is
ready. This requires the client to poll, and perhaps delete the
resource after it has processed the information. There are cases
where this is very useful, e.g. long-running transactions that may
fail, where you do not want the client waiting on a connection.
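
A minimal sketch of that 202 Accepted pattern with plain JAX-RS (the
/jobs path, the in-memory result map and the worker thread below are
illustrative assumptions, not an existing Jersey facility):

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

@Path("/jobs")
public class JobResource {

    // Illustrative in-memory store; a real service would use something durable.
    private static final ConcurrentMap<String, String> RESULTS =
            new ConcurrentHashMap<String, String>();

    @POST
    public Response submit(final String body, @Context UriInfo uriInfo) {
        final String id = UUID.randomUUID().toString();

        // Kick off the long-running transaction on another thread (sketch only).
        new Thread(new Runnable() {
            public void run() {
                RESULTS.put(id, doLongRunningWork(body));
            }
        }).start();

        // 202 Accepted plus the URI the client should poll for the outcome.
        return Response.status(Response.Status.ACCEPTED)
                .location(uriInfo.getAbsolutePathBuilder().path(id).build())
                .build();
    }

    @GET
    @Path("{id}")
    public Response poll(@PathParam("id") String id) {
        String result = RESULTS.get(id);
        if (result == null) {
            // Not finished yet: the client should try again later.
            return Response.status(Response.Status.ACCEPTED).build();
        }
        return Response.ok(result).build();
    }

    @DELETE
    @Path("{id}")
    public Response delete(@PathParam("id") String id) {
        // The client may delete the resource once it has processed the result.
        RESULTS.remove(id);
        return Response.status(Response.Status.NO_CONTENT).build();
    }

    // Placeholder for the slow transaction itself.
    private static String doLongRunningWork(String body) {
        return "result for: " + body;
    }
}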

Paul.