users@jersey.java.net

[Jersey] Re: Experiences with HTTP pipelining?

From: Oleksiy Stashok <oleksiy.stashok_at_oracle.com>
Date: Mon, 28 Mar 2011 10:50:03 -0700

On 03/28/2011 09:50 AM, Tatu Saloranta wrote:
> On Mon, Mar 28, 2011 at 4:44 AM, Casper Bang<casper.bang_at_gmail.com> wrote:
>> Hello fellow Jersey users,
>>
>> In designing a REST resource for uploading binary telemetric data,
>> I'll be in a situation of consuming potentially 30+ small requests a
>> second over a period of a minute or two. In the traditional WS-*
>> world, I would've used more coarse-grained wrappers, and although
>> that's still possible I guess, it doesn't give me some of the nice
>> properties of REST (fine-grained error reporting, easy to reason
>> about, test, etc.).
>>
>> I am not so much concerned about saturating the endpoint, as I'd
>> simply use a thread pool. I am more concerned about the overhead
>> associated with opening so many connections. Are there mechanisms in
>> place for supporting HTTP pipelining in Jersey, or is this a
>> transparent issue only relevant to the container and http client? Any
>> users out there who have investigated the issue of letting the
>> transportation layer deal with abstracting many small logical requests
>> to fewer larger ones?
> Unless those are fully concurrent requests, HTTP 1.1 connection reuse
> should work just fine in avoiding the need to open that many
> connections. This is implemented at a lower level by the servlet
> container (Jetty or Tomcat) on the server side, and by the HTTP client
> library on the client side.
> Connection reuse is about as simple as it sounds: after a logical
> request completes, the physical connection may be reused.
> There are usually a couple of settings to define things like how long
> an idle connection remains open, and for how many requests a
> connection can be reused.
>
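To make the connection-reuse point concrete, here is a minimal sketch using the JDK's built-in `java.net.http.HttpClient` (Java 11+). The `/telemetry` endpoint and the throwaway local server below are hypothetical stand-ins for the real resource; the point is that one shared client keeps the HTTP/1.1 connection alive across sequential requests, so many small POSTs do not each pay the TCP setup cost:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;

public class ReuseSketch {
    // One shared client: its internal pool keeps the HTTP/1.1 connection
    // alive (keep-alive) between sequential requests to the same host.
    static final HttpClient CLIENT = HttpClient.newBuilder()
            .version(HttpClient.Version.HTTP_1_1)
            .build();

    static int post(URI uri, byte[] sample) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(uri)
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofByteArray(sample))
                .build();
        return CLIENT.send(req, HttpResponse.BodyHandlers.discarding()).statusCode();
    }

    // Demo against a throwaway local endpoint standing in for the real
    // telemetry resource; returns the status code of each upload.
    static List<Integer> demo() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/telemetry", exchange -> {
            exchange.sendResponseHeaders(204, -1); // accepted, no body
            exchange.close();
        });
        server.start();
        try {
            URI uri = URI.create("http://127.0.0.1:"
                    + server.getAddress().getPort() + "/telemetry");
            List<Integer> statuses = new ArrayList<>();
            for (int i = 0; i < 3; i++) {
                statuses.add(post(uri, new byte[] {1, 2, 3}));
            }
            return statuses;
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // [204, 204, 204]
    }
}
```

The idle-timeout and max-requests-per-connection settings Tatu mentions would live on the container side (e.g. Jetty/Tomcat connector configuration), not in this client code.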
> HTTP 1.1 also defines HTTP pipelining, which would allow using a
> single physical connection for multiple logical requests, sort of
> concurrently (i.e. new requests may be sent before the matching
> responses have been received). However, HTTP pipelining is not
> supported by many libraries, and there isn't general consensus on
> whether it makes sense to support it at all.
> There are complexities in error handling, as well as potential for
> added latency, since responses must be sent in the same order as the
> requests.
>
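For concreteness, here is a toy sketch of what pipelining means on the wire, using nothing beyond the JDK and a hand-rolled single-connection server (both are illustrative assumptions, not anything Jersey or Grizzly provides out of the box). The client writes every request before reading any response, and HTTP/1.1 obliges the server to answer them in request order:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class PipeliningSketch {

    // Issues `count` pipelined GETs on one socket against a toy local
    // server, returning the response bodies in arrival order.
    static List<String> pipelinedGet(int count) throws Exception {
        ServerSocket server = new ServerSocket(0);
        Thread serverThread = new Thread(() -> serve(server, count));
        serverThread.start();

        List<String> bodies = new ArrayList<>();
        try (Socket socket = new Socket("127.0.0.1", server.getLocalPort())) {
            OutputStream out = socket.getOutputStream();
            // Pipelining: send every request before reading any response.
            for (int i = 0; i < count; i++) {
                out.write(("GET /sample/" + i + " HTTP/1.1\r\n"
                        + "Host: localhost\r\n\r\n")
                        .getBytes(StandardCharsets.US_ASCII));
            }
            out.flush();
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    socket.getInputStream(), StandardCharsets.US_ASCII));
            // HTTP/1.1 requires responses in the same order as requests.
            for (int i = 0; i < count; i++) {
                int length = 0;
                String line;
                while (!(line = in.readLine()).isEmpty()) { // status + headers
                    if (line.toLowerCase().startsWith("content-length:")) {
                        length = Integer.parseInt(line.substring(15).trim());
                    }
                }
                char[] body = new char[length];
                for (int read = 0; read < length; ) { // handle partial reads
                    int n = in.read(body, read, length - read);
                    if (n < 0) break;
                    read += n;
                }
                bodies.add(new String(body));
            }
        }
        serverThread.join();
        server.close();
        return bodies;
    }

    // Toy single-connection server: consumes each request, answers in order.
    static void serve(ServerSocket server, int count) {
        try (Socket s = server.accept()) {
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    s.getInputStream(), StandardCharsets.US_ASCII));
            OutputStream out = s.getOutputStream();
            for (int i = 0; i < count; i++) {
                String line;
                while ((line = in.readLine()) != null && !line.isEmpty()) {
                    // skip the request line and headers
                }
                String body = "response-" + i;
                out.write(("HTTP/1.1 200 OK\r\nContent-Length: " + body.length()
                        + "\r\n\r\n" + body).getBytes(StandardCharsets.US_ASCII));
            }
            out.flush();
        } catch (IOException ignored) {
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(pipelinedGet(2)); // [response-0, response-1]
    }
}
```

The in-order requirement is exactly where the head-of-line latency Tatu mentions comes from: a slow first response delays every response queued behind it.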
> So depending on the latencies involved, it is not always necessary to
> worry too much about connection setup overhead. There are other
> potential benefits from batching, so app/framework-level batching
> sometimes makes sense. But as with pipelining, it does complicate
> things, and it is rather difficult to do nicely in a generic way. This
> is why batching is most often done at the application level.
>
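A sketch of the application-level batching Tatu describes, assuming a hypothetical fixed batch size chosen by the application: many small telemetry samples are grouped into fewer coarse-grained payloads, each of which would then be posted as one request:

```java
import java.util.ArrayList;
import java.util.List;

public class TelemetryBatcher {

    // Groups many small samples into fewer coarse-grained batches; each
    // batch would then be serialized as a single request body.
    static <T> List<List<T>> batch(List<T> samples, int batchSize) {
        if (batchSize <= 0) {
            throw new IllegalArgumentException("batchSize must be positive");
        }
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < samples.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                    samples.subList(i, Math.min(i + batchSize, samples.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> samples = new ArrayList<>();
        for (int i = 0; i < 65; i++) {
            samples.add(i);
        }
        // 65 samples at batch size 30 -> 3 requests instead of 65.
        for (List<Integer> b : batch(samples, 30)) {
            System.out.println("batch of " + b.size());
        }
    }
}
```

The trade-off Tatu notes applies here too: a failed batch needs per-sample error handling inside it, which is part of why this tends to stay at the application level rather than in a generic framework.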

I agree with Tatu. I just wanted to add that Grizzly also supports HTTP
pipelining, but frankly I don't remember many products that use it (if
I'm not mistaken, only the MS web services client).

WBR,
Alexey.