[jsr369-experts] Re: [servlet-spec users] Re: HTTP 2.0 specific configuration parameters in the API?

From: Greg Wilkins <gregw_at_intalio.com>
Date: Thu, 25 Sep 2014 04:56:45 +1000

On 23 September 2014 04:11, Wenbo Zhu <wenboz_at_google.com> wrote:

> IMO - there are two styles to support flow-control for any async API, on
> either the client or server side.
> The first style requires the client and server to handle effectively what
> TCP does. There are many issues, e.g. the API is complicated, and worse,
> per-stream flow control isn't necessarily what the application wants, and
> it's very hard to identify an optimal window size, either at run-time or at
> design time.
> The 2nd style: for reads, it requires the application to signal to the
> "I/O" layer whenever the application is ready for more data; and for
> writes, it requires the I/O layer to signal to the application whenever the
> I/O is ready for more data. Whether the flow-control done at the
> stream-level (as transport protocols) or at the process level is
> transparent to the API/application.
> Lastly it is impossible to have transparent flow-control. E.g. the fact
> that the WebSocket (client) API lacks flow-control can't be remedied at the
> I/O or protocol level.


I don't think it is workable to allow applications to have undue influence
over flow control. Essentially flow control is a fairness mechanism to
ensure that scarce resources are shared between competing concerns.
Unfortunately application developers/deployers are not the most objective
parties to determine relative priorities of traffic and most will not
voluntarily give up capacity so that other potentially unrelated concerns
can proceed.

Thus I think any API that allows applications to set window sizes could
expect to see Integer.MAX_VALUE passed frequently.

I think that transparent, or at least mostly transparent, flow control is
our only real option. If an application reads or writes more data than is
available in either the TCP windows or the h2 windows, it will simply block
(or asynchronously wait) until it can proceed.
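
To make the "block or asynchronously wait" behaviour concrete, here is a
minimal sketch (my own invention, not the Servlet or Jetty API) that models
a stream window as a semaphore of available bytes: a write larger than the
remaining window simply blocks until the peer's window updates replenish it.

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch: a flow-control window as a counting semaphore.
// The application never sees the window; it just blocks in write().
class StreamWindow {
    private final Semaphore permits;

    StreamWindow(int initialSize) {
        permits = new Semaphore(initialSize);
    }

    // Called on the application's write path; blocks until the whole
    // chunk fits in the window.
    void write(int len) {
        permits.acquireUninterruptibly(len);
        // ... actually frame and send the bytes here ...
    }

    // Called when a WINDOW_UPDATE arrives from the peer.
    void windowUpdate(int len) {
        permits.release(len);
    }

    int available() {
        return permits.availablePermits();
    }
}
```

An asynchronous variant would park a callback instead of blocking, but the
shape is the same: the window is enforced below the application.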

But that is not to say that applications might not be able to contribute
some information that allows a more intelligent implementation of flow
control. There are a few parameters that a flow control algorithm can use:

   - absolute size of the connection window, i.e. how much resource the
   container is prepared to commit to a single connection.
   - absolute size of the stream window, i.e. how much resource the
   container is prepared to commit to a single request/response cycle.
   - window hysteresis, i.e. how eager the container should be to replenish
   a declining window. Should it send window updates on every byte consumed,
   or should it let the window drain to some size/time constraint before
   replenishing?
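
As an illustration of the hysteresis parameter, here is a hypothetical
sketch (the names and the half-window threshold are arbitrary choices of
mine, not anything specified): the container only emits a WINDOW_UPDATE
once consumption crosses half the window, rather than on every byte.

```java
// Hypothetical sketch of window-update hysteresis: batch consumed bytes
// and replenish in larger, less frequent increments.
class WindowUpdater {
    private final int windowSize;
    private int consumedSinceUpdate;

    WindowUpdater(int windowSize) {
        this.windowSize = windowSize;
    }

    // Returns the WINDOW_UPDATE increment to send now, or 0 to stay quiet.
    int onConsumed(int bytes) {
        consumedSinceUpdate += bytes;
        if (consumedSinceUpdate >= windowSize / 2) {
            int update = consumedSinceUpdate;
            consumedSinceUpdate = 0;
            return update;
        }
        return 0;
    }
}
```

A time-based constraint could be layered on top so a quiet stream still
gets replenished eventually.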

Factors that inform how these parameters are set include:

   - expected and/or enforced max/mean connections per server.
   - expected and/or enforced max/mean concurrent streams per connection
   - max/mean request/response body size
   - max/mean write/read size
   - network/connection capacity
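
As a strawman for how such factors might feed into the parameters, a
container could derive window sizes with a heuristic like the following
(the names and the divide/clamp choices are invented purely for
illustration):

```java
// Hypothetical heuristic: split a buffer budget across expected
// connections, then give each stream its fair share of the connection
// window, clamped by the largest body it could ever need.
class WindowSizer {
    static int connectionWindow(long memoryBudget, int expectedConnections) {
        return (int) Math.min(Integer.MAX_VALUE,
                              memoryBudget / expectedConnections);
    }

    static int streamWindow(int connectionWindow,
                            int expectedConcurrentStreams,
                            int maxResponseBody) {
        // A stream never needs more than the whole body, nor more than
        // its fair share of the connection window.
        return Math.min(maxResponseBody,
                        connectionWindow / Math.max(1, expectedConcurrentStreams));
    }
}
```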

To discover these factors, there are a number of approaches that can be
taken:

   1. These can be considered entirely a matter for deployers, who should
   know their deployed applications and thus the factors above, so they can
   set appropriate flow control parameters.
   2. Applications can declare values for the factors above that can be
   used to heuristically set the flow control parameters. This will involve
   trusting that applications will declare realistic values.
   3. Containers can measure the factors of the deployed application(s) and
   dynamically adjust the flow control parameters.
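
A sketch of what 3. might look like (entirely hypothetical; the moving
average and the 4x multiplier are arbitrary): the container tracks the mean
observed read size and nudges the stream window toward a small multiple of
it.

```java
// Hypothetical sketch of dynamic adjustment: an exponential moving
// average of read sizes drives the advertised stream window.
class AdaptiveWindow {
    private static final double ALPHA = 0.25; // EMA smoothing (arbitrary)
    private double meanReadSize;
    private int window;

    AdaptiveWindow(int initialWindow) {
        this.window = initialWindow;
        this.meanReadSize = initialWindow;
    }

    void onRead(int bytes) {
        meanReadSize = ALPHA * bytes + (1 - ALPHA) * meanReadSize;
        // Aim for roughly four mean-sized reads in flight (arbitrary).
        window = (int) Math.max(1, 4 * meanReadSize);
    }

    int window() {
        return window;
    }
}
```

Something like this would let the container shrink windows for trickling
streams and grow them for bulk transfers, without the application declaring
anything.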

Personally, I'm not sure we know enough about flow control yet to do much
more than 1. Ideally we would work towards 3., but I'm as yet unsure whether
we will need any values that an application may declare in 2.


Greg Wilkins <gregw_at_intalio.com>
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.