On 01/01/2015 11:54, Greg Wilkins wrote:
> 
> 
> On 1 January 2015 at 12:34, Mark Thomas <markt_at_apache.org> wrote:
> 
>     On 01/01/2015 10:47, Greg Wilkins wrote:
>     Thanks for listing these. Any thoughts on how to express this? I guess
>     some options in web.xml.
> 
> 
> How old fashioned of you!
> 
> I thought we'd do it the way we do everything else in the servlet spec:
> a combination of web.xml, xml fragments, annotations and an API with
> strange and mysterious rules for combining such mechanisms that will
> only finally be clarified in version 5.1   :)
> 
> 
> 
> 
>     >   * Expect 100 handling
> 
>     Any response > 299 hits an ambiguity in the HTTP/1.1 protocol. Once the
>     server sends the response it has no way of knowing if the client has
>     received and processed that response before the client sends the next
>     packet. Therefore, the server cannot tell if the next packet is the
>     start of a new request or the request body for the current request. This
>     opens up the opportunity for request smuggling. While the spec doesn't
>     state this, we currently close the connection for any response with a
>     response code > 299.
> 
> 
> Indeed - after extensive research and consultation with the IETF WG, the
> recommended resolution, as I understand it, is that when Expect: 100-continue
> is sent there are only two options in HTTP/1.1: expect the body, or send
> the response and close the connection.
> 
> i.e. either 100 Continue is sent (Jetty uses the act of calling
> getInputStream as the trigger that the servlet, after inspecting the
> headers, is willing to consume the body) and the body is then
> expected - OR a normal response is sent with Connection: close.
Agreed. Any reason not to document this as a required behaviour in the
Servlet spec?
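For reference, the servlet-side shape of that would be roughly the
following (a sketch only; the getInputStream() trigger is the Jetty
behaviour you describe rather than anything the spec currently requires,
and the servlet class and header check are purely illustrative):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.ServletInputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Illustrative only: the two legal outcomes for a request that
    // carries Expect: 100-continue.
    public class UploadServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Inspect the headers before touching the body.
            String type = req.getContentType();
            if (type == null || !type.startsWith("application/octet-stream")) {
                // No 100 is sent; the only safe option is for the container
                // to send this response with Connection: close.
                resp.sendError(HttpServletResponse.SC_EXPECTATION_FAILED);
                return;
            }
            // Calling getInputStream() is (in Jetty) the trigger for the
            // interim 100 Continue; the client then sends the body.
            ServletInputStream in = req.getInputStream();
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) {
                // consume the body
            }
            resp.setStatus(HttpServletResponse.SC_OK);
        }
    }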
> You cannot withhold the 100, expect the client to skip the body, and
> then read a new request.
> 
> This is not a problem for HTTP/2, which also supports 100 Continue but
> does not have these message boundary problems.
> 
>  
> 
>     >   * Use chunking or EOF for unknown response lengths
> 
>     If you don't use chunking you can't tell the difference between a
>     completed response and one that was terminated part way through. We
>     always use chunking to enable clients to differentiate between these
>     two cases.
> 
>     I think it would be helpful to clarify the expected behaviour for the
>     above edge cases and I offer up Tomcat's current behaviour as a starting
>     point.
> 
> 
> Agreed, that is the most reasonable default.  Yet we do get a steady
> stream of users with poor clients that need chunking turned off.  With
> Jetty you can set Connection: close, which allows chunking to be avoided.
> 
> I'm a bit on the fence about this, as clients really should support
> chunking.  But if we want to help with broken clients, a portable
> option to avoid chunking would be useful.
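For reference, the only workarounds at the moment are to set a
Content-Length up front, or (in Jetty, as you describe) to set
Connection: close. A rough sketch, with render() standing in for
whatever produces the body:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class NoChunkingServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            byte[] body = render(req);               // length known up front
            resp.setContentType("text/plain");
            resp.setContentLength(body.length);      // no chunking required
            resp.getOutputStream().write(body);

            // When the length is not known in advance, the non-portable
            // escape hatch is the Jetty behaviour described above:
            //   resp.setHeader("Connection", "close");
            // i.e. an EOF-delimited response with no chunking, but then a
            // truncated response is indistinguishable from a complete one.
        }

        private byte[] render(HttpServletRequest req) {  // illustrative only
            return "hello".getBytes();
        }
    }

Neither of those is really portable.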
An additional method on the response that has to be called before the
response is committed, otherwise an ISE is thrown?

public void setChunkingEnabled(boolean enabled) ?

It would default to true (assuming an HTTP/1.1 request) and containers
could provide container-specific options for changing the default.
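Usage would be something like this (a hypothetical fragment, obviously,
since the method is only a proposal):

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Must be called before the response is committed,
        // otherwise an IllegalStateException is thrown.
        resp.setChunkingEnabled(false);    // proposed method, not in any spec
        resp.setContentType("text/plain");
        resp.getWriter().println("body of unknown length for a broken client");
        // With chunking disabled and no Content-Length set, the container
        // would have to fall back to an EOF-delimited response
        // (Connection: close) on HTTP/1.1.
    }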
Mark