Hello,
I have run some experiments with a simple GET /staticfile.txt.
The trials were done with GF 3.1.2, with compression set to "forced".
HTTP request headers:
Accept-Encoding: gzip
Range: bytes=10-20
Response headers (extract):
Content-Range: bytes 10-20/13244
Transfer-Encoding: chunked
Content-Encoding: gzip
The default servlet extracts the 11 bytes from the file and compresses them.
If I gunzip the returned bytes, I get back the 11-byte slice
of my file.
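To illustrate, here is a small standalone Java sketch of the check I did
(the URL and the local copy of the file are placeholders, not part of the
actual setup):

import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.*;
import java.util.Arrays;
import java.util.zip.GZIPInputStream;

public class RangeGzipCheck {
    public static void main(String[] args) throws IOException {
        // Placeholder URL and placeholder local copy of the served file.
        URL url = new URL("http://localhost:8080/staticfile.txt");
        Path localCopy = Paths.get("staticfile.txt");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept-Encoding", "gzip");
        conn.setRequestProperty("Range", "bytes=10-20");

        // Gunzip the returned body.
        ByteArrayOutputStream decompressed = new ByteArrayOutputStream();
        try (InputStream in = new GZIPInputStream(conn.getInputStream())) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                decompressed.write(buf, 0, n);
            }
        }

        // Bytes 10-20 (inclusive) of the original file: 11 bytes.
        byte[] original = Files.readAllBytes(localCopy);
        byte[] expectedSlice = Arrays.copyOfRange(original, 10, 21);

        // With GF 3.1.2 and compression "forced", this prints true.
        System.out.println("slice matches: "
                + Arrays.equals(decompressed.toByteArray(), expectedSlice));
    }
}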
However, this is probably not the "usual" behaviour:
in this case, it seems that the correct implementation
would be to gzip the whole file first, then return bytes
10-20 of the gzipped data.
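To make that alternative concrete, here is a small standalone Java sketch
of "gzip first, then slice" (the file name is a placeholder; this only
illustrates the idea and is not GlassFish or Apache code):

import java.io.*;
import java.nio.file.*;
import java.util.Arrays;
import java.util.zip.GZIPOutputStream;

public class GzipThenRange {
    public static void main(String[] args) throws IOException {
        byte[] original = Files.readAllBytes(Paths.get("staticfile.txt"));

        // 1. Compress the whole entity first.
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(compressed)) {
            gz.write(original);
        }
        byte[] gzipped = compressed.toByteArray();

        // 2. Apply the range to the compressed representation.
        //    Content-Range would be "bytes 10-20/<total>", where the
        //    total is the compressed size, not the file size.
        byte[] body = Arrays.copyOfRange(gzipped, 10, 21);

        System.out.println("Content-Range: bytes 10-20/" + gzipped.length);
        System.out.println("Content-Length: " + body.length);
        // Note that "body" alone cannot be gunzipped: it is a slice
        // taken from the middle of the gzip stream.
    }
}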
Here is what Apache 2.2.22 returns (although Apache is
not the reference implementation on this point):
Content-Encoding: gzip
Content-Range: bytes 10-20/120
Content-Length: 11
Note the total content length (120), which is much smaller than my file length.
The 11 bytes cannot be decoded by gunzip on their own, as they come from the
middle of the gzip stream.
From what I understand of HTTP/1.1, this way of gzipping the range
would be conformant if I used the transfer-coding headers:
TE: gzip, chunked
and if the server answered with the following header:
Transfer-Encoding: gzip, chunked
Here are some relevant links:
http://forum.nginx.org/read.php?2,209738,209817
http://stackapps.com/questions/916/why-content-encoding-gzip-rather-than-transfer-encoding-gzip
https://bugzilla.mozilla.org/show_bug.cgi?id=68517
https://issues.apache.org/bugzilla/show_bug.cgi?id=52860
It appears that Transfer-Encoding support is very poor
in standard browsers (and, I am afraid, in HTTP proxies as well).
But the GF3 response will probably confuse a conformant
HTTP proxy.
Thank you for your attention,
M. Maison