The plot thickens... it seems this only happens with the default
HeapMemoryManager; with the Pooled/ByteBuffer managers things look good.
Is there an inherent incompatibility between the ThreadLocal caching of
buffers used by PooledByteBuffer and the write()->onComplete()->write()
chain used for streaming response chunks?
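To illustrate the failure mode I suspect, here is a minimal toy model (none of these classes are Grizzly's; the queue and pool are stand-ins): if write() returns before the bytes are actually consumed, and the pooled buffer is recycled for the next chunk in the meantime, the deferred flush sees the same buffer twice and the earlier chunk's data is lost.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model (NOT Grizzly code) of reusing a pooled/thread-local buffer
// before an asynchronous write has been flushed.
public class BufferReuseDemo {

    public static String run() {
        // Writes are only queued here; the bytes are copied later, at flush
        // time, the way an async queue writer may defer the network write.
        Queue<ByteBuffer> pending = new ArrayDeque<>();
        StringBuilder wire = new StringBuilder();

        // One buffer handed out for every chunk, modelling a thread-local
        // pool that recycles the buffer as soon as write() returns.
        ByteBuffer pooled = ByteBuffer.allocate(4);

        for (String chunk : new String[] {"AAAA", "BBBB"}) {
            pooled.clear();
            pooled.put(chunk.getBytes());
            pending.add(pooled); // "write()" returns before bytes are consumed
        }

        // The deferred flush now sees the SAME buffer twice, holding only
        // the last chunk's content, so the first chunk's data is gone.
        for (ByteBuffer buf : pending) {
            buf.flip();
            while (buf.hasRemaining()) {
                wire.append((char) buf.get());
            }
        }
        return wire.toString();
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints "BBBBBBBB" instead of "AAAABBBB"
    }
}
```

The corrupted-output pattern (later chunks' bytes where earlier chunks should be) looks a lot like what the download example produces.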
Wondering if there is an alternative way to do response chunking, or
whether the default MemoryManager should be changed so that this typical
use case is supported correctly by Grizzly out of the box...
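For reference, here is a sketch of how the manager can be swapped when building the transport, assuming Grizzly 2.3's builder API (it requires the Grizzly jars on the classpath, so treat the exact names as approximate):

```java
import org.glassfish.grizzly.memory.PooledMemoryManager;
import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.nio.transport.TCPNIOTransportBuilder;

// Replace the default HeapMemoryManager with PooledMemoryManager when
// building the transport (configuration sketch, not tested here).
TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance()
        .setMemoryManager(new PooledMemoryManager())
        .build();
```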
Dan
On Tue, Jan 6, 2015 at 10:47 PM, Daniel Feist <dfeist_at_gmail.com> wrote:
> Hi,
>
> I've found what I think is something weird going on with the http
> download example included in the source tree. If you run it with a
> payload larger than the chunk size of 1KB and compare the source file
> (read by the Server) and the target file (saved to disk by the Client),
> they are not the same. The file received has some content but then is
> nearly all 0's. I've tried this with a number of different payload
> sizes and see roughly the same thing. I'm still looking into the root
> cause, and I'll let you know what I find, but it would be interesting
> to hear if anyone else has seen this before or has any thoughts on
> where to look.
>
> Is this the recommended approach for sending chunked responses with
> Grizzly? Or rather should org.glassfish.grizzly.http.io.OutputBuffer
> be used somehow?
>
> thanks,
> Dan