users@grizzly.java.net

Re: Upload a large file without oom with Grizzly

From: Ryan Lubke <ryan.lubke_at_oracle.com>
Date: Fri, 13 Sep 2013 08:44:53 -0700

Sébastien Lorber wrote:
> I have retrieved your last change (connection discrimination) *and it
> seems to be a great success.*
> I don't really understand your fix but I guess the feeding was
> initialized for the wrong connection or something...
The problem was my rushing :) But yes, that's the gist of it (that's
why you were seeing the listener invoked multiple times).
>
> All the problems of the InputStream being consumed too early seem to
> have vanished, and I'm able to upload 60 files of 10 MB concurrently,
> for a footprint of about 200 MB, using pretty large buffers.
>
> For the first time, it's the old test that doesn't pass (with a large
> heap): it seems a lot more sensitive to connection/request timeouts,
> which I had already set pretty high.
>
>
>
> I'll check the behavior according to my thread configuration which is
> currently:
>
> ThreadPoolConfig workerThreadPoolConfig = ThreadPoolConfig.defaultConfig();
> workerThreadPoolConfig.setCorePoolSize(20);
> workerThreadPoolConfig.setMaxPoolSize(100);
> tcpnioTransport.setWorkerThreadPoolConfig(workerThreadPoolConfig);
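>
> For illustration, here is how that same pool config can be plugged into
> the client, as a sketch (the TransportCustomizer hook and the property
> name are assumptions from the 1.7.x Grizzly provider; verify against
> its source):
>
> // Sketch: applying the worker pool config when building the AHC client.
> // Property.TRANSPORT_CUSTOMIZER and the customize() signature are
> // assumptions to check against the provider's source.
> GrizzlyAsyncHttpProviderConfig providerConfig = new GrizzlyAsyncHttpProviderConfig();
> providerConfig.addProperty(
>         GrizzlyAsyncHttpProviderConfig.Property.TRANSPORT_CUSTOMIZER,
>         new TransportCustomizer() {
>             @Override
>             public void customize(TCPNIOTransport transport, FilterChainBuilder filterChainBuilder) {
>                 ThreadPoolConfig workerThreadPoolConfig = ThreadPoolConfig.defaultConfig();
>                 workerThreadPoolConfig.setCorePoolSize(20);
>                 workerThreadPoolConfig.setMaxPoolSize(100);
>                 transport.setWorkerThreadPoolConfig(workerThreadPoolConfig);
>             }
>         });
> AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder()
>         .setAsyncHttpClientProviderConfig(providerConfig)
>         .build();
> AsyncHttpClient client = new AsyncHttpClient(new GrizzlyAsyncHttpProvider(config), config);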
Great! Let us know how it goes.
>
>
>
>
>
> 2013/9/13 Sébastien Lorber <lorber.sebastien_at_gmail.com <mailto:lorber.sebastien_at_gmail.com>>
>
> Ok thanks
>
> On our side, we will try to deploy my integration tests to another
> server which has better connectivity to the API, because from where I
> run them it's pretty slow at the moment (perhaps you should account
> for that in your testing framework?)
>
> I'll also check how this behaves when I upload the same number of
> concurrent files with our previous code (and a large heap!), because
> until now I had done most of my tests with 5-10 threads, while now I
> test with 20-50 threads as it starts to work better :)
> Perhaps the remote API is under load or something...
>
>
> 2013/9/12 Ryan Lubke <ryan.lubke_at_oracle.com <mailto:ryan.lubke_at_oracle.com>>
>
> Okay no problem.
>
> Sébastien Lorber wrote:
>> This won't be possible because these are not unit tests but
>> integration tests.
>>
>> We test this multipart upload against a real external API which
>> requires certificates and is hosted by our customer. We do not have
>> any mock.
>> As you can see, I don't even publish my app package name in the
>> stacks because I was told not to do so...
> Yes, I saw that; I assumed it was just a simple testing framework
> because of it.
>
> We'll roll our own and get back to you ASAP.
>>
>>
>> 2013/9/12 Ryan Lubke <ryan.lubke_at_oracle.com <mailto:ryan.lubke_at_oracle.com>>
>>
>> Would it be possible to share your testing framework?
>>
>> Sébastien Lorber wrote:
>>> Thanks,
>>>
>>>
>>>
>>> So in the end it seems I can run concurrent uploads, but it is
>>> kind of limited: with 10 threads it works fine, while with 20
>>> threads the flush call is sometimes done too early, because the
>>> feed method is not blocking, which produces OOMs in some cases.
>>> The threshold seems to be around 15.
>>>
>>>
>>> I found a solution which seems to work in every situation:
>>> - Use a worker thread pool larger than the number of concurrent requests
>>> - Wait a bit before starting the feeding.
>>>
>>>
>>>
>>> My waiting code is:
>>>
>>> private volatile boolean alreadyFlushed = false;
>>>
>>> @Override
>>> public synchronized void flush() {
>>>     if (alreadyFlushed) {
>>>         return;
>>>     }
>>>     try {
>>>         Thread.sleep(1000);
>>>     } catch (Exception e) {
>>>         e.printStackTrace();
>>>     }
>>>     startFeeding();
>>>     alreadyFlushed = true;
>>> }
>>>
>>>
>>>
>>> With a 35-thread worker pool and this waiting code, I can run 30
>>> concurrent uploads of large files, without having performed any
>>> request before.
>>>
>>>
>>>
>>> If I remove any of these 2 tricks, I get problems.
>>>
>>>
>>>
>>> *1) If I put a low worker thread count:*
>>> I mean, for 30 concurrent uploads, it starts to fail even with
>>> something like 31/32 worker threads.
>>>
>>> Sometimes I get timeout stacks:
>>>
>>> Caused by: java.util.concurrent.TimeoutException: Timeout exceeded
>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider.timeout(GrizzlyAsyncHttpProvider.java:528)
>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$3.onTimeout(GrizzlyAsyncHttpProvider.java:361)
>>> at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:383)
>>> at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:362)
>>> at org.glassfish.grizzly.utils.DelayedExecutor$DelayedRunnable.run(DelayedExecutor.java:158)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>>
>>>
>>> Sometimes I get OOM stacks:
>>>
>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>>> at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>>> at org.glassfish.grizzly.ssl.SSLUtils.allocateOutputBuffer(SSLUtils.java:342)
>>> at org.glassfish.grizzly.ssl.SSLBaseFilter$2.grow(SSLBaseFilter.java:117)
>>> at org.glassfish.grizzly.ssl.SSLConnectionContext.ensureBufferSize(SSLConnectionContext.java:392)
>>> at org.glassfish.grizzly.ssl.SSLConnectionContext.wrap(SSLConnectionContext.java:272)
>>> at org.glassfish.grizzly.ssl.SSLConnectionContext.wrapAll(SSLConnectionContext.java:227)
>>> at org.glassfish.grizzly.ssl.SSLBaseFilter.wrapAll(SSLBaseFilter.java:405)
>>> at org.glassfish.grizzly.ssl.SSLBaseFilter.handleWrite(SSLBaseFilter.java:320)
>>> at org.glassfish.grizzly.ssl.SSLFilter.accurateWrite(SSLFilter.java:263)
>>> at org.glassfish.grizzly.ssl.SSLFilter.handleWrite(SSLFilter.java:143)
>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$SwitchingSSLFilter.handleWrite(GrizzlyAsyncHttpProvider.java:2500)
>>> at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:111)
>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288)
>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206)
>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136)
>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114)
>>> at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
>>> at org.glassfish.grizzly.filterchain.FilterChainContext$1.run(FilterChainContext.java:196)
>>> at org.glassfish.grizzly.filterchain.FilterChainContext.resume(FilterChainContext.java:220)
>>> at org.glassfish.grizzly.ssl.SSLFilter$SSLHandshakeContext.completed(SSLFilter.java:383)
>>> at org.glassfish.grizzly.ssl.SSLFilter.notifyHandshakeComplete(SSLFilter.java:278)
>>> at org.glassfish.grizzly.ssl.SSLBaseFilter.handleRead(SSLBaseFilter.java:275)
>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$SwitchingSSLFilter.handleRead(GrizzlyAsyncHttpProvider.java:2490)
>>> at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288)
>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206)
>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136)
>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114)
>>> at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
>>> at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:546)
>>> at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:113)
>>>
>>>
>>> Another very strange behavior: some of the FileInputStreams used
>>> for the requests are read prematurely, for a reason I can't
>>> understand. Not all of them, but some of the 30 concurrent uploads.
>>> Just after the firing of the 30 upload requests, I can see in my log:
>>>
>>> [Grizzly(1)] INFO com....DocumentService$1:276 - Will close file stream java.io.FileInputStream@61187ada after having read 10632755 bytes
>>> [Grizzly(15)] INFO com....DocumentService$1:276 - Will close file stream java.io.FileInputStream@1ba83dc after having read 10632755 bytes
>>> [Grizzly(13)] INFO com....DocumentService$1:276 - Will close file stream java.io.FileInputStream@7c26e166 after having read 10632755 bytes
>>> [Grizzly(10)] INFO com....DocumentService$1:276 - Will close file stream java.io.FileInputStream@5c982f37 after having read 10632755 bytes
>>> [Grizzly(6)] INFO com....DocumentService$1:276 - Will close file stream java.io.FileInputStream@b43f35f after having read 10632755 bytes
>>> [Grizzly(4)] INFO com....DocumentService$1:276 - Will close file stream java.io.FileInputStream@1a1ee7c0 after having read 10632755 bytes
>>> [Grizzly(9)] INFO com....DocumentService$1:276 - Will close file stream java.io.FileInputStream@6300fba5 after having read 10632755 bytes
>>> [Grizzly(16)] INFO com....DocumentService$1:276 - Will close file stream java.io.FileInputStream@49d88b78 after having read 10632755 bytes
>>> [Grizzly(4)] INFO com....DocumentService$1:276 - Will close file stream java.io.FileInputStream@3b7b65f8 after having read 598016 bytes
>>> sept. 12, 2013 5:54:14 PM org.glassfish.grizzly.filterchain.DefaultFilterChain execute
>>> Avertissement: Exception during FilterChain execution
>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>
>>> This close operation on the file stream is supposed to happen after
>>> more than 30 seconds, because the connection is slow and the file is
>>> 10 MB.
>>>
>>> This generally leads to OOM errors: it seems the whole file gets
>>> loaded in memory for some unknown reason in some cases, but only
>>> when there are not enough worker threads.
>>>
>>>
>>> It's not a big deal, I guess it is "normal" to use at least 30
>>> threads to run 30 concurrent uploads :) It's just that when this
>>> happens, it's not easy to understand why we get OOMs and how that
>>> relates to the AHC Grizzly thread pool.
>>>
>>>
>>>
>>> *2) If I remove that sleep(1000) before feeding, or use a lower value*
>>> This leads to the same kind of behavior, with the InputStream being
>>> read prematurely.
>>>
>>>
>>>
>>>
>>> So I guess the problem is that when the flush() method is called,
>>> the connection always returns canWrite=true, so the blocking never
>>> happens until something changes and the connection starts to block
>>> the feeding.
>>>
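>>> For illustration, the kind of blocking I'd expect, sketched against
>>> Grizzly 2.3's Connection API (canWrite()/notifyCanWrite(WriteHandler)
>>> are assumptions to verify against the Grizzly javadocs):
>>>
>>> import java.io.IOException;
>>> import java.util.concurrent.CountDownLatch;
>>> import java.util.concurrent.TimeUnit;
>>> import org.glassfish.grizzly.Connection;
>>> import org.glassfish.grizzly.WriteHandler;
>>>
>>> // Sketch: block the feeding thread until the connection reports
>>> // room in its async write queue.
>>> static void blockUntilWritable(Connection c) throws IOException {
>>>     if (c.canWrite()) {
>>>         return;
>>>     }
>>>     final CountDownLatch latch = new CountDownLatch(1);
>>>     c.notifyCanWrite(new WriteHandler() {
>>>         @Override
>>>         public void onWritePossible() {
>>>             latch.countDown(); // transport signals write capacity
>>>         }
>>>         @Override
>>>         public void onError(Throwable t) {
>>>             latch.countDown(); // unblock; the next write will surface the error
>>>         }
>>>     });
>>>     try {
>>>         latch.await(30, TimeUnit.SECONDS);
>>>     } catch (InterruptedException ie) {
>>>         Thread.currentThread().interrupt();
>>>         throw new IOException(ie);
>>>     }
>>> }
>>>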
>>> I tried to debug that and here's the connection queue state:
>>>
>>> * connectionQueue = {org.glassfish.grizzly.asyncqueue.TaskQueue@4804}
>>>   * isClosed = true
>>>   * queue = {java.util.concurrent.ConcurrentLinkedQueue@5106} size = 0
>>>   * currentElement = {java.util.concurrent.atomic.AtomicReference@5107} "null"
>>>   * spaceInBytes = {java.util.concurrent.atomic.AtomicInteger@5108} "291331"
>>>   * maxQueueSizeHolder = {org.glassfish.grizzly.nio.NIOConnection$1@5109}
>>>   * writeHandlersCounter = {java.util.concurrent.atomic.AtomicInteger@5110} "0"
>>>   * writeHandlersQueue = {java.util.concurrent.ConcurrentLinkedQueue@5111} size = 0
>>>
>>>
>>>
>>> Do you have any idea what the problem could be?
>>>
>>> This happens whether flush() is called from the init() or the
>>> handshakeComplete() method.
>>>
>>>
>>> 2013/9/12 Ryan Lubke <ryan.lubke_at_oracle.com
>>> <mailto:ryan.lubke_at_oracle.com>>
>>>
>>>
>>>
>>> Sébastien Lorber wrote:
>>>>
>>>>
>>>>
>>>> I noticed something strange.
>>>> On the FileInputStream I have, I've added a log in the close() of
>>>> the stream, which is called once the whole file has been read and
>>>> sent to the feeder.
>>>>
>>>> 1) If I perform a request (i.e. my session init request from the
>>>> previous discussions) before doing my multipart upload, the thread
>>>> that executes the feeding is the thread that fires the request,
>>>> and not a Grizzly worker:
>>>> [Thread-30] INFO Will close file stream java.io.FileInputStream@27fe4315 after having read 1440304 bytes
>>>>
>>>> 2) If I don't do any request before firing the multipart upload,
>>>> the thread that does the feeding is a Grizzly thread-pool worker
>>>> thread:
>>>> [Grizzly(22)] INFO Will close file stream java.io.FileInputStream@59ac4002 after having read 1440304 bytes
>>>>
>>>>
>>>> Is this normal? I would expect a worker thread to always be used,
>>>> and the main applicative thread that performs the request to never
>>>> block. (It's not such an issue for me though; we don't have a
>>>> reactive non-blocking app anyway.)
>>> It's currently expected behavior. Will need to
>>> re-evaluate this based on the new semantics of the
>>> FeedableBodyGenerator.
>>>>
>>>>
>>>> In the 1st case, here's the stack trace when the feed method is
>>>> called:
>>>>
>>>> *at com.ning.http.client.providers.grizzly.FeedableBodyGenerator.initializeAsynchronousTransfer(FeedableBodyGenerator.java:178)*
>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$BodyGeneratorBodyHandler.doHandle(GrizzlyAsyncHttpProvider.java:2210)
>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider.sendRequest(GrizzlyAsyncHttpProvider.java:564)
>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$AsyncHttpClientFilter.sendAsGrizzlyRequest(GrizzlyAsyncHttpProvider.java:913)
>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$AsyncHttpClientFilter.handleWrite(GrizzlyAsyncHttpProvider.java:795)
>>>> at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:111)
>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288)
>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206)
>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136)
>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114)
>>>> at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.write(DefaultFilterChain.java:437)
>>>> at org.glassfish.grizzly.nio.NIOConnection.write(NIOConnection.java:387)
>>>> at org.glassfish.grizzly.nio.NIOConnection.write(NIOConnection.java:361)
>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider.execute(GrizzlyAsyncHttpProvider.java:307)
>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$1.completed(GrizzlyAsyncHttpProvider.java:224)
>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$1.completed(GrizzlyAsyncHttpProvider.java:210)
>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$ConnectionManager.doAsyncTrackedConnection(GrizzlyAsyncHttpProvider.java:2289)
>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider.execute(GrizzlyAsyncHttpProvider.java:244)
>>>> *at com.ning.http.client.AsyncHttpClient.executeRequest(AsyncHttpClient.java:534)*
>>>>
>>>> So it seems this happens when the handshake has already been
>>>> completed by the time *initializeAsynchronousTransfer* is called,
>>>> so we do not go through the HandshakeListener.
>>>>
>>>>
>>>> Note that this also seems to happen for a few threads in the 2nd
>>>> case, but most of the threads are Grizzly workers.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> 2013/9/12 Sébastien Lorber <lorber.sebastien_at_gmail.com <mailto:lorber.sebastien_at_gmail.com>>
>>>>
>>>>
>>>>
>>>> Hi.
>>>>
>>>> Thanks, it seems to work.
>>>>
>>>> I would suggest declaring IOException on the flush method, since
>>>> the feed method is supposed to be called there.
>>>>
>>>>
>>>> My implementation of flush is:
>>>>
>>>> @Override
>>>> public void flush() {
>>>>     Part[] partsArray = parts.toArray(new Part[parts.size()]);
>>>>     try (OutputStream outputStream = createFeedingOutputStream()) {
>>>>         Part.sendParts(outputStream, partsArray, multipartBoundary);
>>>>     } catch (Exception e) {
>>>>         throw new IllegalStateException("Unable to feed the FeedableBodyGenerator", e);
>>>>     }
>>>> }
>>>>
>>>> Is this correct? The OutputStream redirects the bytes written to
>>>> the feed(Buffer) method. (See the sketch below.)
>>>>
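>>>> For reference, a minimal sketch of what such a
>>>> createFeedingOutputStream() can look like (the Buffer wrapping via
>>>> Grizzly's default MemoryManager is an assumption; feed(Buffer, last)
>>>> is the method discussed above):
>>>>
>>>> private OutputStream createFeedingOutputStream() {
>>>>     return new OutputStream() {
>>>>         @Override
>>>>         public void write(int b) throws IOException {
>>>>             write(new byte[]{(byte) b}, 0, 1);
>>>>         }
>>>>
>>>>         @Override
>>>>         public void write(byte[] b, int off, int len) throws IOException {
>>>>             // Copy the written bytes into a Grizzly Buffer and feed it.
>>>>             Buffer buffer = MemoryManager.DEFAULT_MEMORY_MANAGER.allocate(len);
>>>>             buffer.put(b, off, len);
>>>>             buffer.flip();
>>>>             feed(buffer, false); // intermediate chunk
>>>>         }
>>>>
>>>>         @Override
>>>>         public void close() throws IOException {
>>>>             feed(Buffers.EMPTY_BUFFER, true); // signal the last chunk
>>>>         }
>>>>     };
>>>> }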
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> There seems to be a concurrency issue: the upload of 1 file works
>>>> fine, but when using multiple threads I often get the following
>>>> stack:
>>>> Caused by: java.io.IOException: Stream Closed
>>>> at java.io.FileInputStream.readBytes(Native Method)
>>>> at java.io.FileInputStream.read(FileInputStream.java:242)
>>>> at com.google.common.io.CountingInputStream.read(CountingInputStream.java:62)
>>>> at java.io.FilterInputStream.read(FilterInputStream.java:133)
>>>> at java.io.FilterInputStream.read(FilterInputStream.java:107)
>>>> at com.ning.http.multipart.FilePart.sendData(FilePart.java:178)
>>>> at com.ning.http.multipart.Part.send(Part.java:331)
>>>> at com.ning.http.multipart.Part.sendParts(Part.java:397)
>>>>
>>>>
>>>> This is because the flush() method is called multiple times for
>>>> the same request in some cases. I guess this is not supposed to
>>>> happen: as I understand it, the flush() method is supposed to be
>>>> called only once.
>>>>
>>>>
>>>> Using debug logging breakpoints, I get the following:
>>>>
>>>> myapp--api-test 12/09/2013-12:20:46.042 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 0 started
>>>> myapp--api-test 12/09/2013-12:20:46.042 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 1 started
>>>> myapp--api-test 12/09/2013-12:20:46.043 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 2 started
>>>> myapp--api-test 12/09/2013-12:20:46.043 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 3 started
>>>> myapp--api-test 12/09/2013-12:20:46.043 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 4 started
>>>> myapp--api-test 12/09/2013-12:20:46.044 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 5 started
>>>> myapp--api-test 12/09/2013-12:20:46.044 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 6 started
>>>> myapp--api-test 12/09/2013-12:20:46.044 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 7 started
>>>> myapp--api-test 12/09/2013-12:20:46.045 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 8 started
>>>> myapp--api-test 12/09/2013-12:20:46.045 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 9 started
>>>> myapp--api-test 12/09/2013-12:20:46.045 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 10 started
>>>> myapp--api-test 12/09/2013-12:20:46.046 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 11 started
>>>> myapp--api-test 12/09/2013-12:20:46.047 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 12 started
>>>> myapp--api-test 12/09/2013-12:20:46.048 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 13 started
>>>> myapp--api-test 12/09/2013-12:20:46.049 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 14 started
>>>> myapp--api-test 12/09/2013-12:20:46.049 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 15 started
>>>> myapp--api-test 12/09/2013-12:20:46.050 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 16 started
>>>> myapp--api-test 12/09/2013-12:20:46.050 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 17 started
>>>> myapp--api-test 12/09/2013-12:20:46.051 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 18 started
>>>> myapp--api-test 12/09/2013-12:20:46.051 [] [] [main] INFO com.myapp.perf.DocumentUploadPerfIntegrationTest:77 - Thread 19 started
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@417de6ff
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@6c0d6ef7
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@7799b411
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@1c940409
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@480f9510
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@3e888183
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@17840db8
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@2cbad94b
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@64c0a4ae
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@102873a6
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@3d5d9ee8
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@51557949
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@6f95de2f
>>>> Adding handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@346d784c
>>>> Completing and removing handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@7799b411
>>>> *Completing and removing handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@5befaa07*
>>>> *Completing and removing handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@5befaa07*
>>>> *Completing and removing handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@5befaa07*
>>>> *Completing and removing handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@102873a6*
>>>> *Completing and removing handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@102873a6*
>>>> Completing and removing handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@346d784c
>>>> Completing and removing handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@64c0a4ae
>>>> Completing and removing handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@674735a8
>>>> Completing and removing handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@64c0a4ae
>>>> Completing and removing handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@480f9510
>>>> Completing and removing handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@6c0d6ef7
>>>> Completing and removing handshake listener for com.ning.http.client.providers.grizzly.FeedableBodyGenerator@1daf8fd8
>>>>
>>>>
>>>> As you can see, the same HandshakeListener seems to be called
>>>> multiple times for the same FeedableBodyGenerator.
>>>>
>>>> When this happens, the stack is:
>>>>
>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator$1.onComplete(FeedableBodyGenerator.java:198) ~[async-http-client-1.7.20-SNAPSHOT.jar:na]
>>>> at org.glassfish.grizzly.ssl.SSLBaseFilter.notifyHandshakeComplete(SSLBaseFilter.java:880) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.ssl.SSLFilter.notifyHandshakeComplete(SSLFilter.java:282) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.ssl.SSLBaseFilter.handleRead(SSLBaseFilter.java:275) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$SwitchingSSLFilter.handleRead(GrizzlyAsyncHttpProvider.java:2490) ~[async-http-client-1.7.20-SNAPSHOT.jar:na]
>>>> at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:546) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:113) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:115) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:55) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:135) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>> at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>
>>>>
>>>>
>>>>
>>>> So I tried with the following code:
>>>>
>>>> private boolean alreadyFlushed = false;
>>>>
>>>> @Override
>>>> public synchronized void flush() {
>>>>     if (alreadyFlushed) {
>>>>         return;
>>>>     }
>>>>     startFeeding();
>>>>     alreadyFlushed = true;
>>>> }
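>>>>
>>>> (An equivalent guard without synchronized, as a sketch, using
>>>> AtomicBoolean.compareAndSet so concurrent callers can't both start
>>>> the feeding:)
>>>>
>>>> private final AtomicBoolean alreadyFlushed = new AtomicBoolean(false);
>>>>
>>>> @Override
>>>> public void flush() {
>>>>     // Only the first caller wins; later or concurrent calls are no-ops.
>>>>     if (alreadyFlushed.compareAndSet(false, true)) {
>>>>         startFeeding();
>>>>     }
>>>> }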
>>>>
>>>> The synchronized version works fine when uploading small files.
>>>>
>>>> But with larger files, I often get TimeoutException stacks for
>>>> some of the threads:
>>>>
>>>> Caused by: java.util.concurrent.TimeoutException
>>>> at org.glassfish.grizzly.impl.SafeFutureImpl$Sync.innerGet(SafeFutureImpl.java:367)
>>>> at org.glassfish.grizzly.impl.SafeFutureImpl.get(SafeFutureImpl.java:274)
>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator$BaseFeeder.block(FeedableBodyGenerator.java:349)
>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator$BaseFeeder.blockUntilQueueFree(FeedableBodyGenerator.java:339)
>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator$BaseFeeder.feed(FeedableBodyGenerator.java:306)
>>>>
>>>>
>>>>
>>>>
>>>> Did I miss something?
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> 2013/9/12 Ryan Lubke <ryan.lubke_at_oracle.com <mailto:ryan.lubke_at_oracle.com>>
>>>>
>>>> Committed another small change. Please
>>>> make sure you're at the latest when you build.
>>>>
>>>> -rl
>>>>
>>>>
>>>> Ryan Lubke wrote:
>>>>> Okay, I've committed another set of
>>>>> refactorings to the FeedableBodyGenerator.
>>>>>
>>>>> For your use case, you should extend
>>>>> FeedableBodyGenerator.SimpleFeeder.
>>>>>
>>>>> Let me know if you run into issues.
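>>>>>
>>>>> A skeleton of that, as a sketch (the constructor and the abstract
>>>>> flush() method are inferred from the implementations shown
>>>>> elsewhere in this thread; the javadocs in the class itself are
>>>>> authoritative):
>>>>>
>>>>> public class MultipartFeeder extends FeedableBodyGenerator.SimpleFeeder {
>>>>>
>>>>>     public MultipartFeeder(FeedableBodyGenerator feedableBodyGenerator) {
>>>>>         super(feedableBodyGenerator);
>>>>>     }
>>>>>
>>>>>     @Override
>>>>>     public void flush() {
>>>>>         // Invoked once the transfer may begin: push the whole body
>>>>>         // through feed(Buffer, last) here.
>>>>>     }
>>>>> }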
>>>>>
>>>>>
>>>>>
>>>>> Sébastien Lorber wrote:
>>>>>> Yes, I think it would, so that I can feed the queue at once.
>>>>>>
>>>>>> One thread will be locked during the feeding for nothing, but
>>>>>> it's not a real problem in my use case.
>>>>>>
>>>>>>
>>>>>> 2013/9/10 Ryan Lubke <ryan.lubke_at_oracle.com <mailto:ryan.lubke_at_oracle.com>>
>>>>>>
>>>>>> Would having a different listener, notified once async
>>>>>> transferring has been started, work better for you?
>>>>>>
>>>>>> Something like:
>>>>>>
>>>>>> onAsyncTransferInitiated() {
>>>>>>     // invoke your feed method
>>>>>> }
>>>>>>
>>>>>> ?
>>>>>>
>>>>>>
>>>>>>
>>>>>> Sébastien Lorber wrote:
>>>>>>> Unfortunately I won't be able to use the Feeder non-blocking
>>>>>>> stuff for now, because of how the multipart request is handled
>>>>>>> in AHC.
>>>>>>>
>>>>>>> Here's my feeding method:
>>>>>>>
>>>>>>> public void feed() throws IOException {
>>>>>>>     Part[] partsArray = parts.toArray(new Part[parts.size()]);
>>>>>>>     try (OutputStream outputStream = createFeedingOutputStream()) {
>>>>>>>         Part.sendParts(outputStream, partsArray, multipartBoundary);
>>>>>>>     } catch (Exception e) {
>>>>>>>         throw new IllegalStateException("Unable to feed the FeedableBodyGenerator", e);
>>>>>>>     }
>>>>>>> }
>>>>>>>
>>>>>>> As you can see, the multipart Parts array can only be pushed to
>>>>>>> the OutputStream; I don't have any way to "pull" the data when
>>>>>>> the canFeed() method is triggered.
>>>>>>>
>>>>>>> But I've seen that there's a com.ning.http.multipart.MultipartBody#read
>>>>>>> that seems to provide a memory-efficient way to pull data from a
>>>>>>> multipart body... (see the sketch below)
>>>>>>>
>>>>>>> I'll see what I come up with.
>>>>>>>
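>>>>>>> Something like this, as a sketch (construction of the
>>>>>>> MultipartBody itself is elided; the read(ByteBuffer) contract,
>>>>>>> bytes read with a negative value at end of body, is an assumption
>>>>>>> to check against the AHC source):
>>>>>>>
>>>>>>> byte[] chunk = new byte[8192];
>>>>>>> ByteBuffer wrapper = ByteBuffer.wrap(chunk);
>>>>>>> long read;
>>>>>>> while ((read = multipartBody.read(wrapper)) >= 0) {
>>>>>>>     if (read > 0) {
>>>>>>>         // Copy the pulled bytes into a Grizzly Buffer and feed them.
>>>>>>>         Buffer buffer = MemoryManager.DEFAULT_MEMORY_MANAGER.allocate((int) read);
>>>>>>>         buffer.put(chunk, 0, (int) read);
>>>>>>>         buffer.flip();
>>>>>>>         feed(buffer, false);
>>>>>>>     }
>>>>>>>     wrapper.clear();
>>>>>>> }
>>>>>>> feed(Buffers.EMPTY_BUFFER, true); // end of body
>>>>>>>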
>>>>>>>
>>>>>>> 2013/9/10 Sébastien Lorber <lorber.sebastien_at_gmail.com <mailto:lorber.sebastien_at_gmail.com>>
>>>>>>>
>>>>>>> It seems the Feeder is highly recommended but not mandatory, so
>>>>>>> I tried without it.
>>>>>>>
>>>>>>> With my existing code it seems there is a synchronization
>>>>>>> problem: the feeding threads get locked on prematureFeed.get(),
>>>>>>> so the Grizzly kernel threads are unable to acquire the lock
>>>>>>> required to enter the initializeAsynchronousTransfer method.
>>>>>>>
>>>>>>> I will try with an implementation of Feeder.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> 2013/9/10 Sébastien Lorber <lorber.sebastien_at_gmail.com <mailto:lorber.sebastien_at_gmail.com>>
>>>>>>>
>>>>>>> Hmmm, it seems I have a problem with one of your Maven plugins.
>>>>>>> I'll try to bypass it, but for info:
>>>>>>>
>>>>>>> ➜ ahc2 git:(ahc-1.7.x) mvn clean install
>>>>>>> [WARNING]
>>>>>>> [WARNING] Some problems were encountered while building the effective settings
>>>>>>> [WARNING] 'profiles.profile[default].repositories.repository.id' must be unique but found duplicate repository with id fullsix-maven-repository @ /home/slorber/.m2/settings.xml
>>>>>>> [WARNING]
>>>>>>> [INFO] Scanning for projects...
>>>>>>> [INFO]
>>>>>>> [INFO] ------------------------------------------------------------------------
>>>>>>> [INFO] Building Asynchronous Http Client 1.7.20-SNAPSHOT
>>>>>>> [INFO] ------------------------------------------------------------------------
>>>>>>> [INFO]
>>>>>>> [INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ async-http-client ---
>>>>>>> [INFO]
>>>>>>> [INFO] --- maven-enforcer-plugin:1.0-beta-1:enforce (enforce-maven) @ async-http-client ---
>>>>>>> [INFO]
>>>>>>> [INFO] --- maven-enforcer-plugin:1.0-beta-1:enforce (enforce-versions) @ async-http-client ---
>>>>>>> [INFO]
>>>>>>> [INFO] --- maven-resources-plugin:2.4.3:resources (default-resources) @ async-http-client ---
>>>>>>> [INFO] Using 'UTF-8' encoding to copy filtered resources.
>>>>>>> [INFO] skip non existing resourceDirectory /home/slorber/Bureau/ahc2/src/main/resources
>>>>>>> [INFO]
>>>>>>> [INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @ async-http-client ---
>>>>>>> [INFO] Compiling 158 source files to /home/slorber/Bureau/ahc2/target/classes
>>>>>>> [INFO]
>>>>>>> *[INFO] --- animal-sniffer-maven-plugin:1.6:check (check-java-1.5-compat) @ async-http-client ---*
>>>>>>> *[INFO] Checking unresolved references to org.codehaus.mojo.signature:java15:1.0*
>>>>>>> *[ERROR] Undefined reference: java/io/IOException.<init>(Ljava/lang/Throwable;)V in /home/slorber/Bureau/ahc2/target/classes/com/ning/http/client/providers/grizzly/FeedableBodyGenerator.class*
>>>>>>> [INFO] ------------------------------------------------------------------------
>>>>>>> [INFO] BUILD FAILURE
>>>>>>> [INFO] ------------------------------------------------------------------------
>>>>>>> [INFO] Total time: 8.747s
>>>>>>> [INFO] Finished at: Tue Sep 10 11:25:41 CEST 2013
>>>>>>> [INFO] Final Memory: 30M/453M
>>>>>>> [INFO] ------------------------------------------------------------------------
>>>>>>> *[ERROR] Failed to execute goal org.codehaus.mojo:animal-sniffer-maven-plugin:1.6:check (check-java-1.5-compat) on project async-http-client: Signature errors found. Verify them and put @IgnoreJRERequirement on them. -> [Help 1]*
>>>>>>> [ERROR]
>>>>>>> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
>>>>>>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>>>>>>> [ERROR]
>>>>>>> [ERROR] For more information about the errors and possible solutions, please read the following articles:
>>>>>>> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> 2013/9/10 Sébastien Lorber <lorber.sebastien_at_gmail.com <mailto:lorber.sebastien_at_gmail.com>>
>>>>>>>
>>>>>>> Ok thank you, I'll try to implement that today and will give
>>>>>>> you my feedback :)
>>>>>>>
>>>>>>>
>>>>>>> 2013/9/10 Ryan Lubke <ryan.lubke_at_oracle.com <mailto:ryan.lubke_at_oracle.com>>
>>>>>>>
>>>>>>> Okay,
>>>>>>>
>>>>>>> I've committed my initial changes to the AHC repository.
>>>>>>> Here's a summary of the changes:
>>>>>>>
>>>>>>>
>>>>>>> *Improvements to the FeedableBodyGenerator (Grizzly's).
>>>>>>> - Don't allow queueing of data before initiateAsyncTransfer has
>>>>>>>   been invoked. In low-memory heaps, this could lead to an OOM
>>>>>>>   if the source is feeding too fast. The new behavior is to
>>>>>>>   block until initiateAsyncTransfer is called, at which time the
>>>>>>>   blocked thread may proceed with the feed operation.
>>>>>>> - Introduce the concept of a Feeder. Implementations are
>>>>>>>   responsible, at a high level, for:
>>>>>>>   + letting the provider know that data is available to be fed
>>>>>>>     without blocking
>>>>>>>   + allowing the registration of a callback that the Feeder
>>>>>>>     implementation may invoke to signal that more data is
>>>>>>>     available, if it wasn't available at a previous point in time.
>>>>>>> - When using a Feeder with a secure request, the SSL handshake
>>>>>>>   will be kicked off by the initiateAsyncTransfer call, but
>>>>>>>   feeding of data will not occur until the handshake is complete.
>>>>>>>   This is necessary as the SSLFilter will queue up all writes
>>>>>>>   until the handshake is complete, and currently, the buffer
>>>>>>>   isn't tied in with the transport flow control mechanism.
>>>>>>>   NOTE: This new SSL behavior is not currently applied when
>>>>>>>   invoking the feed() method outside the context of a Feeder.
>>>>>>>   Still need to address that.
>>>>>>> - Exposed configuration of the async write queue limit through
>>>>>>>   the FeedableBodyGenerator (see the sketch at the end of this
>>>>>>>   message). This is an improvement on using a TransportCustomizer,
>>>>>>>   as any configuration there is transport-wide, and therefore
>>>>>>>   applied to all Connections. By exposing it here, each feeder
>>>>>>>   may have a different byte limit.
>>>>>>> - Improved documentation for this class*
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> I recommend reading through the javadoc comments in the source
>>>>>>> [1] for FeedableBodyGenerator (comments welcome). Additionally,
>>>>>>> I would re-work your code to leverage the Feeder instead of
>>>>>>> calling feed() directly.
>>>>>>>
>>>>>>> If you have issues implementing Feeder, do let us know.
>>>>>>>
>>>>>>> If you have additional questions, again, let us know.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> -rl
>>>>>>>
>>>>>>> [1] https://github.com/AsyncHttpClient/async-http-client/blob/ahc-1.7.x/src/main/java/com/ning/http/client/providers/grizzly/FeedableBodyGenerator.java
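>>>>>>>
>>>>>>> For example, the per-generator limit might be set like this (the
>>>>>>> setter name is an assumption based on the description above;
>>>>>>> verify against the source in [1]):
>>>>>>>
>>>>>>> FeedableBodyGenerator bodyGenerator = new FeedableBodyGenerator();
>>>>>>> // Cap the async write queue for this generator/feeder only,
>>>>>>> // instead of transport-wide via a TransportCustomizer.
>>>>>>> bodyGenerator.setMaxPendingBytes(4 * 8192);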
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Ryan Lubke wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> Sébastien Lorber
>>>>>>>> wrote:
>>>>>>>>> So in the end I've ended up with an implementation that's
>>>>>>>>> working for me.
>>>>>>>>>
>>>>>>>>> I think there are 2 bugs:
>>>>>>>>>
>>>>>>>>> 1) The bytes can accumulate in the FeedableBodyGenerator queue
>>>>>>>>> if the initialize(ctx) method is not called fast enough. This
>>>>>>>>> can be solved by using a BlockingQueue of size 1 and the put()
>>>>>>>>> method.
>>>>>>>>>
>>>>>>>>> 2) Once the context is injected, the FeedableBodyGenerator
>>>>>>>>> flushes the queue. The problem is that if the connection is
>>>>>>>>> new, not warmed up by a previous request, then the SSL
>>>>>>>>> handshake is not done yet, and it seems that the bytes are
>>>>>>>>> accumulated in some part of the SSL filter, which doesn't
>>>>>>>>> deliver them to the connection until the handshake has
>>>>>>>>> completed, so c.canWrite() continues to return true.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I have replaced some parts of the FeedableBodyGenerator to
>>>>>>>>> test this, and it works pretty well. Here is what I changed:
>>>>>>>>>
>>>>>>>>> 1)
>>>>>>>>> private final BlockingQueue<BodyPart> queue = new LinkedBlockingQueue<BodyPart>(1);
>>>>>>>>>
>>>>>>>>> 2)
>>>>>>>>> public void feed(final Buffer buffer, final boolean last) throws IOException {
>>>>>>>>>     try {
>>>>>>>>>         queue.put(new BodyPart(buffer, last));
>>>>>>>>>     } catch (InterruptedException e) {
>>>>>>>>>         throw new RuntimeException(e);
>>>>>>>>>     }
>>>>>>>>>     queueSize.incrementAndGet();
>>>>>>>>>     if (context != null) {
>>>>>>>>>         blockUntilConnectionIsReadyToWrite(context);
>>>>>>>>>         flushQueue(true);
>>>>>>>>>     }
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> private void blockUntilConnectionIsReadyToWrite(FilterChainContext fcc) {
>>>>>>>>>     while (!connectionIsReadyToWrite(fcc)) {
>>>>>>>>>         try {
>>>>>>>>>             Thread.sleep(10);
>>>>>>>>>         } catch (Exception e) {
>>>>>>>>>             throw new RuntimeException(e);
>>>>>>>>>         }
>>>>>>>>>     }
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> private boolean connectionIsReadyToWrite(FilterChainContext fcc) {
>>>>>>>>>     Connection connection = fcc.getConnection();
>>>>>>>>>     SSLEngine sslEngine = SSLUtils.getSSLEngine(connection);
>>>>>>>>>     return sslEngine != null && !SSLUtils.isHandshaking(sslEngine);
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> What do you think?
>>>>>>>>
>>>>>>>> We had come to similar conclusions on this end. I'm still
>>>>>>>> working through testing the idea I mentioned previously (took
>>>>>>>> longer than I expected - sorry). I hope to have something for
>>>>>>>> you to test very soon.
>>>>>>>>
>>>>>>>> Note that it will be taking the above into account as well.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2013/9/5 Sébastien Lorber <lorber.sebastien_at_gmail.com <mailto:lorber.sebastien_at_gmail.com>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I have tried to put a while (context == null) Thread.sleep
>>>>>>>>> loop, but it doesn't seem to work: when the context gets
>>>>>>>>> injected, after the sleeps, there's an OOM.
>>>>>>>>>
>>>>>>>>> So I hope you'll have more success with your alternative :)
>>>>>>>>>
>>>>>>>>> I have done another test. Remember my code that worked, which
>>>>>>>>> previously "warmed" the thread with a useless request:
>>>>>>>>>
>>>>>>>>> private Runnable uploadSingleDocumentRunnable = new Runnable() {
>>>>>>>>>     @Override
>>>>>>>>>     public void run() {
>>>>>>>>>         try {
>>>>>>>>>             getUselessSessionCode();
>>>>>>>>>             Thread.sleep(X);
>>>>>>>>>             uploadSingleDocument();
>>>>>>>>>         } catch (Exception e) {
>>>>>>>>>             throw new RuntimeException("file upload failed", e);
>>>>>>>>>         }
>>>>>>>>>     }
>>>>>>>>> };
>>>>>>>>>
>>>>>>>>> I have put a sleep of X between the useless warmup request
>>>>>>>>> and the real upload request.
>>>>>>>>>
>>>>>>>>> What I noticed is that the behavior is very different
>>>>>>>>> according to the value of X:
>>>>>>>>>
>>>>>>>>> Under 10 seconds, it seems the stuff is still warm; I can
>>>>>>>>> upload the documents.
>>>>>>>>> Around 10 seconds, I get a stack which seems to be
>>>>>>>>> "connection closed" or something.
>>>>>>>>> Above 10 seconds, I get OOMs as if the stuff wasn't warm.
>>>>>>>>>
>>>>>>>>> The stacks I get around 10 seconds look like:
>>>>>>>>>
>>>>>>>>> Caused by: javax.net.ssl.SSLException: SSLEngine is CLOSED
>>>>>>>>> at org.glassfish.grizzly.ssl.SSLConnectionContext.wrap(SSLConnectionContext.java:295) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at org.glassfish.grizzly.ssl.SSLConnectionContext.wrapAll(SSLConnectionContext.java:238) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at org.glassfish.grizzly.ssl.SSLBaseFilter.wrapAll(SSLBaseFilter.java:405) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at org.glassfish.grizzly.ssl.SSLBaseFilter.handleWrite(SSLBaseFilter.java:320) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at org.glassfish.grizzly.ssl.SSLFilter.accurateWrite(SSLFilter.java:255) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at org.glassfish.grizzly.ssl.SSLFilter.handleWrite(SSLFilter.java:143) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$SwitchingSSLFilter.handleWrite(GrizzlyAsyncHttpProvider.java:2500) ~[async-http-client-1.7.20-SNAPSHOT.jar:na]
>>>>>>>>> at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:111) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:853) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:720) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator.flushQueue(FeedableBodyGenerator.java:133) ~[async-http-client-1.7.20-SNAPSHOT.jar:na]
>>>>>>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator.feed(FeedableBodyGenerator.java:95) ~[async-http-client-1.7.20-SNAPSHOT.jar:na]
>>>>>>>>>
>>>>>>>>> I think I got some other, different stacks saying "Connection
>>>>>>>>> Closed Remotely" or something like that.
>>>>>>>>>
>>>>>>>>> So it seems that something is bound to my thread, and it
>>>>>>>>> stays bound to it for about 10 seconds. Do you have any idea
>>>>>>>>> what it could be? (My connection timeout setting seems to
>>>>>>>>> have no effect on this 10s threshold.)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2013/9/5 Ryan Lubke <ryan.lubke_at_oracle.com <mailto:ryan.lubke_at_oracle.com>>
>>>>>>>>>
>>>>>>>>> That is one solution. I'm working out an alternative right
>>>>>>>>> now. Stay tuned!
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Anyway it's not a problem; I think the
>>>>>>>>>> FeedableBodyGenerator.feed() method just has to block until
>>>>>>>>>> a context has been set (and the ThreadCache initialized) to
>>>>>>>>>> avoid OOM errors.
>>>>>>>>>>
>>>>>>>>>> 2013/9/5 Sébastien Lorber <lorber.sebastien_at_gmail.com <mailto:lorber.sebastien_at_gmail.com>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> What is very strange is that I tested with/without the same
>>>>>>>>>> sessionCode with our previous code, the one not using the
>>>>>>>>>> FeedableBodyGenerator, which has a high memory consumption.
>>>>>>>>>> Despite the high memory consumption, it seems to work fine
>>>>>>>>>> for uploading multiple documents when allocated a large
>>>>>>>>>> heap, and the sessionCode seems to have no effect.
>>>>>>>>>>
>>>>>>>>>> On the new impl using the FeedableBodyGenerator, the
>>>>>>>>>> sessionCode sent as a multipart bodypart seems to have an
>>>>>>>>>> effect.
>>>>>>>>>>
>>>>>>>>>> I have tried to feed the queue before sending the request
>>>>>>>>>> to AHC, but this leads to this exception (with or without
>>>>>>>>>> sessionCode switching):
>>>>>>>>>>
>>>>>>>>>> Caused by: java.util.concurrent.TimeoutException: Timeout exceeded
>>>>>>>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider.timeout(GrizzlyAsyncHttpProvider.java:528)
>>>>>>>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$3.onTimeout(GrizzlyAsyncHttpProvider.java:361)
>>>>>>>>>> at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:383)
>>>>>>>>>> at org.glassfish.grizzly.utils.IdleTimeoutFilter$DefaultWorker.doWork(IdleTimeoutFilter.java:362)
>>>>>>>>>> at org.glassfish.grizzly.utils.DelayedExecutor$DelayedRunnable.run(DelayedExecutor.java:158)
>>>>>>>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>>>>>>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2013/9/5 Sébastien Lorber <lorber.sebastien_at_gmail.com <mailto:lorber.sebastien_at_gmail.com>>
>>>>>>>>>>
>>>>>>>>>> By the way, using a low timeout with the same sessionCode,
>>>>>>>>>> I got the following NPE:
>>>>>>>>>>
>>>>>>>>>> Caused by: java.lang.NullPointerException
>>>>>>>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator.block(FeedableBodyGenerator.java:184)
>>>>>>>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator.blockUntilQueueFree(FeedableBodyGenerator.java:167)
>>>>>>>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator.flushQueue(FeedableBodyGenerator.java:124)
>>>>>>>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator.feed(FeedableBodyGenerator.java:94)
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> The AHC code where it happens:
>>>>>>>>>>
>>>>>>>>>> GrizzlyAsyncHttpProvider.HttpTransactionContext httpCtx = getHttpTransactionContext(c);
>>>>>>>>>> httpCtx.abort(e.getCause());
>>>>>>>>>>
>>>>>>>>>> I guess the httpCtx is not yet available to be aborted.
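>>>>>>>>>>
>>>>>>>>>> (A defensive variant, sketched for illustration only, not
>>>>>>>>>> the actual AHC fix:)
>>>>>>>>>>
>>>>>>>>>> GrizzlyAsyncHttpProvider.HttpTransactionContext httpCtx = getHttpTransactionContext(c);
>>>>>>>>>> if (httpCtx != null) { // the context may not be attached to the connection yet
>>>>>>>>>>     httpCtx.abort(e.getCause());
>>>>>>>>>> }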
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2013/9/5 Sébastien Lorber <lorber.sebastien_at_gmail.com <mailto:lorber.sebastien_at_gmail.com>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> This is right; here's a log I get when I use the same
>>>>>>>>>> session code, i.e. the remote host is blocking the data or
>>>>>>>>>> something. This is obtained by running 5 parallel uploads.
>>>>>>>>>>
>>>>>>>>>> *Flushing queue of size 0 with allowBlocking = false*
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> *Flushing queue of size 97 with allowBlocking = false*
>>>>>>>>>> *Flushing queue of size 100 with allowBlocking = false*
>>>>>>>>>> *Flushing queue of size 160 with allowBlocking = true*
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 0 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>>>>>>>> Dumping heap to /ome/lorber/ureau/om ...
>>>>>>>>>> Unable to create /ome/lorber/ureau/om: Le fichier existe
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Disconnected from the target VM, address: '127.0.0.1:49268
>>>>>>>>>> <http://127.0.0.1:49268>', transport: 'socket'
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Otherwise, with different session codes, I get the following:
>>>>>>>>>>
>>>>>>>>>> *Flushing queue of size 0 with allowBlocking = false*
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> *Flushing queue of size 0 with allowBlocking = false*
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> *Flushing queue of size 0 with allowBlocking = false*
>>>>>>>>>> *Flushing queue of size 0 with allowBlocking = false*
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> Flushing queue of size 1 with allowBlocking = true
>>>>>>>>>> ... and this continues without OOM
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> So, this seems to be the problem.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I think it would be great to be able to choose the queue impl
>>>>>>>>>> behind that FeedableBodyGenerator, like I suggested in my pull
>>>>>>>>>> request.
>>>>>>>>>>
>>>>>>>>>> See here:
>>>>>>>>>> https://github.com/slorber/async-http-client/blob/79b0c3b28a61b0aa4c4b055bca8f6be11d9ed1e6/src/main/java/com/ning/http/client/providers/grizzly/FeedableBodyGenerator.java
>>>>>>>>>>
>>>>>>>>>> Using a LinkedBlockingQueue seems to be a nice idea in this
>>>>>>>>>> context, and in my case I would probably use a queue of size 1
>>>>>>>>>> (see the sketch below).
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> This would handle the blocking of the feed method, without
>>>>>>>>>> having to use this:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> if (context != null) {
>>>>>>>>>>     flushQueue(true);
>>>>>>>>>> }
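>>>>>>>>>>
>>>>>>>>>> As a rough sketch of the queue-based approach (a hypothetical
>>>>>>>>>> class of mine, not the current AHC code), the bounded queue
>>>>>>>>>> alone gives the back-pressure:
>>>>>>>>>>
>>>>>>>>>> import java.util.concurrent.BlockingQueue;
>>>>>>>>>> import java.util.concurrent.LinkedBlockingQueue;
>>>>>>>>>> import org.glassfish.grizzly.Buffer;
>>>>>>>>>>
>>>>>>>>>> // Sketch: a bounded queue makes feed() block as soon as the
>>>>>>>>>> // consumer falls behind, so the producer can never run ahead
>>>>>>>>>> // of the transport and exhaust the heap.
>>>>>>>>>> public class BoundedFeeder {
>>>>>>>>>>     // capacity 1: at most one buffer ever waits to be written
>>>>>>>>>>     private final BlockingQueue<Buffer> queue =
>>>>>>>>>>             new LinkedBlockingQueue<Buffer>(1);
>>>>>>>>>>
>>>>>>>>>>     public void feed(Buffer buffer) throws InterruptedException {
>>>>>>>>>>         queue.put(buffer);   // blocks while the queue is full
>>>>>>>>>>     }
>>>>>>>>>>
>>>>>>>>>>     public Buffer nextChunk() throws InterruptedException {
>>>>>>>>>>         return queue.take(); // consumer side (the flush loop)
>>>>>>>>>>     }
>>>>>>>>>> }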
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Or perhaps the feed() method has to wait until a context is
>>>>>>>>>> set in the BodyGenerator?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I think it would be clearer if initializeAsynchronousTransfer
>>>>>>>>>> simply didn't flush the queue but just set up the context.
>>>>>>>>>> Then the feed method would block until there's a context set,
>>>>>>>>>> and then flush the queue with blocking behavior.
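>>>>>>>>>>
>>>>>>>>>> A minimal sketch of that idea (field and method names are
>>>>>>>>>> mine, not the real FeedableBodyGenerator internals):
>>>>>>>>>>
>>>>>>>>>> import java.util.concurrent.BlockingQueue;
>>>>>>>>>> import java.util.concurrent.CountDownLatch;
>>>>>>>>>> import java.util.concurrent.LinkedBlockingQueue;
>>>>>>>>>> import org.glassfish.grizzly.Buffer;
>>>>>>>>>> import org.glassfish.grizzly.filterchain.FilterChainContext;
>>>>>>>>>>
>>>>>>>>>> // initializeAsynchronousTransfer() only publishes the context;
>>>>>>>>>> // feed() parks until it exists, then flushes in blocking mode.
>>>>>>>>>> public class ContextAwareFeeder {
>>>>>>>>>>     private final BlockingQueue<Buffer> queue =
>>>>>>>>>>             new LinkedBlockingQueue<Buffer>();
>>>>>>>>>>     private final CountDownLatch contextSet = new CountDownLatch(1);
>>>>>>>>>>     private volatile FilterChainContext context;
>>>>>>>>>>
>>>>>>>>>>     public void initializeAsynchronousTransfer(FilterChainContext ctx) {
>>>>>>>>>>         context = ctx;
>>>>>>>>>>         contextSet.countDown();  // release any feeder blocked below
>>>>>>>>>>     }
>>>>>>>>>>
>>>>>>>>>>     public void feed(Buffer buffer) throws InterruptedException {
>>>>>>>>>>         contextSet.await();      // block until request dispatched
>>>>>>>>>>         queue.offer(buffer);
>>>>>>>>>>         // here a flushQueue(true) would drain 'queue' to 'context',
>>>>>>>>>>         // blocking when the transport write queue is saturated
>>>>>>>>>>     }
>>>>>>>>>> }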
>>>>>>>>>>
>>>>>>>>>> This is probably the next step, but as we are using AHC for
>>>>>>>>>> async, it would probably be great if that blocking feed()
>>>>>>>>>> method was called in a worker thread instead of our main
>>>>>>>>>> thread.
>>>>>>>>>>
>>>>>>>>>> I won't use this, but someone who really wants a non-blocking
>>>>>>>>>> impl of performant multipart file upload would probably need
>>>>>>>>>> it, or will use an ExecutorService for the feeding operations
>>>>>>>>>> as a workaround.
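>>>>>>>>>>
>>>>>>>>>> For instance, the workaround could look like this ('feeder' is
>>>>>>>>>> a placeholder for whatever object drives the blocking feed
>>>>>>>>>> loop; just a sketch):
>>>>>>>>>>
>>>>>>>>>> import java.util.concurrent.ExecutorService;
>>>>>>>>>> import java.util.concurrent.Executors;
>>>>>>>>>>
>>>>>>>>>> // Run the blocking feed off the caller's thread so the main
>>>>>>>>>> // thread stays non-blocking.
>>>>>>>>>> final ExecutorService feedExecutor = Executors.newSingleThreadExecutor();
>>>>>>>>>> feedExecutor.execute(new Runnable() {
>>>>>>>>>>     @Override
>>>>>>>>>>     public void run() {
>>>>>>>>>>         try {
>>>>>>>>>>             feeder.feed();   // may block; that's now harmless
>>>>>>>>>>         } catch (Exception e) {
>>>>>>>>>>             e.printStackTrace();
>>>>>>>>>>         }
>>>>>>>>>>     }
>>>>>>>>>> });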
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks again for your responsiveness
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2013/9/4 Ryan Lubke <ryan.lubke_at_oracle.com
>>>>>>>>>> <mailto:ryan.lubke_at_oracle.com>>
>>>>>>>>>>
>>>>>>>>>> Sébastien Lorber wrote:
>>>>>>>>>>
>>>>>>>>>>> I've integrated this change and it works fine, except for a
>>>>>>>>>>> little detail.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I'm uploading files to a third party API (a bit like S3).
>>>>>>>>>>> This API requires a "sessionCode" in each request. So there
>>>>>>>>>>> is a multipart StringPart with that SessionCode.
>>>>>>>>>>>
>>>>>>>>>>> We used to have a cache which holds the sessionCode 30min per
>>>>>>>>>>> user, so that we do not need to init a new session each time.
>>>>>>>>>>>
>>>>>>>>>>> I had troubles in this very specific case: when I upload
>>>>>>>>>>> 5 docs with the same session code. When I remove the cache
>>>>>>>>>>> and use 5 different session codes, it works fine.
>>>>>>>>>>>
>>>>>>>>>>> I guess the remote service is blocking concurrent uploads
>>>>>>>>>>> with the same session code. I don't know at all.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Where I'm going with this is that I wouldn't have expected
>>>>>>>>>>> Grizzly to OOM:
>>>>>>>>>>>
>>>>>>>>>>> Avertissement: Exception during FilterChain execution
>>>>>>>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>>>>>>> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>>>>>>>>>>> at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLUtils.allocateOutputBuffer(SSLUtils.java:342)
>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLBaseFilter$2.grow(SSLBaseFilter.java:117)
>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLConnectionContext.ensureBufferSize(SSLConnectionContext.java:392)
>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLConnectionContext.wrap(SSLConnectionContext.java:272)
>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLConnectionContext.wrapAll(SSLConnectionContext.java:238)
>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLBaseFilter.wrapAll(SSLBaseFilter.java:405)
>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLBaseFilter.handleWrite(SSLBaseFilter.java:320)
>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLFilter.accurateWrite(SSLFilter.java:263)
>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLFilter.handleWrite(SSLFilter.java:143)
>>>>>>>>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$SwitchingSSLFilter.handleWrite(GrizzlyAsyncHttpProvider.java:2500)
>>>>>>>>>>> at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:111)
>>>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288)
>>>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206)
>>>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136)
>>>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114)
>>>>>>>>>>> at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
>>>>>>>>>>> at org.glassfish.grizzly.filterchain.FilterChainContext$1.run(FilterChainContext.java:196)
>>>>>>>>>>> at org.glassfish.grizzly.filterchain.FilterChainContext.resume(FilterChainContext.java:220)
>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLFilter$SSLHandshakeContext.completed(SSLFilter.java:383)
>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLFilter.notifyHandshakeComplete(SSLFilter.java:278)
>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLBaseFilter.handleRead(SSLBaseFilter.java:275)
>>>>>>>>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$SwitchingSSLFilter.handleRead(GrizzlyAsyncHttpProvider.java:2490)
>>>>>>>>>>> at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
>>>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288)
>>>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206)
>>>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136)
>>>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114)
>>>>>>>>>>> at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
>>>>>>>>>>> at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:546)
>>>>>>>>>>> at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:113)
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Caused by: java.util.concurrent.TimeoutException: null
>>>>>>>>>>> at org.glassfish.grizzly.impl.SafeFutureImpl$Sync.innerGet(SafeFutureImpl.java:367) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>>>> at org.glassfish.grizzly.impl.SafeFutureImpl.get(SafeFutureImpl.java:274) ~[grizzly-framework-2.3.5.jar:2.3.5]
>>>>>>>>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator.block(FeedableBodyGenerator.java:177) ~[async-http-client-1.7.20-204092c.jar:na]
>>>>>>>>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator.blockUntilQueueFree(FeedableBodyGenerator.java:167) ~[async-http-client-1.7.20-204092c.jar:na]
>>>>>>>>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator.flushQueue(FeedableBodyGenerator.java:124) ~[async-http-client-1.7.20-204092c.jar:na]
>>>>>>>>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator.feed(FeedableBodyGenerator.java:94) ~[async-http-client-1.7.20-204092c.jar:na]
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> multipart.body.generator.feeder.buffer=100000
>>>>>>>>>>> -> size of each Buffer sent to the FeedableBodyGenerator
>>>>>>>>>>> transport.max.pending.bytes=1000000
>>>>>>>>>>> (I tried other settings, including AUTO_SIZE)
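>>>>>>>>>>>
>>>>>>>>>>> For reference, this is roughly how transport.max.pending.bytes
>>>>>>>>>>> gets applied on our side (a sketch; the property names are
>>>>>>>>>>> from our own app config, and I'm assuming the customize()
>>>>>>>>>>> signature from the provider source):
>>>>>>>>>>>
>>>>>>>>>>> import org.glassfish.grizzly.filterchain.FilterChainBuilder;
>>>>>>>>>>> import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
>>>>>>>>>>> import com.ning.http.client.providers.grizzly.TransportCustomizer;
>>>>>>>>>>>
>>>>>>>>>>> // Applies our transport.max.pending.bytes setting (here 1000000)
>>>>>>>>>>> // to the async writer's pending-bytes limit.
>>>>>>>>>>> TransportCustomizer customizer = new TransportCustomizer() {
>>>>>>>>>>>     @Override
>>>>>>>>>>>     public void customize(TCPNIOTransport transport,
>>>>>>>>>>>                           FilterChainBuilder builder) {
>>>>>>>>>>>         transport.getAsyncQueueIO().getWriter()
>>>>>>>>>>>                  .setMaxPendingBytesPerConnection(1000000);
>>>>>>>>>>>     }
>>>>>>>>>>> };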
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Do you have any idea why there is an OOM with these settings?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Perhaps it is because the feed() method of
>>>>>>>>>>> FeedableBodyGenerator doesn't block until the context is
>>>>>>>>>>> initialized.
>>>>>>>>>>>
>>>>>>>>>>> I guess the initializeAsynchronousTransfer is called only
>>>>>>>>>>> once the connection is established, and perhaps a lot of
>>>>>>>>>>> Buffers are added to the queue...
>>>>>>>>>> Yes, it's invoked once the request has been dispatched, so if
>>>>>>>>>> the generator is fed a lot before the request, I could see
>>>>>>>>>> this happening. I'll see what I can do to alleviate that case.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> But I'm not sure at all, because the session code is
>>>>>>>>>>> transmitted as a BodyPart and I get the same problem if I put
>>>>>>>>>>> it as the first or last multipart.
>>>>>>>>>>>
>>>>>>>>>>> It's not a big deal; perhaps I should always use a different
>>>>>>>>>>> session code for concurrent operations, but I'd like to be
>>>>>>>>>>> sure that we won't have this issue in production...
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> 2013/9/3 Ryan Lubke <ryan.lubke_at_oracle.com
>>>>>>>>>>> <mailto:ryan.lubke_at_oracle.com>>
>>>>>>>>>>>
>>>>>>>>>>> Good catch. Fixed.
>>>>>>>>>>>
>>>>>>>>>>> Sébastien Lorber wrote:
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Hello,
>>>>>>>>>>>>
>>>>>>>>>>>> There's a little mistake in the grizzly ahc provider
>>>>>>>>>>>> relative to the write queue size:
>>>>>>>>>>>>
>>>>>>>>>>>> https://github.com/AsyncHttpClient/async-http-client/blob/b5d97efe9fe14113ea92fb1f7db192a2d090fad7/src/main/java/com/ning/http/client/providers/grizzly/GrizzlyAsyncHttpProvider.java#L419
>>>>>>>>>>>>
>>>>>>>>>>>> As you can see, the TransportCustomizer is called, and then
>>>>>>>>>>>> the write queue size (among other things) is set to
>>>>>>>>>>>> AUTO_SIZE (instead of previously UNLIMITED):
>>>>>>>>>>>>
>>>>>>>>>>>> clientTransport.getAsyncQueueIO().getWriter()
>>>>>>>>>>>>     .setMaxPendingBytesPerConnection(AsyncQueueWriter.AUTO_SIZE);
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I think the default settings like this AUTO_SIZE attribute
>>>>>>>>>>>> should be set before the customization of the transport, or
>>>>>>>>>>>> they would override the value we customized.
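>>>>>>>>>>>>
>>>>>>>>>>>> In other words, something like this ordering (sketch only,
>>>>>>>>>>>> variable names as in the provider source):
>>>>>>>>>>>>
>>>>>>>>>>>> // apply the provider default first...
>>>>>>>>>>>> clientTransport.getAsyncQueueIO().getWriter()
>>>>>>>>>>>>     .setMaxPendingBytesPerConnection(AsyncQueueWriter.AUTO_SIZE);
>>>>>>>>>>>>
>>>>>>>>>>>> // ...then let the user's TransportCustomizer have the last word
>>>>>>>>>>>> if (customizer != null) {
>>>>>>>>>>>>     customizer.customize(clientTransport, filterChainBuilder);
>>>>>>>>>>>> }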
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> This is actually my case, since I can't reproduce my "bug",
>>>>>>>>>>>> which is "high memory consumption", even when using -1 /
>>>>>>>>>>>> UNLIMITED in the TransportCustomizer.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> This could work fine for me with AUTO_SIZE, but I'd rather
>>>>>>>>>>>> be able to tune this parameter during load tests to see the
>>>>>>>>>>>> effect.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> 2013/8/31 Sebastien Lorber <lorber.sebastien_at_gmail.com
>>>>>>>>>>>> <mailto:lorber.sebastien_at_gmail.com>>
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks, I will check that on Monday. I can now upload a 500m
>>>>>>>>>>>> file with a 40m heap size ;)
>>>>>>>>>>>>
>>>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>>>
>>>>>>>>>>>> On 30 Aug 2013, at 20:49, Ryan Lubke
>>>>>>>>>>>> <ryan.lubke_at_oracle.com
>>>>>>>>>>>> <mailto:ryan.lubke_at_oracle.com>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> I'm going to be updating the Grizzly provider such that
>>>>>>>>>>>>> AUTO_SIZE (not AUTO_TUNE) is the default, so you can avoid
>>>>>>>>>>>>> the use of the TransportCustomizer.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ryan Lubke wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Regarding your tuning question, I would probably set the
>>>>>>>>>>>>>> value to AsyncQueueWriter.AUTO_TUNE (this will be four
>>>>>>>>>>>>>> times the socket write buffer) and see how that works.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ryan Lubke wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> A question first: with these changes, is your memory
>>>>>>>>>>>>>>> usage more in line with what you were looking for?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Sébastien Lorber wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> By the way, do you have any idea when 1.7.20 will be
>>>>>>>>>>>>>>>> released (with these new improvements)?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> We would like to know whether to wait for a release or
>>>>>>>>>>>>>>>> to install our own temp release on Nexus :)
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 2013/8/30 Sébastien Lorber <lorber.sebastien_at_gmail.com
>>>>>>>>>>>>>>>> <mailto:lorber.sebastien_at_gmail.com>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Thank you, it works fine!
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I just had to modify a single line after your commit:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider#initializeTransport ->
>>>>>>>>>>>>>>>> clientTransport.getAsyncQueueIO().getWriter().setMaxPendingBytesPerConnection(10000);
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> If I leave the initial value (-1) it won't block;
>>>>>>>>>>>>>>>> canWrite always returns true.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Btw, on AHC I didn't find any way to pass this value as
>>>>>>>>>>>>>>>> a config attribute, nor the size of the write buffer you
>>>>>>>>>>>>>>>> talked about.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So in the end, is there a way with the current AHC code
>>>>>>>>>>>>>>>> to get this "canWrite = false" behavior? If not, can you
>>>>>>>>>>>>>>>> please provide a way to set this behavior in v1.7.20?
>>>>>>>>>>>>>>>> Thanks.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> PS: does it make sense to use the same number of bytes
>>>>>>>>>>>>>>>> in the feed(Buffer) method and in
>>>>>>>>>>>>>>>> setMaxPendingBytesPerConnection(10000)? Do you have any
>>>>>>>>>>>>>>>> tuning recommendation?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 2013/8/29 Ryan Lubke <ryan.lubke_at_oracle.com
>>>>>>>>>>>>>>>> <mailto:ryan.lubke_at_oracle.com>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Please disregard.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Ryan Lubke wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Sébastien,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Could you also provide a sample of how you're
>>>>>>>>>>>>>>>>> performing your feed?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>> -rl
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Ryan Lubke wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Sébastien,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I'd recommend looking at Connection.canWrite() [1] and
>>>>>>>>>>>>>>>>>> Connection.notifyCanWrite(WriteListener) [1].
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> By default, Grizzly will configure the async write
>>>>>>>>>>>>>>>>>> queue length to be four times the write buffer size
>>>>>>>>>>>>>>>>>> (which is based off the socket write buffer). When
>>>>>>>>>>>>>>>>>> this queue exceeds this value, canWrite() will return
>>>>>>>>>>>>>>>>>> false.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> When this occurs, you can register a WriteListener to
>>>>>>>>>>>>>>>>>> be notified when the queue length is below the
>>>>>>>>>>>>>>>>>> configured max, and then simulate blocking until the
>>>>>>>>>>>>>>>>>> onWritePossible() callback has been invoked.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> ----------------------------------------------------------------
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> final FutureImpl<Boolean> future = Futures.createSafeFuture();
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> // Connection may be obtained by calling
>>>>>>>>>>>>>>>>>> // FilterChainContext.getConnection().
>>>>>>>>>>>>>>>>>> connection.notifyCanWrite(new WriteHandler() {
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>     @Override
>>>>>>>>>>>>>>>>>>     public void onWritePossible() throws Exception {
>>>>>>>>>>>>>>>>>>         future.result(Boolean.TRUE);
>>>>>>>>>>>>>>>>>>     }
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>     @Override
>>>>>>>>>>>>>>>>>>     public void onError(Throwable t) {
>>>>>>>>>>>>>>>>>>         future.failure(Exceptions.makeIOException(t));
>>>>>>>>>>>>>>>>>>     }
>>>>>>>>>>>>>>>>>> });
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> try {
>>>>>>>>>>>>>>>>>>     final long writeTimeout = 30;
>>>>>>>>>>>>>>>>>>     future.get(writeTimeout, TimeUnit.MILLISECONDS);
>>>>>>>>>>>>>>>>>> } catch (ExecutionException e) {
>>>>>>>>>>>>>>>>>>     HttpTransactionContext httpCtx = HttpTransactionContext.get(connection);
>>>>>>>>>>>>>>>>>>     httpCtx.abort(e.getCause());
>>>>>>>>>>>>>>>>>> } catch (Exception e) {
>>>>>>>>>>>>>>>>>>     HttpTransactionContext httpCtx = HttpTransactionContext.get(connection);
>>>>>>>>>>>>>>>>>>     httpCtx.abort(e);
>>>>>>>>>>>>>>>>>> }
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> ---------------------------------------------------------------------
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [1] http://grizzly.java.net/docs/2.3/apidocs/org/glassfish/grizzly/OutputSink.html
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Sébastien Lorber wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Ryan, I've done some other tests.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> It seems that using a blocking queue in the
>>>>>>>>>>>>>>>>>>> FeedableBodyGenerator is totally useless, because the
>>>>>>>>>>>>>>>>>>> thread consuming it is not blocking and the queue
>>>>>>>>>>>>>>>>>>> never blocks the feeding, which was my intention in
>>>>>>>>>>>>>>>>>>> the beginning. Maybe it depends on the IO strategy
>>>>>>>>>>>>>>>>>>> used?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I use the AHC default, which seems to be
>>>>>>>>>>>>>>>>>>> SameThreadIOStrategy, so I don't think it's related
>>>>>>>>>>>>>>>>>>> to the IO strategy.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> So in the end I can upload a 70m file with a heap of
>>>>>>>>>>>>>>>>>>> 50m, but I have to put a Thread.sleep(30) between
>>>>>>>>>>>>>>>>>>> each 100k Buffer sent to the FeedableBodyGenerator.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The connection with the server is not good here, but
>>>>>>>>>>>>>>>>>>> in production it is normally a lot better, as far as
>>>>>>>>>>>>>>>>>>> I know.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I've tried things like
>>>>>>>>>>>>>>>>>>> clientTransport.getAsyncQueueIO().getWriter().setMaxPendingBytesPerConnection(100000);
>>>>>>>>>>>>>>>>>>> but it doesn't seem to work for me.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I'd like the Grizzly internals to block when there
>>>>>>>>>>>>>>>>>>> are too many pending bytes to send. Is that possible?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> PS: I've just been able to send a 500mo file with a
>>>>>>>>>>>>>>>>>>> 100mo heap, but it needed a sleep of 100ms between
>>>>>>>>>>>>>>>>>>> each 100k Buffer sent to the bodygenerator.
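>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The throttled feed loop looks roughly like this
>>>>>>>>>>>>>>>>>>> (sketch; I'm assuming feed(Buffer, boolean last) as
>>>>>>>>>>>>>>>>>>> the generator's signature, and 'mm' is the
>>>>>>>>>>>>>>>>>>> transport's MemoryManager):
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> InputStream in = new FileInputStream("/path/to/big.file");
>>>>>>>>>>>>>>>>>>> byte[] chunk = new byte[100 * 1024];
>>>>>>>>>>>>>>>>>>> int read;
>>>>>>>>>>>>>>>>>>> while ((read = in.read(chunk)) != -1) {
>>>>>>>>>>>>>>>>>>>     // copy: Buffers.wrap() does not copy, and we reuse the array
>>>>>>>>>>>>>>>>>>>     Buffer buffer = Buffers.wrap(mm, java.util.Arrays.copyOf(chunk, read));
>>>>>>>>>>>>>>>>>>>     bodyGenerator.feed(buffer, false);
>>>>>>>>>>>>>>>>>>>     Thread.sleep(100);  // crude back-pressure
>>>>>>>>>>>>>>>>>>> }
>>>>>>>>>>>>>>>>>>> bodyGenerator.feed(Buffers.EMPTY_BUFFER, true);  // last chunk
>>>>>>>>>>>>>>>>>>> in.close();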
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 2013/8/29 Sébastien Lorber <lorber.sebastien_at_gmail.com
>>>>>>>>>>>>>>>>>>> <mailto:lorber.sebastien_at_gmail.com>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> By chance, do you know if I can remove the
>>>>>>>>>>>>>>>>>>> MessageCloner used in the SSL filter?
>>>>>>>>>>>>>>>>>>> SSLBaseFilter$OnWriteCopyCloner
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> It seems to allocate a lot of memory. I don't really
>>>>>>>>>>>>>>>>>>> understand why messages have to be cloned. Can I
>>>>>>>>>>>>>>>>>>> remove this? How?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 2013/8/29 Sébastien Lorber <lorber.sebastien_at_gmail.com
>>>>>>>>>>>>>>>>>>> <mailto:lorber.sebastien_at_gmail.com>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I'm trying to send a 500m file for my tests, with a
>>>>>>>>>>>>>>>>>>> heap of 400m.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> In our real use cases we would probably have files
>>>>>>>>>>>>>>>>>>> under 20mo, but we want to reduce the memory
>>>>>>>>>>>>>>>>>>> consumption because we can have x parallel uploads on
>>>>>>>>>>>>>>>>>>> the same server according to the user activity.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I'll try to check if using this BodyGenerator reduced
>>>>>>>>>>>>>>>>>>> the memory footprint or if it's almost like before.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 2013/8/28 Ryan Lubke <ryan.lubke_at_oracle.com
>>>>>>>>>>>>>>>>>>> <mailto:ryan.lubke_at_oracle.com>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> At this point in time, as far as the SSL buffer
>>>>>>>>>>>>>>>>>>> allocation is concerned, it's untunable. That said,
>>>>>>>>>>>>>>>>>>> feel free to open a feature request.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> As to your second question, there is no suggested
>>>>>>>>>>>>>>>>>>> size. This is all very application specific.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I'm curious, how large of a file are you sending?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Sébastien Lorber wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I have seen a lot of buffers which have a size of
>>>>>>>>>>>>>>>>>>>> 33842, and it seems the limit is near half the
>>>>>>>>>>>>>>>>>>>> capacity.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Perhaps there's a way to tune that buffer size so
>>>>>>>>>>>>>>>>>>>> that it consumes less memory? Is there an ideal
>>>>>>>>>>>>>>>>>>>> Buffer size to send to the feed method?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> 2013/8/28 Ryan Lubke <ryan.lubke_at_oracle.com
>>>>>>>>>>>>>>>>>>>> <mailto:ryan.lubke_at_oracle.com>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I'll be reviewing the PR today, thanks again!
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Regarding the OOM: as it stands now, for each new
>>>>>>>>>>>>>>>>>>>> buffer that is passed to the SSLFilter, we allocate
>>>>>>>>>>>>>>>>>>>> a buffer twice the size in order to accommodate the
>>>>>>>>>>>>>>>>>>>> encrypted result. So there's an increase.
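>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> (As a rough illustration: a 100 KB chunk fed to the
>>>>>>>>>>>>>>>>>>>> generator becomes roughly a 200 KB SSL output
>>>>>>>>>>>>>>>>>>>> buffer, so a few dozen queued writes on a slow
>>>>>>>>>>>>>>>>>>>> connection can already pin several MB per upload.)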
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Depending on the socket configurations of both
>>>>>>>>>>>>>>>>>>>> endpoints, and how fast the remote is reading data,
>>>>>>>>>>>>>>>>>>>> it could be the write queue is becoming too large.
>>>>>>>>>>>>>>>>>>>> We do have a way to detect this situation, but I'm
>>>>>>>>>>>>>>>>>>>> pretty sure the Grizzly internals are currently
>>>>>>>>>>>>>>>>>>>> shielded here. I will see what I can do to allow
>>>>>>>>>>>>>>>>>>>> users to leverage this.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Sébastien Lorber wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Hello,
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> I've made my pull request:
>>>>>>>>>>>>>>>>>>>>> https://github.com/AsyncHttpClient/async-http-client/pull/367
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> With my use case it works; the file is uploaded
>>>>>>>>>>>>>>>>>>>>> like before.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> But I didn't notice a big memory improvement.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Is it possible that SSL doesn't allow streaming the
>>>>>>>>>>>>>>>>>>>>> body, or something like that?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> In memory, I have a lot of:
>>>>>>>>>>>>>>>>>>>>> - HeapByteBuffer,
>>>>>>>>>>>>>>>>>>>>>   which are held by SSLUtils$3,
>>>>>>>>>>>>>>>>>>>>>   which are held by BufferBuffers,
>>>>>>>>>>>>>>>>>>>>>   which are held by WriteResult,
>>>>>>>>>>>>>>>>>>>>>   which are held by AsyncWriteQueueRecord
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Here is an example of the OOM stacktrace:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>>>>>>>>>>>>>>>>> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>>>>>>>>>>>>>>>>>>>>> at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLUtils.allocateOutputBuffer(SSLUtils.java:342)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLBaseFilter$2.grow(SSLBaseFilter.java:117)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLConnectionContext.ensureBufferSize(SSLConnectionContext.java:392)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLConnectionContext.wrap(SSLConnectionContext.java:272)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLConnectionContext.wrapAll(SSLConnectionContext.java:227)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLBaseFilter.wrapAll(SSLBaseFilter.java:404)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLBaseFilter.handleWrite(SSLBaseFilter.java:319)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLFilter.accurateWrite(SSLFilter.java:255)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.ssl.SSLFilter.handleWrite(SSLFilter.java:143)
>>>>>>>>>>>>>>>>>>>>> at com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider$SwitchingSSLFilter.handleWrite(GrizzlyAsyncHttpProvider.java:2503)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.filterchain.ExecutorResolver$8.execute(ExecutorResolver.java:111)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:853)
>>>>>>>>>>>>>>>>>>>>> at org.glassfish.grizzly.filterchain.FilterChainContext.write(FilterChainContext.java:720)
>>>>>>>>>>>>>>>>>>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator.flushQueue(FeedableBodyGenerator.java:132)
>>>>>>>>>>>>>>>>>>>>> at com.ning.http.client.providers.grizzly.FeedableBodyGenerator.feed(FeedableBodyGenerator.java:101)
>>>>>>>>>>>>>>>>>>>>> at com.ning.http.client.providers.grizzly.MultipartBodyGeneratorFeeder$FeedBodyGeneratorOutputStream.write(MultipartBodyGeneratorFeeder.java:222)
>>>>>>>>>>>>>>>>>>>>> at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>>>>>>>>>>>>>>>>>>>>> at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
>>>>>>>>>>>>>>>>>>>>> at com.ning.http.multipart.FilePart.sendData(FilePart.java:179)
>>>>>>>>>>>>>>>>>>>>> at com.ning.http.multipart.Part.send(Part.java:331)
>>>>>>>>>>>>>>>>>>>>> at com.ning.http.multipart.Part.sendParts(Part.java:397)
>>>>>>>>>>>>>>>>>>>>> at com.ning.http.client.providers.grizzly.MultipartBodyGeneratorFeeder.feed(MultipartBodyGeneratorFeeder.java:144)
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Any idea?
>>>>>>>>>>>>>>>>>>>>>
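Given that the retention chain above ends at AsyncWriteQueueRecord, one mitigation is to bound Grizzly's async write queue, so a slow connection pushes back on the feeder instead of queueing buffers until OOM. A minimal sketch, assuming the setMaxPendingBytesPerConnection setter that Grizzly 2.x exposes on the transport's async queue writer:

    import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
    import org.glassfish.grizzly.nio.transport.TCPNIOTransportBuilder;

    public class BoundedWriteQueueConfig {
        public static TCPNIOTransport build() {
            TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance().build();
            // Cap the bytes that may sit unwritten per connection (2 MB here),
            // so writes are deferred instead of accumulating
            // AsyncWriteQueueRecords and their HeapByteBuffers.
            transport.getAsyncQueueIO().getWriter()
                     .setMaxPendingBytesPerConnection(2 * 1024 * 1024);
            return transport;
        }
    }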
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> 2013/8/27 Ryan Lubke <ryan.lubke_at_oracle.com
>>>>>>>>>>>>>>>>>>>>> <mailto:ryan.lubke_at_oracle.com>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Excellent! Looking forward to the pull request!
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Sébastien Lorber wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Ryan thanks, it works fine. I'll make a pull request on AHC
>>>>>>>>>>>>>>>>>>>>>> tomorrow with better code that uses the same Part classes
>>>>>>>>>>>>>>>>>>>>>> that already exist.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> I created an OutputStream that redirects to the
>>>>>>>>>>>>>>>>>>>>>> BodyGenerator feeder.
>>>>>>>>>>>>>>>>>>>>>>
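A minimal sketch of such an adapter (the class name and constructor are illustrative, not the actual pull-request code), assuming the FeedableBodyGenerator.feed(Buffer, boolean) signature visible in the stack trace above:

    import java.io.IOException;
    import java.io.OutputStream;

    import org.glassfish.grizzly.Buffer;
    import org.glassfish.grizzly.memory.MemoryManager;

    import com.ning.http.client.providers.grizzly.FeedableBodyGenerator;

    public class FeederOutputStream extends OutputStream {

        private final FeedableBodyGenerator feeder;
        private final MemoryManager<?> memoryManager;

        public FeederOutputStream(FeedableBodyGenerator feeder,
                                  MemoryManager<?> memoryManager) {
            this.feeder = feeder;
            this.memoryManager = memoryManager;
        }

        @Override
        public void write(int b) throws IOException {
            write(new byte[] { (byte) b }, 0, 1);
        }

        @Override
        public void write(byte[] b, int off, int len) throws IOException {
            // Copy the chunk into a Grizzly Buffer and feed it;
            // 'false' means more chunks will follow.
            Buffer buffer = memoryManager.allocate(len);
            buffer.put(b, off, len);
            buffer.flip();
            feeder.feed(buffer, false);
        }

        @Override
        public void close() throws IOException {
            // An empty, final buffer marks the end of the body.
            feeder.feed(memoryManager.allocate(0), true);
        }
    }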
>>>>>>>>>>>>>>>>>>>>>> The problem I currently have is that the feeder feeds the
>>>>>>>>>>>>>>>>>>>>>> queue faster than the async thread polling it :) I need to
>>>>>>>>>>>>>>>>>>>>>> expose a limit on that queue size or something; I'll work
>>>>>>>>>>>>>>>>>>>>>> on that. It will be better than a thread sleep to slow down
>>>>>>>>>>>>>>>>>>>>>> the file-part reading.
>>>>>>>>>>>>>>>>>>>>>>
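One way to impose that limit, sketched with plain JDK primitives (the class and method names are hypothetical): a semaphore bounds how many chunks may sit between the producing thread and the transport, replacing the thread-sleep workaround.

    import java.util.concurrent.Semaphore;

    public class ThrottledFeeder {

        private final Semaphore permits = new Semaphore(4); // at most 4 queued chunks

        /** Producer side: blocks while the queue is "full". */
        public void feedChunk(byte[] chunk) throws InterruptedException {
            permits.acquire();
            enqueue(chunk); // hypothetical: wrap in a Buffer and feed it
        }

        /** Transport side: call from the write-completion handler. */
        public void onChunkWritten() {
            permits.release();
        }

        private void enqueue(byte[] chunk) {
            // ... FeedableBodyGenerator.feed(...) would be invoked here ...
        }
    }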
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> 2013/8/27 Ryan Lubke <ryan.lubke_at_oracle.com
>>>>>>>>>>>>>>>>>>>>>> <mailto:ryan.lubke_at_oracle.com>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Yes, something like that. I was going to tackle adding
>>>>>>>>>>>>>>>>>>>>>> something like this today. I'll follow up with something
>>>>>>>>>>>>>>>>>>>>>> you can test out.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Sébastien Lorber wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Ok thanks!
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I think I see what I could do, probably something like that:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> FeedableBodyGenerator bodyGenerator = new FeedableBodyGenerator();
>>>>>>>>>>>>>>>>>>>>>>> MultipartBodyGeneratorFeeder bodyGeneratorFeeder =
>>>>>>>>>>>>>>>>>>>>>>>     new MultipartBodyGeneratorFeeder(bodyGenerator);
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Request uploadRequest1 = new RequestBuilder("POST")
>>>>>>>>>>>>>>>>>>>>>>>     .setUrl("url")
>>>>>>>>>>>>>>>>>>>>>>>     .setBody(bodyGenerator)
>>>>>>>>>>>>>>>>>>>>>>>     .build();
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> ListenableFuture<Response> asyncRes = asyncHttpClient
>>>>>>>>>>>>>>>>>>>>>>>     .prepareRequest(uploadRequest1)
>>>>>>>>>>>>>>>>>>>>>>>     .execute(new AsyncCompletionHandlerBase());
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> bodyGeneratorFeeder.append("param1", "value1");
>>>>>>>>>>>>>>>>>>>>>>> bodyGeneratorFeeder.append("param2", "value2");
>>>>>>>>>>>>>>>>>>>>>>> bodyGeneratorFeeder.append("fileToUpload", fileInputStream);
>>>>>>>>>>>>>>>>>>>>>>> bodyGeneratorFeeder.end();
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Response uploadResponse = asyncRes.get();
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Does it seem ok to you?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I guess it could be interesting to provide that
>>>>>>>>>>>>>>>>>>>>>>> MultipartBodyGeneratorFeeder class to AHC or Grizzly,
>>>>>>>>>>>>>>>>>>>>>>> since some other people may want to achieve the same thing.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> 2013/8/26 Ryan Lubke <ryan.lubke_at_oracle.com
>>>>>>>>>>>>>>>>>>>>>>> <mailto:ryan.lubke_at_oracle.com>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Sébastien Lorber wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Hello,
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I would like to know if it's possible to upload a file
>>>>>>>>>>>>>>>>>>>>>>> with AHC / Grizzly in streaming mode, I mean without
>>>>>>>>>>>>>>>>>>>>>>> loading the whole file's bytes in memory.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> The default behavior seems to be to allocate a byte[]
>>>>>>>>>>>>>>>>>>>>>>> containing the whole file, which means my server can hit
>>>>>>>>>>>>>>>>>>>>>>> OOM if too many users upload a large file at the same time.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I've tried the Heap and ByteBuffer memory managers, with
>>>>>>>>>>>>>>>>>>>>>>> reallocate=true/false, but with no more success.
>>>>>>>>>>>>>>>>>>>>>>>
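For reference, a minimal sketch of how those two memory managers get swapped in (assuming Grizzly's fluent TCPNIOTransportBuilder API). Note that the choice only changes how each buffer is allocated, not how many of them the async write queue retains, which is consistent with neither setting helping here:

    import org.glassfish.grizzly.memory.ByteBufferManager;
    import org.glassfish.grizzly.memory.HeapMemoryManager;
    import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
    import org.glassfish.grizzly.nio.transport.TCPNIOTransportBuilder;

    public class MemoryManagerConfig {

        // Heap-backed buffers.
        public static TCPNIOTransport heapBased() {
            return TCPNIOTransportBuilder.newInstance()
                    .setMemoryManager(new HeapMemoryManager())
                    .build();
        }

        // java.nio.ByteBuffer-backed buffers.
        public static TCPNIOTransport byteBufferBased() {
            return TCPNIOTransportBuilder.newInstance()
                    .setMemoryManager(new ByteBufferManager())
                    .build();
        }
    }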
>>>>>>>>>>>>>>>>>>>>>>> It seems the whole file content is appended to the
>>>>>>>>>>>>>>>>>>>>>>> BufferOutputStream, and then the underlying buffer is written.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> At least this seems to be the case with the AHC integration:
>>>>>>>>>>>>>>>>>>>>>>> https://github.com/AsyncHttpClient/async-http-client/blob/6faf1f316e5546110b0779a5a42fd9d03ba6bc15/providers/grizzly/src/main/java/org/asynchttpclient/providers/grizzly/bodyhandler/PartsBodyHandler.java
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> So, is there a way to patch AHC to stream the file, so
>>>>>>>>>>>>>>>>>>>>>>> that I could consume only around 20 MB of heap while
>>>>>>>>>>>>>>>>>>>>>>> uploading a 500 MB file? Or is this simply impossible with
>>>>>>>>>>>>>>>>>>>>>>> Grizzly? I didn't notice anything related to that in the
>>>>>>>>>>>>>>>>>>>>>>> documentation.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> It's possible with the FeedableBodyGenerator. But if
>>>>>>>>>>>>>>>>>>>>>>> you're tied to using Multipart uploads, you'd have to
>>>>>>>>>>>>>>>>>>>>>>> convert the multipart data to Buffers manually and send
>>>>>>>>>>>>>>>>>>>>>>> using the FeedableBodyGenerator. I'll take a closer look
>>>>>>>>>>>>>>>>>>>>>>> to see if this area can be improved.
>>>>>>>>>>>>>>>>>>>>>>>
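A minimal sketch of that manual conversion, again assuming the feed(Buffer, boolean) signature from the stack trace: read the source in fixed-size chunks and feed each one, so heap usage stays near the chunk size rather than the file size. Since feed() in this version does not block, it should be paired with a bound on the write queue or a permit scheme like the sketches above.

    import java.io.IOException;
    import java.io.InputStream;

    import org.glassfish.grizzly.Buffer;
    import org.glassfish.grizzly.memory.MemoryManager;

    import com.ning.http.client.providers.grizzly.FeedableBodyGenerator;

    public final class StreamingFeed {

        /** Feeds 'in' to the generator in 8 KB chunks instead of one big byte[]. */
        public static void copy(InputStream in,
                                FeedableBodyGenerator generator,
                                MemoryManager<?> memoryManager) throws IOException {
            byte[] chunk = new byte[8192];
            int read;
            while ((read = in.read(chunk)) != -1) {
                Buffer buffer = memoryManager.allocate(read);
                buffer.put(chunk, 0, read);
                buffer.flip();
                generator.feed(buffer, false);   // not the last chunk yet
            }
            generator.feed(memoryManager.allocate(0), true); // end of body
        }
    }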
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Btw, in my case it is a file upload. I receive a file
>>>>>>>>>>>>>>>>>>>>>>> with CXF and have to transmit it to a storage server
>>>>>>>>>>>>>>>>>>>>>>> (like S3). CXF doesn't consume memory because it streams
>>>>>>>>>>>>>>>>>>>>>>> large file uploads to the file system, and then provides
>>>>>>>>>>>>>>>>>>>>>>> an input stream over that file.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>>>>>>>>>>>