users@grizzly.java.net

Re: Associating executor with client request/connection

From: Oleksiy Stashok <oleksiy.stashok_at_oracle.com>
Date: Thu, 09 Apr 2015 09:55:04 -0700

Hi Dan,

just thought of another possibility: what if you add a dumb header to
each request you send, say:
"X-ahc-flow" carrying the flow name or id, and the filter we were talking
about extracts that header from the Grizzly HttpRequestPacket and
assigns the flow name to the connection.
Your strategy would then be able to pick it up from the connection and
dispatch processing accordingly.
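
Something like this, as a rough, untested sketch (FlowHeaderFilter and the
"flow-name" Attribute are made-up names, and the exact message type you see in
handleWrite() depends on where the filter ends up in the chain):

import java.io.IOException;

import org.glassfish.grizzly.Grizzly;
import org.glassfish.grizzly.attributes.Attribute;
import org.glassfish.grizzly.filterchain.BaseFilter;
import org.glassfish.grizzly.filterchain.FilterChainContext;
import org.glassfish.grizzly.filterchain.NextAction;
import org.glassfish.grizzly.http.HttpContent;
import org.glassfish.grizzly.http.HttpHeader;
import org.glassfish.grizzly.http.HttpRequestPacket;

public class FlowHeaderFilter extends BaseFilter {

    // connection Attribute the IOStrategy can later read the flow name from
    public static final Attribute<String> FLOW_NAME =
            Grizzly.DEFAULT_ATTRIBUTE_BUILDER.createAttribute("flow-name");

    @Override
    public NextAction handleWrite(final FilterChainContext ctx) throws IOException {
        final Object message = ctx.getMessage();

        HttpRequestPacket request = null;
        if (message instanceof HttpRequestPacket) {
            request = (HttpRequestPacket) message;
        } else if (message instanceof HttpContent) {
            final HttpHeader header = ((HttpContent) message).getHttpHeader();
            if (header instanceof HttpRequestPacket) {
                request = (HttpRequestPacket) header;
            }
        }

        if (request != null) {
            final String flowName = request.getHeader("X-ahc-flow");
            if (flowName != null) {
                // remember the flow on the connection for later dispatch
                FLOW_NAME.set(ctx.getConnection(), flowName);
            }
        }
        return ctx.getInvokeAction();
    }
}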

WDYT?

Thanks.

WBR,
Alexey.

On 09.04.15 06:34, Daniel Feist wrote:
> Hi,
>
> Agreed, a worker thread may not be needed, especially if the flow in
> question simply implements an HTTP proxy. But because we allow users
> to combine HTTP with arbitrary (potentially blocking) operations, we
> must use a worker thread pool by default. Given this, it makes more sense
> to use an executor per flow for tuning/prioritization, and even just so
> threads carry a more specific name, than to share a single
> worker pool across potentially tens of Flows or more. That said, I plan
> to let users configure a SynchronousExecutor so that they can
> choose to avoid the worker thread in scenarios where it isn't
> needed and it would be more performant not to use one.
>
> I considered the filter approach, but that is set up once per
> HttpAsyncClient instance, and it makes no sense to use a different
> HttpAsyncClient for each Flow; it's better to share the client and pass
> the executor via some other mechanism.
>
> Dan
>
>
> On Thu, Apr 9, 2015 at 1:37 AM, Oleksiy Stashok
> <oleksiy.stashok_at_oracle.com> wrote:
>> Hi,
>>
>> SameThreadIOStrategy, which AFAIR we use in AHC by default, is pretty
>> effective assuming that request/response processing doesn't involve blocking
>> operations.
>> If I understand correctly, the approach you're suggesting adds an extra thread
>> context switch (NIO thread -> worker thread), which IMO could be redundant;
>> again, this is true only if request/response processing is entirely
>> non-blocking. On the other hand, I understand your idea to isolate different
>> flows and off-load the NIO threads. I suspect overall throughput will
>> decrease as a result, but it may help you prioritize flows, which
>> could be more important for you.
>>
>> I agree, the AsyncHandler approach is not elegant at all; you may want to take a
>> look at the Grizzly TransportCustomizer [1]. You can try to insert a Grizzly
>> Filter, which can intercept the initial handleWrite() made by the
>> GrizzlyAsyncHttpProvider and assign a flow to the connection based on the
>> Request information.
>>
>> Thanks.
>>
>> WBR,
>> Alexey.
>>
>>
>> [1]
>> clientConfig.setAsyncHttpClientProviderConfig(getProviderConfig());
>>
>> @Override
>> protected AsyncHttpProviderConfig<?, ?> getProviderConfig() {
>>     final GrizzlyAsyncHttpProviderConfig config =
>>             new GrizzlyAsyncHttpProviderConfig();
>>     config.addProperty(TRANSPORT_CUSTOMIZER, new TransportCustomizer() {
>>         @Override
>>         public void customize(TCPNIOTransport transport,
>>                               FilterChainBuilder builder) {
>>             // insert a Filter
>>             builder.add(new MyFilter());
>>         }
>>     });
>>     return config;
>> }
>>
>>
>>
>> On 08.04.15 16:48, Daniel Feist wrote:
>>
>> Hi,
>>
>> It looks like that may work..
>>
>> I'd need to extend AsyncHandler and add a getWorkerExecutor() method to it.
>> Then, in a custom 'FlowWorkerThreadPoolIOStrategy', grab the
>> HttpTransactionContext associated with the connection, downcast the
>> AsyncHandler to obtain the Executor instance, and use that instead
>> of the Grizzly transport's single shared worker thread pool.
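>>
>> Roughly what I have in mind on the handler side (just a sketch with made-up
>> names, assuming the com.ning 1.x packages; the custom strategy would downcast
>> whatever AsyncHandler it finds on the HttpTransactionContext to this type and
>> call getWorkerExecutor()):
>>
>> import java.util.concurrent.Executor;
>>
>> import com.ning.http.client.AsyncHandler;
>>
>> // Hypothetical: the handler each flow passes in, extended with the flow's Executor.
>> public interface FlowAsyncHandler<T> extends AsyncHandler<T> {
>>
>>     // the custom IOStrategy uses the returned Executor instead of the
>>     // transport's shared worker pool
>>     Executor getWorkerExecutor();
>> }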
>>
>> I'm not trying to change anything significant, or have AHC work any
>> differently than normal, just to be able to specify the worker Executor that
>> will be used to process responses. Why do I want to do this? Well, it's not
>> strictly required, but doing this saves the context switch of parsing
>> the request in the selector thread and then handing over to another thread
>> later on, when there is more state to carry across.
>>
>> The AsyncHandler approach doesn't seem the most elegant, but short of
>> lots of signature changes I'm not sure of a better way to pass this
>> through.
>>
>> Make sense?
>>
>> Dan
>>
>> On Wed, Apr 8, 2015 at 11:47 PM, Oleksiy Stashok
>> <oleksiy.stashok_at_oracle.com> wrote:
>>> Hi Dan,
>>>
>>> maybe it's a naive suggestion, but did you think about having an AHC AsyncHandler
>>> per flow? That way each AsyncHandler is aware of its flow and knows how to dispatch
>>> the response.
>>> If that's not what you're looking for - it would be great to have more context on
>>> what you're trying to achieve.
>>>
>>> Thanks.
>>>
>>> WBR,
>>> Alexey.
>>>
>>>
>>> On 07.04.15 18:13, Daniel Feist wrote:
>>>> Hi,
>>>>
>>>> Looking for ideas on how I might do this:
>>>>
>>>> - I have a single shared Grizzly/AHC 'client' instance.
>>>> - This client can be used by multiple "flows", as it doesn't make sense to
>>>> have multiple instances of AHC, each with its own set of selectors.
>>>> - Each "flow" has its own context, worker thread pool, error handling,
>>>> etc.
>>>>
>>>> Because further processing may occur in a given "flow" after the callback
>>>> triggered by an HTTP response being received, I want to be able to use the
>>>> "flow's" thread pool, and not i) SameThreadIOStrategy or ii) a
>>>> WorkerThreadIOStrategy with a shared worker thread pool.
>>>>
>>>> I'm happy to create my own IOStrategy, that's a must, and I assume it's
>>>> easy enough to implement
>>>> org.glassfish.grizzly.strategies.AbstractIOStrategy#getThreadPoolFor() and
>>>> pull the Flow/Executor instance from the connection context via an
>>>> "Attribute", along the lines of the sketch below.
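>>>>
>>>> As a rough, untested sketch of the Attribute plumbing I have in mind
>>>> (FlowExecutorSupport, FLOW_EXECUTOR and the "flow-executor" name are made up;
>>>> my IOStrategy's getThreadPoolFor() would just delegate to a helper like this):
>>>>
>>>> import java.util.concurrent.Executor;
>>>>
>>>> import org.glassfish.grizzly.Connection;
>>>> import org.glassfish.grizzly.Grizzly;
>>>> import org.glassfish.grizzly.attributes.Attribute;
>>>>
>>>> public final class FlowExecutorSupport {
>>>>
>>>>     // connection Attribute carrying the flow's Executor, set when the request is sent
>>>>     public static final Attribute<Executor> FLOW_EXECUTOR =
>>>>             Grizzly.DEFAULT_ATTRIBUTE_BUILDER.createAttribute("flow-executor");
>>>>
>>>>     // what the custom strategy's getThreadPoolFor() would effectively do:
>>>>     // prefer the flow's Executor, fall back to the transport's shared worker pool
>>>>     public static Executor threadPoolFor(final Connection connection) {
>>>>         final Executor flowExecutor = FLOW_EXECUTOR.get(connection);
>>>>         return flowExecutor != null
>>>>                 ? flowExecutor
>>>>                 : connection.getTransport().getWorkerThreadPool();
>>>>     }
>>>> }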
>>>>
>>>> But where I'm stuck, and could do with some ideas, is the best way to
>>>> associate a "flow" or "Executor" instance with a given request/connection in
>>>> the first place, so that I have it available in the connection context when
>>>> the response comes back in. Ideally I'd be able to do this without hacking
>>>> the AHC GrizzlyAsyncHttpProvider, but if that's not possible then I might need to.
>>>>
>>>> thanks!
>>>>
>>>> Dan
>>>
>>