On 12/07/2011 10:29 PM, Bill Burke wrote:
>
>
> On 12/7/11 5:26 AM, Marek Potociar wrote:
>> Now I am not saying that we should address such scenarios right away or that we should expose our users to any complex
>> implementation details around fork-join or any other execution strategy. But this is just to demonstrate what we may
>> expect to deal with in the future, esp. if we, as implementors, want to leverage parallel/async execution to gain
>> performance boosts. That should help us produce a design flexible enough to support these types of scenarios. If all
>> that we have to do right now is to convert an existing enum into an opaque interface and add a bunch of action methods
>> to the FilterContext, then I wonder: why don't we just do it?
>>
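
Just to make that last point concrete, here is roughly the shape I have in mind. The method names below are
illustrative only, not a concrete API proposal:

    import javax.ws.rs.core.Response;

    // Illustrative sketch: the filter asks the context to perform the next action
    // instead of returning an enum constant from the filter method.
    public interface FilterContext {
        // ...existing request/response accessors stay as they are...

        // Action methods replacing the current enum-based return value:
        void proceed();                     // hand off to the next filter in the chain
        void abortWith(Response response);  // stop the chain and return this response
        void suspend();                     // detach processing from the calling thread
    }
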
>
> Running filters in parallel for the same request seems pretty crazy to me. In fact, it's kinda silly. The context
> switching/joining alone would kill any performance gains you made. Plus, any modicum of request concurrency will
> already max out the cores of your CPU(s). All this crazy I/O you're talking about would rarely (if ever) happen in a
> filter. It would happen in application code.
>
Are you suggesting that the caching, logging, authentication, etc. that IMHO typically happen in a filter do not
involve any I/O? Also, your assumption that even a small amount of request concurrency necessarily maxes out the CPU
is simply false. With blocking I/O and synchronous processing I can easily craft a DoS attack that occupies all the
available threads while the CPU sits idle most of the time.
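
To illustrate, here is a toy blocking server (nothing JAX-RS specific, everything below is made up for the example)
with a small worker pool. A handful of clients that connect and then send nothing will pin every worker thread in a
blocking read, while the CPU stays close to 0%:

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BlockingPoolDemo {
        public static void main(String[] args) throws IOException {
            // All request processing happens on these four threads.
            ExecutorService workers = Executors.newFixedThreadPool(4);
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket client = server.accept();
                    workers.submit(() -> {
                        try (Socket s = client) {
                            // Blocks until the client sends a byte or disconnects.
                            // Four idle connections exhaust the pool; every further
                            // request just waits, yet the CPU is doing nothing.
                            s.getInputStream().read();
                            s.getOutputStream().write("HTTP/1.1 200 OK\r\n\r\n".getBytes());
                        } catch (IOException ignored) {
                        }
                    });
                }
            }
        }
    }
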
Marek