Hi Bongjae,
as I understand it, Marc is more concerned about the timeout that occurs
during request processing (when a single request takes too long to be
processed), so IMO it's not directly related to the HTTP keep-alive timeout.
> I doubt if "option 2)" is possible because I think the ping message from
> the backend would affect the original response. Are there any tricks? :)
Well, synchronization is the only trick I can think of :) All the threads
that want to send data to the same connection have to synchronize their
"send" logic.
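For example, a rough sketch of what I mean (the helper class, the lock and the
payloads are purely illustrative; it assumes the Grizzly 2.x
Response/OutputStream API):

import java.io.IOException;
import java.io.OutputStream;

import org.glassfish.grizzly.http.server.Response;

// Hypothetical helper, just to illustrate the idea: every write to the
// suspended Response goes through the same lock, so the periodic "ping"
// and the real data never interleave on the wire.
public class SynchronizedSender {
    private final Object lock = new Object();
    private final Response response;

    public SynchronizedSender(Response response) {
        this.response = response;
    }

    // called periodically by the "ping" thread
    public void sendPing(byte[] ping) throws IOException {
        send(ping);
    }

    // called by the request-processing thread when the real data is ready
    public void sendData(byte[] data) throws IOException {
        send(data);
    }

    private void send(byte[] bytes) throws IOException {
        synchronized (lock) {
            OutputStream out = response.getOutputStream();
            out.write(bytes);
            out.flush(); // push to the wire so Apache sees activity
        }
    }
}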
> Separately, I would like to share a tip about using Apache + AJP13.
>
> When Grizzly or any backend container closes the AJP connection because of
> an idle timeout, you will always see CLOSE_WAIT in Apache.
>
> Grizzly's default idle timeout is 30 seconds, so setting the idle timeout
> to -1 with KeepAlive#setIdleTimeoutInSeconds(-1) would be helpful.
>
> In particular, if you shut down the backend server immediately, you will
> also see CLOSE_WAITs in Apache.
> Connections will remain in the CLOSE_WAIT state until Apache gets a chance
> to reuse the invalid connections and close them.
>
> To resolve the problem, I didn't use mod_proxy but mod_jk with interval
> ping mode (mod_proxy doesn't have an interval ping mode yet).
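>
> A minimal workers.properties sketch of what I mean (the worker name and the
> values are only examples here, not my actual configuration):
>
> # workers.properties
> worker.list=backend1
> worker.backend1.type=ajp13
> worker.backend1.host=localhost
> worker.backend1.port=8009
> # "I" = interval ping: probe connections periodically (mod_jk >= 1.2.27)
> worker.backend1.ping_mode=I
> # seconds between pings on idle connections
> worker.backend1.connection_ping_interval=10
> # CPong reply timeout in milliseconds
> worker.backend1.ping_timeout=10000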
IMO Grizzly is not an issue here; by setting the request-timeout to -1 we
disable the close logic for active connections, so the only side that may
want to close an "idle" connection is Apache. The question is how to make
Apache think the connection is still active.
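(Just for reference, a rough sketch of disabling the keep-alive idle timeout
Bongjae mentioned, using the standalone Grizzly 2.x API; the document root and
port are only examples:)

import java.io.IOException;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;

public class NoIdleTimeoutServer {
    public static void main(String[] args) throws IOException, InterruptedException {
        // simple server on port 8080 serving the current directory
        HttpServer server = HttpServer.createSimpleServer(".", 8080);

        // -1 disables the keep-alive idle timeout, so Grizzly itself never
        // closes an idle connection; Apache is then the only side left
        // that may want to close it
        for (NetworkListener listener : server.getListeners()) {
            listener.getKeepAlive().setIdleTimeoutInSeconds(-1);
        }

        server.start();
        Thread.currentThread().join(); // keep the server running
    }
}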
Thanks.
WBR,
Alexey.
>
> Thanks.
>
> Regards,
> Bongjae Chang
>
>
>
>
> On 6/28/12 9:42 PM, "Arens, Marc" <marc.arens_at_open-xchange.com> wrote:
>
>> Hello Oleksiy,
>>
>> thanks for your input. I'll see how I'll proceed with implementing it.
>>
>>
>>
>> Oleksiy Stashok <oleksiy.stashok_at_oracle.com> wrote on 26 June 2012 at
>> 10:36:
>>
>>> forgot to include mailing list...
>>>
>>>
>>> -------- Original Message --------
>>> Subject: Re: http timeout/keep-alive behind balancer
>>> Date: Tue, 26 Jun 2012 09:54:21 +0200
>>> From: Oleksiy Stashok <oleksiy.stashok_at_oracle.com>
>>> To: Arens, Marc <marc.arens_at_open-xchange.com>
>>>
>>>
>>>
>>> Hi Marc,
>>>
>>> thank you for the explanation, I think I got it :)
>>> I see 2 options here:
>>>
>>> 1) Configure different proxies on the Apache side, one with ProxyTimeout
>>> set to 100s for regular HTTP connections, another without ProxyTimeout for
>>> long-polling connections... and depending on the URL you can choose the
>>> proper proxy (with the proper timeout settings) to forward your request.
>>> I'm not a big Apache expert and most probably you have already considered
>>> this option.
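>>>
>>> Something like this, maybe (the paths and the backend address are made up,
>>> and the more specific ProxyPass entry has to come before the generic one):
>>>
>>> # long-polling endpoint: allow a much longer timeout
>>> ProxyPass        /app/longpoll http://backend:8080/app/longpoll timeout=3600
>>> ProxyPassReverse /app/longpoll http://backend:8080/app/longpoll
>>>
>>> # everything else: give up after 100 seconds
>>> ProxyPass        /app http://backend:8080/app timeout=100
>>> ProxyPassReverse /app http://backend:8080/app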
>>>
>>> 2) Send "ping" messages Grizzly -> Apache ->(?) Client.
>>> The Grizzly Response object (and its OutputStream) is not thread-safe, but
>>> if you synchronize access, it should work. So
>>>
>>> Thread #1                       Thread #2
>>>     |                               |
>>> begin synchronized(A)           begin synchronized(A)
>>>     |                               |
>>> outputStream.write(...)         outputStream.write(...)
>>>     |                               |
>>> outputStream.flush()            outputStream.flush()
>>>     |                               |
>>> end synchronized(A)             end synchronized(A)
>>>
>>> should work.
>>>
>>> Regarding your question whether it's possible to send such a "ping"
>>> message so that it reaches Apache but not the client... I'm not sure;
>>> maybe it's possible to achieve this behavior with some Apache
>>> configuration tricks, otherwise IMO the "ping" message will always reach
>>> the client.
>>>
>>> Thanks.
>>>
>>> WBR,
>>> Alexey.
>>>
>> Best regards
>> --
>> Marc Arens
>> Backend Development
>> Open-Xchange AG
>>
>> Phone: +49 2761 8385-0, Fax: +49 2761 838530
>>
>