Hmm. Seems non-trivial; even with this action I'd assume you'd need to
ensure:
- you only fork after the request has been fully read
- you stop forking once the response is ready, so the forked read doesn't
consume the next request.
Is this roughly what I'd need to do? Is it simpler than I'm imagining?
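Something like this is the shape I have in mind (just a sketch of my
understanding, assuming a filter placed after HttpServerFilter; FinAwareFilter
and processRequest are made-up names):

import java.io.IOException;

import org.glassfish.grizzly.filterchain.BaseFilter;
import org.glassfish.grizzly.filterchain.FilterChainContext;
import org.glassfish.grizzly.filterchain.NextAction;
import org.glassfish.grizzly.http.HttpContent;

// Sketch only: a filter placed after HttpServerFilter in the chain.
class FinAwareFilter extends BaseFilter {
    @Override
    public NextAction handleRead(FilterChainContext ctx) throws IOException {
        final HttpContent content = ctx.getMessage();
        if (!content.isLast()) {
            // request not fully read yet: keep normal processing, no fork
            return ctx.getInvokeAction();
        }
        // whole request read: hand it off, then fork so a further READ
        // (hopefully only the client's FIN) is still delivered while the
        // request is being processed
        processRequest(ctx, content);   // made up: kicks off the long-running work
        return ctx.getForkAction();
        // still unclear: how to switch this off again before the response is
        // written, so a forked read can't swallow the next request
    }

    private void processRequest(FilterChainContext ctx, HttpContent request) {
        // placeholder for the actual long-running processing
    }
}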
TBH, I'm not sure doing this is really justified, given that a situation with
lots of sockets in CLOSE_WAIT should be avoidable by keeping timeouts aligned
and not having things like a 10s client timeout against a 60s proxy timeout.
Let me know if you have any thoughts on this though.
Thanks,
Dan
On 8 Mar 2017 9:17 pm, "Ryan Lubke" <ryan.lubke_at_oracle.com> wrote:
> I left out the detail on suspending and using the non-blocking stream API.
>
> In this case, to signal we want to receive events, we call
> FilterChainContext.fork().
>
> Here is an excerpt about fork() from the docs:
>
> *ForkAction (was SuspendStopAction)*
>
> return ctx.getForkAction();
>
> This NextAction is very similar to SuspendAction
> <https://grizzly.java.net/filterchainfilters.html#suspend-action>, except
> for one important thing. After getting ForkAction, Grizzly will keep
> listening for the same I/O events on the Connection and notify FilterChain
> if they occur.
>
> Special care should be taken with this NextAction to ensure that two or
> more threads are not processing the same I/O operation simultaneously.
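>
> To illustrate the "special care" point, a minimal sketch (GuardedForkFilter
> and the flag handling are placeholders, not part of Grizzly; in practice the
> flag would live in a per-Connection attribute):
>
> import java.io.IOException;
> import java.util.concurrent.atomic.AtomicBoolean;
>
> import org.glassfish.grizzly.filterchain.BaseFilter;
> import org.glassfish.grizzly.filterchain.FilterChainContext;
> import org.glassfish.grizzly.filterchain.NextAction;
>
> class GuardedForkFilter extends BaseFilter {
>     // simplistic per-filter flag; a real version would key it per Connection
>     private final AtomicBoolean processing = new AtomicBoolean();
>
>     @Override
>     public NextAction handleRead(FilterChainContext ctx) throws IOException {
>         if (!processing.compareAndSet(false, true)) {
>             // another thread is already handling this I/O operation; back off
>             return ctx.getStopAction();
>         }
>         // ... start the long-running work; the worker clears the flag when done ...
>         return ctx.getForkAction();
>     }
> }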
>
> Daniel Feist <dfeist_at_gmail.com>
> March 6, 2017 at 11:34
> Hi,
>
> That was my understanding as well: the only way to deal with this
> without going really low-level is to handle the "FIN" by performing a
> READ.
>
> I wasn't sure if there is any way to allow that READ to happen while the
> request is still being processed, though?
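>
> Just to spell out what I mean by handling the FIN with a READ, at the
> plain-socket level it boils down to this (illustration only, nothing
> Grizzly-specific; FinProbe is made up):
>
> import java.io.InputStream;
> import java.net.Socket;
> import java.net.SocketTimeoutException;
>
> class FinProbe {
>     // Returns true if the peer has already sent its FIN. The JDK only exposes
>     // this via a read that hits EOF (-1); if the peer has sent unread data
>     // instead, this consumes a byte, which is exactly why a true "peek" isn't
>     // possible at this level.
>     static boolean peerClosed(Socket socket) throws Exception {
>         socket.setSoTimeout(1);        // don't block if the connection is just idle
>         InputStream in = socket.getInputStream();
>         try {
>             return in.read() == -1;    // -1 means EOF, i.e. FIN received
>         } catch (SocketTimeoutException stillOpen) {
>             return false;              // no data and no FIN yet
>         }
>     }
> }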
>
> I'm not suspending requests, no. I'm also not using http-server, just the
> HTTP framework.
>
> - How would I suspend/resume without http-server? (rough sketch of my guess
> below)
> - Anyhow, I'm not sure what impact suspend/resume would have on the point at
> which READ interest is enabled again? It seems that it isn't re-registered
> when the thread is returned or the request is suspended, but rather when the
> request is complete, via
> org.glassfish.grizzly.strategies.AbstractIOStrategy.EnableInterestLifeCycleListener#onComplete
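>
> For the first point, is it something like this at the FilterChain level?
> (Just my guess from the SuspendAction docs; SuspendingFilter and the worker
> pool are made up.)
>
> import java.io.IOException;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
>
> import org.glassfish.grizzly.filterchain.BaseFilter;
> import org.glassfish.grizzly.filterchain.FilterChainContext;
> import org.glassfish.grizzly.filterchain.NextAction;
>
> class SuspendingFilter extends BaseFilter {
>     private final ExecutorService workers = Executors.newCachedThreadPool();
>
>     @Override
>     public NextAction handleRead(FilterChainContext ctx) throws IOException {
>         ctx.suspend();                          // detach from the service thread
>         workers.execute(() -> {
>             // ... long-running processing, write the response via ctx ...
>             ctx.resume(ctx.getStopAction());    // hand the context back to Grizzly
>         });
>         return ctx.getSuspendAction();
>     }
> }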
>
> Dan
> Ryan Lubke <ryan.lubke_at_oracle.com>
> March 6, 2017 at 10:50
> Hi Daniel,
>
> From the JVM level, there doesn't appear to be any other way to detect
> this condition outside of trying to read/write from the socket. There are
> some low-level actions you can take to peek at the socket without actually
> reading any data, but these aren't exposed at our level.
>
> These long-running HTTP requests: do you suspend them?
>
> Daniel Feist <dfeist_at_gmail.com>
> March 3, 2017 at 10:32
> Hi,
>
> Any thoughts on how it might be possible with Grizzly to close the server
> socket as soon as the client prematurely closes by sending a "FIN", rather
> than only closing it after processing of the current READ event is
> complete?
>
> The issue I have is that in some use cases processing HTTP requests
> may take a long time, and clients may time out before the request is
> complete. In this scenario a socket is left in CLOSE_WAIT for each
> premature client close until the request completes fully, consuming
> file descriptors unnecessarily during this time.
>
> I've looked at the code, and with both the SameThread and Worker
> IOStrategies the fact that future reads are delayed until the current read
> is complete seems to be by design; it makes sense apart from in this case.
>
> The HTTP spec says:
>
> "8.1.4 When a client or server wishes to time-out it SHOULD issue a
> graceful close on the transport connection. Clients and servers SHOULD
> both constantly watch for the other side of the transport close, and
> respond to it as appropriate. If a client or server does not detect
> the other side's close promptly it could cause unnecessary resource
> drain on the network."
>
> In a way, Grizzly isn't constantly watching for this situation,
> because it will only respond to client disconnection once any current
> processing is complete.
>
> Any thoughts on how to monitor and react to this more continuously, to
> free up sockets more quickly in this situation? Or do you think the
> current behavior is sufficient?
>
> thanks,
> Dan
>