Hi Grizzly devs,
While implementing an example LDAP proxy application using our Grizzly
2.0 based LDAP API, I ran into some surprising behavior.
Basically, my dumb proxy example listens for incoming client
connections and, on accept, creates an associated connection between
the proxy and the remote server. I am creating the connection
synchronously from within the Filter.handleAccept(FilterChainContext)
method. Unfortunately, my example self-deadlocks: the handleAccept
method is being invoked by a SelectorRunner, so the Future that is
returned from the connection attempt can never complete, because the
selector needs to run in order to complete it (and on my machine
there is only one SelectorRunner by default).
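For reference, here is roughly what the offending filter looks like
(heavily trimmed; the class name, host, and port are placeholders
from my example):

    import java.io.IOException;
    import java.util.concurrent.Future;

    import org.glassfish.grizzly.Connection;
    import org.glassfish.grizzly.filterchain.BaseFilter;
    import org.glassfish.grizzly.filterchain.FilterChainContext;
    import org.glassfish.grizzly.filterchain.NextAction;
    import org.glassfish.grizzly.nio.transport.TCPNIOTransport;

    public class ProxyFilter extends BaseFilter {
        // Shared client transport used for outbound connections.
        private final TCPNIOTransport transport;

        public ProxyFilter(final TCPNIOTransport transport) {
            this.transport = transport;
        }

        @Override
        public NextAction handleAccept(final FilterChainContext ctx)
                throws IOException {
            final Future<Connection> future =
                    transport.connect("ldap.example.com", 1389);
            try {
                // Deadlock: we are running on the SelectorRunner
                // thread, but the connect can only complete when that
                // same selector runs again.
                final Connection serverConnection = future.get();
                // ... associate serverConnection with ctx.getConnection() ...
            } catch (final Exception e) {
                throw new IOException(e);
            }
            return ctx.getInvokeAction();
        }
    }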
I'm surprised by this because if I change the implementation a bit,
so that connections are lazily created during a subsequent
Filter.handleRead(FilterChainContext) invocation, then everything
works OK, simply because there are more worker threads than
selectors.
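The lazy variant is something like the following method added to the
same filter (simplified; getCachedServerConnection and
cacheServerConnection are hypothetical helpers standing in for my
real connection bookkeeping):

    @Override
    public NextAction handleRead(final FilterChainContext ctx)
            throws IOException {
        // Hypothetical lookup of a previously created connection.
        Connection serverConnection = getCachedServerConnection(ctx);
        if (serverConnection == null) {
            try {
                // We are now on a worker thread rather than the
                // SelectorRunner, so the selector is free to complete
                // the connect - as long as worker threads remain.
                serverConnection =
                        transport.connect("ldap.example.com", 1389).get();
                cacheServerConnection(ctx, serverConnection);
            } catch (final Exception e) {
                throw new IOException(e);
            }
        }
        // Forward the client's request to the remote server.
        serverConnection.write(ctx.getMessage());
        return ctx.getStopAction();
    }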
Of course, this approach is still flawed and a self-deadlock is
still possible: e.g. if there are N Grizzly worker threads and each
of them blocks on a synchronous IO operation, then there will be no
worker threads left to service the IO events and complete the
Futures. The correct approach, I assume (correct me if I am wrong),
is to hand off work like this to a separate application-owned thread
pool and leave the Grizzly selectors/workers to service just the IO
events.
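In other words, something along these lines (a sketch only:
appThreadPool is an application-owned java.util.concurrent executor,
and the error handling and connection bookkeeping are elided):

    // Application-owned pool, separate from Grizzly's worker threads.
    private final ExecutorService appThreadPool =
            Executors.newCachedThreadPool();

    @Override
    public NextAction handleAccept(final FilterChainContext ctx)
            throws IOException {
        final Connection clientConnection = ctx.getConnection();
        appThreadPool.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    // Safe to block here: this thread is neither a
                    // SelectorRunner nor a Grizzly worker thread.
                    final Connection serverConnection =
                            transport.connect("ldap.example.com", 1389).get();
                    // ... associate serverConnection with clientConnection ...
                } catch (final Exception e) {
                    clientConnection.close();
                }
            }
        });
        return ctx.getInvokeAction();
    }

(Or, avoiding blocking altogether, pass a CompletionHandler<Connection>
to connect() rather than waiting on the Future.)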
However, I do find this behavior surprising: I imagined that all
handleXXX methods would be processed by Grizzly worker threads. Is
this expected?
Cheers,
Matt