users@grizzly.java.net

Re: Thread safe method for setting the processor on a new client/server connection?

From: Oleksiy Stashok <oleksiy.stashok_at_oracle.com>
Date: Tue, 09 Oct 2012 17:21:01 +0200

Hi Matt,

Just to make sure: do you call

TCPNIOServerConnection serverConnection = transport.bind(address, backlog);

after you have started the actual transport? If so, please file an issue :))
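
If so, then until that is resolved, a workaround might be to bind and
set the processor before starting the transport, so that no connection
can be accepted while the default processor is still in place. A rough
sketch (untested, reusing the server-side chain from your snippet below):

             FilterChain filterChain = FilterChainBuilder.stateless()
                     .add(new TransportFilter())
                     .add(new LDAPServerFilter(/* ... */))
                     .build();

             // Bind while the transport is still stopped...
             TCPNIOServerConnection serverConnection =
                     transport.bind(address, backlog);
             serverConnection.setProcessor(filterChain);

             // ...and only start it afterwards: accepting begins with
             // the processor already set.
             transport.start();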

Regarding the client connections, you can set the FilterChain on the
TCPNIOConnectorHandler like this:

             SocketConnectorHandler connectorHandler =
                     TCPNIOConnectorHandler.builder(transport)
                             .processor(clientFilterChain)
                             .build();

             Future<Connection> future =
                     connectorHandler.connect("localhost", PORT);
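
Because the FilterChain is handed to the builder, it should already be
installed on the connection by the time the Future completes, so the
client-side visibility concern shouldn't apply here. Then, for example
(arbitrary timeout):

             Connection connection = future.get(10, TimeUnit.SECONDS);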

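Regarding the dynamic StartTLS case you mention: one option (an
untested sketch; serverConfig and clientConfig stand for your
SSLEngineConfigurator instances) is to build a second chain containing
an SSLFilter and set it on the individual connection once the StartTLS
request has been handled:

             FilterChain tlsChain = FilterChainBuilder.stateless()
                     .add(new TransportFilter())
                     .add(new SSLFilter(serverConfig, clientConfig))
                     .add(new LDAPServerFilter(/* ... */))
                     .build();

             // Swap the chain on this connection only; other
             // connections keep the plain LDAP chain.
             connection.setProcessor(tlsChain);

The visibility caveat from your point (1) would apply to this swap as
well.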

Thanks.

WBR,
Alexey.

On 10/09/2012 04:55 PM, Matthew Swift wrote:
> Hi there,
>
> Now a "real" IO question and not a question about boring old HTTP
> servlets! :-)
>
> In our Grizzly based LDAP library I want to give the application
> developer the choice as to whether they use a default "global"
> TCPNIOTransport or provide their own custom transport. In order to
> support this I need to be able to install my own filter chain
> (processor) rather than use the one provided by the TCPNIOTransport
> (in fact, I was hoping to use the same transport for multiple protocols;
> we already use it for LDAP, LDAP+SSL, and LDAP+SASL). Another
> thing worth noting is that client-server connections need to have
> their own filter chain since the filter chain may change over time on
> a per-connection basis (e.g. if SSL is installed dynamically using
> StartTLS, or SASL confidentiality/integrity layers are installed post
> authentication).
>
> When creating server connections we do this:
>
> FilterChain filterChain = FilterChainBuilder.stateless()
>         .add(new TransportFilter())
>         .add(new LDAPServerFilter(...))
>         .build();
> TCPNIOServerConnection serverConnection =
>         transport.bind(address, backlog);
> serverConnection.setProcessor(filterChain);
>
> Unfortunately, our unit tests sometimes hang because, I think[1],
> there is a race condition in the above code. Specifically, there are
> two problems with our use of setProcessor():
>
> 1. the processor is not synchronized (or volatile) and therefore may
> not be published to the selector thread when incoming connections
> are accepted
> 2. even if it were synchronized in some way, there is still a race
> condition where an incoming connection may be accepted before
> setProcessor() is called.
>
> I think we have a similar but less severe issue on the client side,
> where we set the processor in the connection completion handler (I
> think the only risk there is (1) above).
>
> Do you have any advice? Am I abusing Grizzly or missing an obvious API
> somewhere? Basically, I think that I need to be able to atomically set
> the processor during the bind, rather like
> TCPNIOTransport.obtainServerNIOConnection(ServerSocketChannel) does
> today.
>
> Thanks in advance for any help,
>
> Matt
> [1] I say "think" because the unit tests only fail in our Jenkins
> environment and all attempts to add debugging result in timing changes
> and/or additional memory barriers which cause the tests to pass! :-(
>