dev@grizzly.java.net

Grizzly 2.0: Echo sample

From: Oleksiy Stashok <Oleksiy.Stashok@Sun.COM>
Date: Tue, 02 Sep 2008 17:00:11 +0200

Hi,

I wanted to share a simple Grizzly 2.0 sample (an echo sample) and
would very much appreciate feedback, or better yet contributions :),
on the proposed design.
The complete sources are attached, but to help you orient yourself
I'll briefly describe what is what :)

Let's start from the server.
1)
         TCPNIOTransport transport =
                 TransportManager.instance().createTCPTransport();

         // Add TransportFilter, which is responsible
         // for reading and writing data to the connection
         transport.getFilterChain().add(new TransportFilter());
         transport.getFilterChain().add(new EchoFilter());

         try {
             // bind the transport to start listening on a certain host and port
             transport.bind(HOST, PORT);

             // start the transport
             transport.start();

             System.out.println("Press any key to stop the server...");
             System.in.read();
         } finally {
             // stop the transport
             transport.stop();

             // release TransportManager resources like the ThreadPool
             TransportManager.instance().close();
         }
     }

Here we just initialize the TCP transport, add 2 filters
(TransportFilter and EchoFilter), bind the server to a specific
host and port and... that's it :)
It's similar to what we have in 1.x, except maybe for the new term
TransportFilter.
TransportFilter is very similar to ReadFilter from Grizzly 1.x: it
knows how to read/write the data depending on the transport. Each
transport can implement its own read/write logic, in which case
TransportFilter will silently forward control to the specific
implementation. In our case TCPNIOTransport has its own read/write
logic implementation, which is hidden from us, and TransportFilter
uses it in order to read/write the bytes on the filter chain.
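To make the delegation idea concrete, here is a tiny self-contained sketch. Note this is not the actual Grizzly 2.0 API; all class and method names below (FakeTcpTransport, TransportFilterSketch, handleRead on a String) are invented for illustration. The point is only the shape: the first filter in the chain does no I/O itself, it forwards to a transport-specific implementation, and the following filters then process the message.

```java
import java.util.ArrayList;
import java.util.List;

public class ChainDemo {
    // Hypothetical, simplified filter interface (not Grizzly's Filter)
    interface DemoFilter {
        String handleRead(String message);
    }

    // Stands in for the transport's hidden read implementation
    static class FakeTcpTransport {
        String readFromWire() {
            return "hello";
        }
    }

    // Plays the role of TransportFilter: it does not read by itself,
    // it silently forwards control to the transport-specific logic
    static class TransportFilterSketch implements DemoFilter {
        private final FakeTcpTransport transport;
        TransportFilterSketch(FakeTcpTransport transport) {
            this.transport = transport;
        }
        public String handleRead(String ignored) {
            return transport.readFromWire();
        }
    }

    // A downstream filter that processes what the transport read
    static class UpperCaseFilter implements DemoFilter {
        public String handleRead(String message) {
            return message.toUpperCase();
        }
    }

    public static void main(String[] args) {
        List<DemoFilter> chain = new ArrayList<>();
        chain.add(new TransportFilterSketch(new FakeTcpTransport()));
        chain.add(new UpperCaseFilter());

        // Each filter receives the result of the previous one
        String message = null;
        for (DemoFilter f : chain) {
            message = f.handleRead(message);
        }
        System.out.println(message); // prints "HELLO"
    }
}
```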

2) What does the EchoFilter look like?

     /**
      * Handle the read operation, when some message has come and is
      * ready to be processed.
      *
      * @param ctx Context of {@link FilterChainContext} processing
      * @param nextAction default {@link NextAction} the filter chain will
      *        execute after processing this {@link Filter}. Could be
      *        modified.
      * @return the next action
      * @throws java.io.IOException
      */
     @Override
     public NextAction handleRead(FilterChainContext ctx, NextAction nextAction)
             throws IOException {
         // Get the read message
         Object message = ctx.getMessage();

         /* Send the same message on the connection. The filter chain write
          * will pass each filter on the filter chain (interceptWrite method)
          * before sending the message on the wire. It means each filter can
          * modify the message before it is sent to the recipient.
          */
         ctx.getFilterChain().write(ctx.getConnection(), message);

         return nextAction;
     }

The common interface for all filters is Filter, which is very similar
to the ProtocolFilter from Grizzly 1.x, but, IMHO, it's more suitable
to extend the FilterAdapter class, which lets you separate the logic.
For example, in the case of EchoFilter we just want to handle the read
operation and are not interested in the others.
You can note how we write the response using the filter chain, not the
connection directly. It means that all the Filters in the chain can
intercept the write operation and transform the written buffer before
TransportFilter puts it on the wire. In our case it looks redundant,
because we don't have any Filter which could be interested in
transforming the response, and
"ctx.getConnection().write(message);" could be used instead. But as a
general solution filterchain.write is better.
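The interception idea can be sketched in plain Java as well. Again, this is not the Grizzly API; the WriteInterceptor interface, the interceptWrite signature, and the NewlineAppendingFilter are all invented here to show why writing through the chain is more flexible than writing to the connection directly: any filter gets a chance to transform the outgoing buffer before it reaches the wire.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class WriteChainDemo {
    // Hypothetical interceptor shape (invented for this sketch)
    interface WriteInterceptor {
        ByteBuffer interceptWrite(ByteBuffer buffer);
    }

    // Example interceptor: appends a newline so a line-oriented
    // peer could frame the message
    static class NewlineAppendingFilter implements WriteInterceptor {
        public ByteBuffer interceptWrite(ByteBuffer buffer) {
            ByteBuffer framed = ByteBuffer.allocate(buffer.remaining() + 1);
            framed.put(buffer).put((byte) '\n');
            framed.flip();
            return framed;
        }
    }

    // A chain write passes the buffer through every interceptor
    // before the transport would put it on the wire
    static ByteBuffer chainWrite(List<WriteInterceptor> chain, ByteBuffer msg) {
        for (WriteInterceptor f : chain) {
            msg = f.interceptWrite(msg);
        }
        return msg; // what actually goes to the socket
    }

    public static void main(String[] args) {
        ByteBuffer msg =
                ByteBuffer.wrap("Echo test".getBytes(StandardCharsets.US_ASCII));
        ByteBuffer wire = chainWrite(List.of(new NewlineAppendingFilter()), msg);
        byte[] out = new byte[wire.remaining()];
        wire.get(out);
        System.out.println(new String(out, StandardCharsets.US_ASCII).trim());
        // prints "Echo test" (with a trailing '\n' on the wire)
    }
}
```

Writing to the connection directly would bypass every interceptor, which is exactly why filterchain.write is the better general solution.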


3) Client

         Connection connection = null;

         // Create the TCP transport
         TCPNIOTransport transport = TransportManager.instance().
                 createTCPTransport();

         try {
             // start the transport
             transport.start();

             // perform an async. connect to the server
             ConnectFuture future = transport.connectAsync(EchoServer.HOST,
                     EchoServer.PORT);
             // wait for the connect operation to complete
             connection = (TCPNIOConnection) future.get(10, TimeUnit.SECONDS);

             assert connection != null;

             // create the message
             ByteBuffer message = ByteBuffer.wrap("Echo test".getBytes());
             // sync. write the complete message, using
             // temporary selectors if required
             WriteResult result = connection.write(message);

             assert result.getWrittenSize() == message.capacity();

             // allocate the buffer for receiving bytes
             ByteBuffer receiverBuffer = ByteBuffer.allocate(message.capacity());

             ReadResult readResult = null;
             // read the same number of bytes as we sent before
             while (receiverBuffer.hasRemaining() &&
                     (readResult == null || readResult.getReadSize() > 0)) {
                 readResult = connection.read(receiverBuffer);
             }

             // check the result
             assert message.flip().equals(receiverBuffer.flip());
         } finally {
             // close the client connection
             if (connection != null) {
                 connection.close();
             }

             // stop the transport
             transport.stop();
             // release TransportManager resources like the ThreadPool etc.
             TransportManager.instance().close();
         }

It does similar things to the server to initialize/release the
transport and the TransportManager.
You can also see how the client connects to the server and writes and
reads data.
I want to note that currently there are no async read and write
implementations in Grizzly 2.0 (they will be ready soon), so all reads
and writes work synchronously, using temporary selectors if required.
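Two details of the client are worth calling out: the read loop keeps filling receiverBuffer until it has no space left, and the final check relies on ByteBuffer.flip() plus equals(), which compares only the remaining bytes (position to limit). Here is a JDK-only demo of that pattern, with the network read simulated by draining one buffer into the other:

```java
import java.nio.ByteBuffer;

public class FlipEqualsDemo {
    public static void main(String[] args) {
        ByteBuffer sent = ByteBuffer.wrap("Echo test".getBytes());
        ByteBuffer received = ByteBuffer.allocate(sent.capacity());

        // Simulate the read loop: keep filling the receiver buffer
        // until it has no remaining space (here, byte by byte)
        while (received.hasRemaining() && sent.hasRemaining()) {
            received.put(sent.get());
        }

        // flip() both buffers: limit -> current position, position -> 0,
        // switching them from "just filled" to "ready to read"
        sent.flip();
        received.flip();

        // ByteBuffer.equals() compares only the remaining bytes,
        // so without the flips this comparison would be meaningless
        System.out.println(sent.equals(received)); // prints "true"
    }
}
```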

Will appreciate your feedback.

Thanks.

WBR,
Alexey.