users@grizzly.java.net

fragmented data. possible solution?

From: Carlos Alexandre Queiroz <caxqueiroz_at_gmail.com>
Date: Sat, 29 Mar 2008 11:46:25 +1100

Hi there,

Please be nice to me, as this is my first post on this list and I am
not an expert on Grizzly :-).

I wrote a very simple client/server app based on the examples posted
on the Grizzly website. The app works fine as long as the client does
not send too many messages to the server. By "too many" I mean: I run
5 to 10 threads on the client side, each sending messages in an
infinite loop without pausing; the only thing at the end of the loop
is a call to Thread.yield(). With this approach I start to get errors
on the server, because messages do not arrive in one piece. I am
sending a serialized Java object wrapped in a ByteBuffer, and the
error occurs when I try to convert the received bytes back into the
Java object. However, if I replace the yield() with a Thread.sleep()
of around 100 or 200 ms or more, the messages arrive on the server
side with no errors. This behaviour led me to think that the server
cannot keep up with that many messages, even though I've set the
pipeline's max threads to 20, 30, etc., which is higher than the
number of client threads.
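
To make the loop concrete, each client thread does roughly the
following (a simplified sketch, not my exact code; buildNextMessage()
stands in for my real message construction, connectorHandler is the
handler shown at the bottom, and exception handling is omitted):

 while (true)
 {
     // build the next message to send (stand-in for my real code)
     SDiACMessage message = buildNextMessage();

     // serialize the object and wrap the bytes in a ByteBuffer
     ByteArrayOutputStream baos = new ByteArrayOutputStream();
     ObjectOutputStream oos = new ObjectOutputStream(baos);
     oos.writeObject(message);
     oos.flush();
     byte[] bytes = baos.toByteArray();
     ByteBuffer buf = ByteBuffer.allocate(bytes.length);
     buf.put(bytes);

     // then flipped and written with the connector handler,
     // exactly as in the client snippet at the bottom
     buf.flip();
     connectorHandler.write(buf, true);

     Thread.yield();         // with yield() the server gets deserialization errors
     // Thread.sleep(100);   // with a sleep here instead, everything arrives fine
 }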

Both apps (client and server) run on the same machine, but I also did
some tests with them on separate machines and saw the same behaviour.
I've posted the code below; please let me know what should be changed
to fix this.

Another question:
My client app opens the connection at startup and then starts sending
messages. However, I've noticed that after some time the connection is
closed automatically, presumably by some timeout, so I have to open it
again.
Is there a way to change this timeout?


thanks in advance,


Below are some snippets of my code:

Server side, defining the controller, the pipeline and the filters:

controller = new Controller();
tcpHandler = new TCPSelectorHandler();
tcpHandler.setPort(localDetector.getPort());

// worker thread pool used to run the protocol chains
Pipeline pipeline = new DefaultPipeline();
pipeline.setMaxThreads(threads * 10);
pipeline.setMinThreads(5);
controller.setPipeline(pipeline);

tcpHandler.setProtocolChainInstanceHandler(new DefaultProtocolChainInstanceHandler()
{
    final ProtocolChain protocolChain = new DefaultProtocolChain();
    private boolean filtersAdded = false;

    public ProtocolChain poll()
    {
        // the same chain instance is reused, so add the filters only once
        if (!filtersAdded)
        {
            protocolChain.addFilter(new ReadFilter());
            protocolChain.addFilter(new MessageFilter());
            filtersAdded = true;
        }
        return protocolChain;
    }

    public boolean offer(ProtocolChain instance)
    {
        return true;
    }
});
controller.addSelectorHandler(tcpHandler);
controller.start();


MessageFilter code, which handles the message:

 // the ReadFilter has already read the incoming bytes into the
 // worker thread's ByteBuffer
 final WorkerThread workerThread = (WorkerThread) Thread.currentThread();
 ByteBuffer buffer = workerThread.getByteBuffer();

 // switch the buffer to read mode and copy out everything that arrived
 buffer.flip();
 byte[] data = new byte[buffer.remaining()];
 int position = buffer.position();
 buffer.get(data);
 buffer.position(position);

 // deserialize the bytes back into the message object
 ByteArrayInputStream bais = new ByteArrayInputStream(data);
 ObjectInputStream ois = new ObjectInputStream(bais);
 final SDiACMessage message = (SDiACMessage) ois.readObject();
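
For context, that snippet sits inside the filter's execute(Context)
method; the class is roughly shaped like this (simplified, the real
class also has error handling):

 public class MessageFilter implements ProtocolFilter
 {
     public boolean execute(Context ctx) throws IOException
     {
         // ... the snippet above: read the worker thread's buffer and
         // deserialize the SDiACMessage ...
         return true;
     }

     public boolean postExecute(Context ctx) throws IOException
     {
         return true;
     }
 }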


Client side, sending the message:

 // the buffer already contains the serialized message; switch it to read mode
 buf.flip();

 // if the connection was dropped in the meantime, open a new one
 if (!executor.connectorHandler.isConnected())
 {
     executor.connectorHandler.close();
     executor.connectorHandler = (TCPConnectorHandler) cController
             .acquireConnectorHandler(Controller.Protocol.TCP);
     executor.connectorHandler.connect(
             new InetSocketAddress(executor.hostname, executor.port));
 }

 // blocking write of the whole buffer
 long size = executor.connectorHandler.write(buf, true);

-- 
thanks,
Carlos Alexandre Queiroz