Hi Alexey,
Thanks for the quick reply and your suggestion on the AsyncWriteQueue
- I will definitely need that.
I think there may still be an issue here though:
The client is indeed sending data to the tunnel server at a high rate
- around 700Mbit/s. The iperf consumer is only receiving data at about
150Mbit/s from the Grizzly tunnel before the OOMError. As you say,
this demonstrates that there is a clear bottleneck somewhere between
the tunnel server and the iperf consumer.
In your reply you mention that the "consumers <snip>
are not able to process data at that rate". However, if I replace the
Grizzly tunnel with a simple sockets-based tunnel (with a couple of
threads for repeating data in each direction) I am able to comfortably
hit 700Mbit/s between the tunnel server and the iperf consumer. This
suggests to me that the bottleneck is the outbound Grizzly connection,
and not the consumer.
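For reference, the sockets-based tunnel I tested with is roughly the following sketch (host names, ports and buffer size here are placeholders, not the exact code I ran):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal blocking tunnel: accepts a client, connects to the target,
// and pumps bytes in both directions with one thread per direction.
public class SimpleSocketTunnel {
    public static void main(String[] args) throws IOException {
        final int listenPort = 5001;                      // iperf's default port
        final String targetHost = "consumer.example.com"; // placeholder
        final int targetPort = 5001;

        try (ServerSocket server = new ServerSocket(listenPort)) {
            while (true) {
                Socket client = server.accept();
                Socket target = new Socket(targetHost, targetPort);
                pump(client, target); // client -> consumer
                pump(target, client); // consumer -> client
            }
        }
    }

    // Start a daemon thread copying bytes from 'from' to 'to' until EOF.
    static void pump(final Socket from, final Socket to) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                byte[] buf = new byte[64 * 1024];
                try {
                    InputStream in = from.getInputStream();
                    OutputStream out = to.getOutputStream();
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        // Blocking write: the sender is naturally throttled,
                        // so no unbounded queue can build up.
                        out.write(buf, 0, n);
                    }
                } catch (IOException ignored) {
                } finally {
                    try { from.close(); } catch (IOException ignored) {}
                    try { to.close(); } catch (IOException ignored) {}
                }
            }
        });
        t.setDaemon(true);
        t.start();
    }
}
```

The key difference from the async write queue approach is that the blocking write applies back-pressure to the sender, which is presumably why this version never runs out of memory.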
Thanks,
Sam
On 28 November 2011 22:37, Oleksiy Stashok <oleksiy.stashok_at_oracle.com> wrote:
> Hi Sam,
>
> On 11/28/2011 10:24 PM, Sam Crawford wrote:
>>
>> Hello,
>>
>> I'm attempting to run a basic throughput benchmark of the TunnelServer
>> sample
>> (http://java.net/projects/grizzly/sources/svn/content/branches/2dot0/code/samples/framework-samples/src/main/java/org/glassfish/grizzly/samples/tunnel/TunnelServer.java)
>> and I'm running into an OutOfMemoryError after ~5 seconds.
>>
>> I'm running the TunnelServer sample (unmodified, apart from the
>> host/port) in Eclipse, with JVM parameters -Xmx1024m
>> -XX:MaxPermSize=256m. I'm using Grizzly 2.1.7
>>
>> The test is being conducted with iperf across a gigabit network.
>> Commands used are below:
>>
>> Server: iperf -s -i 1
>> Client: iperf -t 90 -i 1 -c grizzlytunnel.example.com
>>
>> Any suggestions would be appreciated.
>
> Most probably you're sending lots of data to the Tunnel server very fast.
> The consumers the server forwards data to are not able to process data at
> that rate, so the asynchronous write queue on the TunnelServer connections
> grows constantly and eventually eats all the available memory.
>
> The easy way to fix this is to limit the max asynchronous write queue size of
> the TunnelServer <-> Consumer connections (by default there is no limit), like:
>
> final AsyncQueueWriter<SocketAddress> asyncQueueWriter =
> transport.getAsyncQueueIO().getWriter();
> asyncQueueWriter.setMaxPendingBytesPerConnection(queueLimit);
>
> With the limit set, Grizzly will always check the asynchronous write queue
> size, and if it gets overloaded, connection.write(...) will throw an
> Exception.
> So you'll need to decide what to do with the connections which are not able
> to operate at the required rate: close them, or store the received data to a
> file and forward it once the async write queue is able to accept more data.
>
> You can register an async write queue monitor to get updates on async write
> queue size changes, something like [1] (probably we can make this API
> clearer).
>
>
> Pls. let us know if it helped.
>
> Thanks.
>
> WBR,
> Alexey.
>
> [1]
> final int bytesWeWantToWrite = <NN>;
> final int maxAsyncWriteQueueSize =
>         asyncQueueWriter.getMaxPendingBytesPerConnection();
>
> final TaskQueue taskQueue = ((NIOConnection) c).getAsyncWriteQueue();
>
> monitor = new TaskQueue.QueueMonitor() {
>
>     @Override
>     public boolean shouldNotify() {
>         // Async write queue size has changed; check if there's
>         // enough free space for us
>         return (maxAsyncWriteQueueSize - taskQueue.spaceInBytes())
>                 >= bytesWeWantToWrite;
>     }
>
>     @Override
>     public void onNotify() throws IOException {
>         // Async write queue is ready to accept bytesWeWantToWrite
>         ................ WRITE HERE .............
>     }
> };
>
> taskQueue.setQueueMonitor(monitor);
>
>
>>
>> Thanks,
>>
>> Sam
>
>