users@grizzly.java.net

SSL BUFFER_OVERFLOW problem

From: Tom Magowan <tom.magowan_at_googlemail.com>
Date: Tue, 21 Apr 2009 16:44:18 +0100

Hi,

I have created a very simple Grizzly-v1.9.14 SSL web server, containing a
single GrizzlyAdapter that reads all available bytes from the GrizzlyRequest
input stream.

GrizzlyWebServer ws = new GrizzlyWebServer(9998, ".", true);
...
ws.addGrizzlyAdapter(new GrizzlyAdapter() {

            @Override
            public void service(GrizzlyRequest request, GrizzlyResponse response) {
                try {
                    InputStream is = request.getInputStream();
                    byte[] bytes = new byte[8192];
                    long count = 0;
                    int b;
                    // Drain the request body, counting the bytes read.
                    while ((b = is.read(bytes)) != -1) {
                        count += b;
                    }
                } catch (IOException ex) {
                    ex.printStackTrace();
                }
            }
        }, new String[] { "/" });
ws.start();

I created a test file of around 17k and POSTed it (non-chunked) to the Grizzly
server using the "curl" command-line HTTP client.

Most of the time the GrizzlyAdapter reads all of the bytes posted by the
curl client. Occasionally, however, only part of the data is read, and
"is.read(bytes)" in my GrizzlyAdapter blocks for 30 seconds (the read
timeout) before finally returning -1.

I think I have tracked the issue down to how Grizzly handles an
SSLEngine BUFFER_OVERFLOW. If the byte buffer associated with the
WorkerThread is not large enough to hold the decrypted data from the
SSLEngine, SSLUtils:214 allocates a larger one:

case BUFFER_OVERFLOW:
    byteBuffer = reallocate(byteBuffer);
    break;

However, it seems the WorkerThread is never updated with a reference to this
new byteBuffer, so the data in the larger reallocated buffer is 'lost'.
When my GrizzlyAdapter comes to read from the InputStream, it can only read
as much as was in the original WorkerThread byte buffer. The underlying
Grizzly implementation of the InputStream knows (from the value of the HTTP
Content-Length header) that more data should be available, so it creates a
temporary selector to read it... but that data has already been read and
discarded, so is.read(bytes) blocks until the read timeout is triggered.
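To illustrate the reference problem, here is a tiny self-contained demo. Worker is just a stand-in for Grizzly's WorkerThread (the field name and the reallocate() helper are my own simplifications, not the real Grizzly code):

```java
import java.nio.ByteBuffer;

public class LostBufferDemo {

    // Stand-in for Grizzly's WorkerThread, which caches the thread's
    // ByteBuffer (hypothetical field name, for illustration only).
    static class Worker {
        ByteBuffer byteBuffer = ByteBuffer.allocate(8);
    }

    // Grow a buffer, preserving whatever was already written to it --
    // the sort of thing I assume SSLUtils.reallocate() does.
    static ByteBuffer reallocate(ByteBuffer old) {
        ByteBuffer bigger = ByteBuffer.allocate(old.capacity() * 2);
        old.flip();
        bigger.put(old);
        return bigger;
    }

    public static void main(String[] args) {
        Worker worker = new Worker();

        // Local reassignment only, as in the SSLUtils snippet above:
        ByteBuffer byteBuffer = worker.byteBuffer;
        byteBuffer = reallocate(byteBuffer);

        // The worker still holds the old, smaller buffer...
        System.out.println(worker.byteBuffer.capacity()); // prints 8

        // ...until the new reference is propagated back:
        worker.byteBuffer = byteBuffer;
        System.out.println(worker.byteBuffer.capacity()); // prints 16
    }
}
```

Because byteBuffer is just a local copy of the reference, reassigning it leaves worker.byteBuffer pointing at the old 8-byte buffer; the decrypted bytes in the new buffer are unreachable from the worker, which is exactly the behaviour I am seeing.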

This problem is very dependent on network conditions; a BUFFER_OVERFLOW only
occurs occasionally. However, if I put a breakpoint in the
SSLUtils.unwrap() method, the encrypted input buffer generally fills up
enough that a BUFFER_OVERFLOW is triggered.

Is this a bug, or am I simply not configuring Grizzly correctly? Issue
https://grizzly.dev.java.net/issues/show_bug.cgi?id=501 did seem similar,
but that was fixed in 1.9.11, and I am using 1.9.14.

Any help would be much appreciated!
Thanks,
Tom