users@glassfish.java.net

Downloading large files produces OutOfMemoryError

From: <glassfish_at_javadesktop.org>
Date: Tue, 06 Nov 2007 11:57:55 PST

Hello,

to test the bandwidth of my server I downloaded some large files from it. I can reliably reproduce that with a file of about 100 MB GlassFish runs out of heap space. This happens with GlassFish on Linux in the out-of-the-box configuration.

This is the stack trace in server.log:

[#|2007-11-05T21:14:32.546+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=32;_ThreadName=httpSSLWorkerThread-8080-1;_RequestID=eb54a213-42ff-4431-8a46-195fa2aeb8e4;|java.lang.OutOfMemoryError: Java heap space
        at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
        at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
        at com.sun.enterprise.web.connector.grizzly.SocketChannelOutputBuffer.realWriteBytes(SocketChannelOutputBuffer.java:130)
        at org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:851)
        at org.apache.coyote.http11.filters.IdentityOutputFilter.doWrite(IdentityOutputFilter.java:141)
        at org.apache.coyote.http11.InternalOutputBuffer.doWrite(InternalOutputBuffer.java:626)
        at org.apache.coyote.Response.doWrite(Response.java:599)
        at org.apache.coyote.tomcat5.OutputBuffer.realWriteBytes(OutputBuffer.java:404)
        at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:417)
        at org.apache.coyote.tomcat5.OutputBuffer.doFlush(OutputBuffer.java:357)
        at org.apache.coyote.tomcat5.OutputBuffer.close(OutputBuffer.java:320)
        at org.apache.coyote.tomcat5.CoyoteResponse.finishResponse(CoyoteResponse.java:577)
        at org.apache.coyote.tomcat5.CoyoteAdapter.afterService(CoyoteAdapter.java:316)
        at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.postResponse(DefaultProcessorTask.java:582)
        at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.doProcess(DefaultProcessorTask.java:569)
        at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.process(DefaultProcessorTask.java:813)
        at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.executeProcessorTask(DefaultReadTask.java:339)
        at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:261)
        at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:212)
        at com.sun.enterprise.web.portunif.PortUnificationPipeline$PUTask.doTask(PortUnificationPipeline.java:361)
        at com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:265)
        at com.sun.enterprise.web.connector.grizzly.ssl.SSLWorkerThread.run(SSLWorkerThread.java:106)
|#]

This is not a real problem for me at the moment, as I don't serve files that large, but I still find it strange that the file size matters at all.
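For reference, a common way to keep heap usage flat when serving large files is to stream the content in fixed-size chunks and flush periodically, so the container never has to buffer the whole response in memory. Below is a minimal sketch of that pattern; the class name and method are illustrative, and plain byte-array streams stand in for the servlet request/response wiring (in a real servlet you would read from a FileInputStream and write to response.getOutputStream()):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamDownload {

    // Copy the input to the output in fixed-size chunks, flushing after
    // each chunk so bytes are pushed to the client instead of accumulating
    // in the response buffer. Heap usage stays bounded by the chunk size,
    // independent of the total file size.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            out.flush();
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // 1 MB of zero bytes stands in for the file content.
        byte[] data = new byte[1 << 20];
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), sink);
        System.out.println(copied); // 1048576
    }
}
```

Whether the container honors the flush end-to-end depends on its output buffer settings, but with this pattern the application side at least never holds more than one chunk at a time.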

Regards
Stephan
[Message sent by forum member 'smuehlst' (smuehlst)]

http://forums.java.net/jive/thread.jspa?messageID=244143