Hi
I am running a production system and cannot wait another 3 months until
UR1 is released in Jan 08. Are there no bug-fix releases between v2 and
UR1, or a fix for this?
When did this bug first appear? Was it in b45?
I think my options are:
1) Downgrade to v1 UR1 (which I really don't want to do, as all the
testing has been with v2)
2) Use a build after v2 that fixes this bug (I think it was v2 UR
build 5?). How stable is this?
3) Use another server (which I also don't want to do)
Glen
________________________________
From: Mark.Basler_at_Sun.COM [mailto:Mark.Basler_at_Sun.COM]
Sent: 24 October 2007 16:50
To: users_at_glassfish.dev.java.net
Subject: Re: OutOfMemoryError and performance issues
Hi Glen,
You can find Update 1 specifics at
http://wiki.glassfish.java.net/Wiki.jsp?page=PlanForGlassFishV2UR
Hope this helps - Thanks - Mark
Drinkwater, GJ (Glen) wrote:
Hi
Thanks, worked a treat. What's the timescale for V2 UR1?
Glen
-----Original Message-----
From: Jeanfrancois.Arcand_at_Sun.COM
[mailto:Jeanfrancois.Arcand_at_Sun.COM]
Sent: 24 October 2007 16:03
To: users_at_glassfish.dev.java.net
Subject: Re: OutOfMemoryError and performance issues
Hi,
you are facing:
https://glassfish.dev.java.net/issues/show_bug.cgi?id=3683
Can you update to the current nightly build?
Thanks
-- Jeanfrancois
Drinkwater, GJ (Glen) wrote:
Hi
I am using GlassFish v2 b58g with a servlet to download large files,
but I keep getting an out-of-memory error.

I execute concurrent requests (10 to 100) against this servlet, which
reads in a large file (20 to 50 MB) and streams it to the output. The
server keeps taking up memory, up to the 550 MB I give it, in byte[]
arrays, which profiling links to
com.sun.enterprise.web.connector.grizzly.ssl.SSLWorkerThread.run().
Also, the more I increase the Request Processing thread count (e.g.
from 5 to 10, 20, etc.), the quicker it runs out of memory. All of the
Eden, Survivor and Tenured heap space is taken up and never reclaimed,
even after the requests have stopped. I noticed the memory go up from
around 100 MB to 247 MB when I requested a download of a 60 MB file.

When I try the same code on Tomcat 6.0.14, giving it a maximum of 50
threads in the HTTP thread pool, the memory footprint never exceeds
7 MB. Also, the performance compared to Tomcat is very poor, even if
I try out the tips in
http://weblogs.java.net/blog/jfarcand/archive/2007/03/configuring_gri_2.html

Can anyone help?
Simple code in the servlet doGet():
    byte[] bbuf = new byte[1024];
    int length;
    FileInputStream filein = new FileInputStream(file);
    DataInputStream in = new DataInputStream(filein);
    while ((length = in.read(bbuf)) != -1) {
        op.write(bbuf, 0, length);
    }
    filein.close();
    in.close();
    op.flush();
    op.close();
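For reference, here is a more defensive sketch of the same streaming
loop, using try/finally so the file handle is released even if the
client aborts the download. This is only a sketch: it assumes op is
the response's OutputStream and file is the java.io.File being served,
and it does not address the Grizzly-side buffering described in issue
3683.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    // Sketch only: stream a file to the servlet response, always closing
    // the input stream. "file" and "op" correspond to the variables above.
    void streamFile(File file, OutputStream op) throws IOException {
        InputStream in = new FileInputStream(file);
        try {
            byte[] bbuf = new byte[8192];  // larger buffer means fewer writes
            int length;
            while ((length = in.read(bbuf)) != -1) {
                op.write(bbuf, 0, length);
            }
            op.flush();
        } finally {
            in.close();  // release the file descriptor even on error
        }
    }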
Thanks Glen