Hi Alexey,
My query:
>> What I was wondering is how difficult it would be to capture more
>> information, such as the IP address of the sender, the amount of data
>> that was read, and perhaps a hex dump of the first 100 bytes of data
>> in the log?
Your response:
>Not exactly, but we can set logger level for "grizzly" to
>FINE/FINER/FINEST to get more details.
>And set -Dcom.sun.grizzly.enableSnoop=true
Yes, we could set enableSnoop, but I was afraid that might be too much data
to log across the board. Plus, the request has to get beyond
ProcessorTask.parseRequest() [inputBuffer.parseRequestLine()] before that
would be effective, and we might have already gotten the exception before
then. What I was really looking for is this: when
ProcessorTask.parseRequest() [inputBuffer.parseRequestLine()] throws an
exception, catch it in the parseRequest() method and dump the data at that
time (as a SEVERE error, since this is an anomalous condition).
Ideally we could also get the sender's IP address (perhaps available via the
inputBuffer.request object) and hex dump the data at that point.
Does that make sense? This should help us pinpoint what kind of data is
being sent that causes the exception, and from which IP address.
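To make the idea concrete, here is a rough sketch of the kind of dump I have
in mind. I don't know the exact Grizzly internals at that point in the code,
so the remoteAddress and buffer parameters below are just placeholders for
whatever parseRequest() actually has in scope, not real Grizzly API:

    import java.net.SocketAddress;
    import java.nio.ByteBuffer;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class RequestParseDumper {

        private static final Logger LOGGER = Logger.getLogger("grizzly");

        /**
         * Logs the sender and a hex dump of the first 100 bytes of the
         * received data at SEVERE level. Assumes the buffer's
         * position..limit range covers the bytes read so far.
         */
        public static void dumpOnParseFailure(SocketAddress remoteAddress,
                                              ByteBuffer received,
                                              Exception cause) {
            // Read-only view so we don't disturb the parser's indices.
            ByteBuffer view = received.asReadOnlyBuffer();
            int available = view.remaining();
            int len = Math.min(available, 100);   // first 100 bytes only

            StringBuilder hex = new StringBuilder(len * 3);
            for (int i = 0; i < len; i++) {
                hex.append(String.format("%02x ",
                        view.get(view.position() + i) & 0xff));
            }

            LOGGER.log(Level.SEVERE,
                    "Request parse failed; sender=" + remoteAddress
                            + ", bytesAvailable=" + available
                            + ", first " + len + " bytes: " + hex,
                    cause);
        }
    }

The catch block in parseRequest() would just call dumpOnParseFailure(...)
with the remote address and buffer it has on hand, then rethrow or handle
the exception as it does today.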
My statement:
>> Our contacts at HP looked at some system info while the spike was
>> occurring and said that one of the Grizzly worker threads was going
>> "out of control" on socket reads, reading a massive amount of data, and
>> subsequently chewing up CPU and thrashing in GC. I've looked at the code
>> and if I understand it correctly Grizzly reads all of the data on the
>> socket, stashes that in the input buffer (byte buffer), then starts to
>> parse the input.
Your response:
>Right, but the data amount shouldn't be so "massive"... normally up-to 8K.
Agreed. I think this is the case we are trying to identify. Why is so much
data being sent? And from where?
Also, just FYI, we set the grizzly logger to FINE and ran a test in our QA
environment. We found a bunch of
java.io.IOException: Connection reset by peer (errno:232)
exceptions getting logged from the channel.read(byteBuffer) call in the
ReadFilter.execute() method. I suspect this is happening because the client
code is calling the disconnect() method of the HttpURLConnection object.
I was wondering if this is bad coding...the JavaDoc on that object says to
use disconnect() if you do not expect to make any more requests in the near
future. I guess that is true in the sense that we are done with this
transmission, but perhaps we should not make this call. Do you have any
input on this? Should we not call disconnect()? Most of the code examples I
have found do not make this call. I must admit I'm not very familiar with
sending HTTP requests from Java; mostly I consume them in Java. I was
concerned about leaving open connections to the HP box from our Glassfish
application servers, but perhaps I was being overzealous.
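Most of the examples I've found look roughly like the sketch below: write
the request, read the response to the end, and close the streams so the
JVM's keep-alive cache can reuse the underlying connection, without ever
calling disconnect(). The URL and payload here are made up for illustration:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ClientPost {

        public static void main(String[] args) throws Exception {
            byte[] payload = "example request body".getBytes("UTF-8");

            // Hypothetical endpoint, just for illustration.
            URL url = new URL("http://hp-box.example.com:8080/service");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setFixedLengthStreamingMode(payload.length);

            OutputStream out = conn.getOutputStream();
            try {
                out.write(payload);
            } finally {
                out.close();
            }

            int status = conn.getResponseCode();

            // Drain and close the response stream. This lets the JVM return
            // the socket to its keep-alive cache for reuse, whereas
            // conn.disconnect() may close the underlying connection instead.
            InputStream in = (status >= 400) ? conn.getErrorStream()
                                             : conn.getInputStream();
            if (in != null) {
                try {
                    byte[] buf = new byte[8192];
                    while (in.read(buf) != -1) {
                        // discard; a real client would process the body here
                    }
                } finally {
                    in.close();
                }
            }

            System.out.println("HTTP status: " + status);
        }
    }

If that is the generally recommended pattern, we can simply drop the
disconnect() call from our client code.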
Thanks again Alexey...Just FYI, Grizzly has allowed us to eliminate a point
of failure, move critical processing from an external server to the main
server, simplify our deployment and our client's configuration, and load
balance the overall application. Overall, our application has become more
stable, robust and scalable, all thanks to Grizzly. I cannot overstate the
benefits of it.
--Dennis