Hi Alexey,

Two weeks or so is wonderful. What we are looking for is a way to determine whether what was sent in the request was valid HTTP traffic (for us) or not. What appears to be happening is that data is hitting Grizzly but is not getting forwarded to our application URI, probably because it is invalid HTTP data. Let me know what you think of the following...
First, change Request.toString() to include the remote IP address (remoteAddr). Then whenever a request is logged, the remoteAddr info will appear as well. (Be sure to set remoteAddr as early as possible so it is available when needed.)
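Something along these lines is what I'm picturing; the getter names below are my guess at the Tomcat-style request API, not necessarily the actual Grizzly fields:

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder(128);
        sb.append("Request[method=").append(method());
        sb.append(", requestURI=").append(requestURI());
        sb.append(", protocol=").append(protocol());
        // New: include the client address so bad traffic can be traced.
        sb.append(", remoteAddr=").append(remoteAddr());
        return sb.append(']').toString();
    }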
Second, in ProcessorTask.parseRequest, if SelectorThread.isEnableNioLogging() is set, log the contents of the input buffer as a hex string at level INFO (in addition to the other data already logged).
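For the hex string, a small helper like this sketch would do; the name and signature are hypothetical, and capping the dump (say at 100 bytes) keeps the log manageable:

    // Hypothetical helper: dump up to maxBytes of the raw input as hex
    // so that binary garbage is still readable in the log.
    private static String hexDump(byte[] buf, int off, int len, int maxBytes) {
        int n = Math.min(len, maxBytes);
        StringBuilder sb = new StringBuilder(n * 3);
        for (int i = off; i < off + n; i++) {
            sb.append(String.format("%02x ", buf[i] & 0xff));
        }
        return sb.toString().trim();
    }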
Third, if ProcessorTask.parseRequest catches an exception and isEnableNioLogging() is set, log the request object at level SEVERE. In addition, log any input buffer data that was read, again as a hex string at level INFO.
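Roughly like the sketch below; buf and bytesRead are placeholders for however parseRequest tracks the raw bytes, and logger stands in for whatever ProcessorTask already logs to:

    try {
        inputBuffer.parseRequestLine();
        inputBuffer.parseHeaders();
    } catch (Exception e) {
        if (SelectorThread.isEnableNioLogging()) {
            // Request.toString() would now include remoteAddr (see above).
            logger.log(Level.SEVERE, "Invalid HTTP request: " + request, e);
            logger.log(Level.INFO,
                       "Input read so far: " + hexDump(buf, 0, bytesRead, 100));
        }
        throw e;
    }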
I'd rather not take the hit of all the other isEnableNioLogging() output, but will do so if necessary. Alternatively, you could add a new flag (isEnableNioInputLogging(), for example) to gate just the input dumps described above, so that none of the other NIO output gets logged unless the isEnableNioLogging() flag is also set.
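The new flag could simply mirror the existing one on SelectorThread; the names here are my suggestion, not existing API:

    // Hypothetical companion to enableNioLogging, gating only the new
    // input-buffer dumps described above.
    private static boolean enableNioInputLogging = false;

    public static boolean isEnableNioInputLogging() {
        return enableNioInputLogging;
    }

    public static void setEnableNioInputLogging(boolean enabled) {
        enableNioInputLogging = enabled;
    }

The checks in the second and third items would then test isEnableNioInputLogging() instead.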
Is this enough information for you to go on? Does it make sense? Can you think of any other places where this might be helpful? Let me know if you need me to clarify anything, or if you need any more details.

Thanks Alexey...
--Dennis
Oleksiy Stashok wrote:
>
> Hi Dennis,
>
>> Yes, the changes noted below in the 1.9.19 version will be great!
>>
>> Do you have a timeline for this?
> Originally we planned to align 1.9.19 with Glassfish 3.1, but we can
> make one extra release, let's say in a week or two, to test the latest
> thread pool changes. So 1-2 weeks.
> Can I ask you to file an issue describing the new logging requirements?
>
> Thank you.
>
> WBR,
> Alexey.
>>
>> We would like to use the first "released" version of 19. Will 1.9.19
>> be available within the next month or so? We don't want to deliver 19
>> to our clients until it is ready for general delivery.
>>
>> Thanks again for all your help!
>> --Dennis
>>
>>
>>
>> Oleksiy Stashok wrote:
>>>
>>> Hi Dennis,
>>>
>>>> My query:
>>>>>> What I was wondering is how difficult it would be to capture more
>>>>>> information, such as the IP address of the sender, the amount of
>>>>>> data that was read, and perhaps a hex dump of the first 100 bytes
>>>>>> of data in the log?
>>>>
>>>> Your response:
>>>>> Not exactly, but we can set the logger level for "grizzly" to
>>>>> FINE/FINER/FINEST to get more details.
>>>>> And set -Dcom.sun.grizzly.enableSnoop=true
>>>>
>>>> Yes, we could set enableSnoop, but I was afraid that might be too
>>>> much data to log across the board. Plus, it has to get beyond
>>>> ProcessorTask.parseRequest() [inputBuffer.parseRequestLine()] before
>>>> that would be effective, and we might have already gotten the
>>>> exception before then. What I was really looking for is this: when
>>>> ProcessorTask.parseRequest() [inputBuffer.parseRequestLine()] throws
>>>> an exception, catch it in the parseRequest() method and dump the
>>>> data at that time (as a SEVERE error, since this is an anomalous
>>>> condition). Ideally we could also get the sender's IP address
>>>> (perhaps available via the inputBuffer.request object) and hex dump
>>>> the data at that point.
>>>>
>>>> Does that make sense? This should help us pinpoint what kind of data
>>>> is being sent that causes the exception, and from which IP address.
>>> Yes, sure. That's why I answered "not exactly :)". We can add support
>>> for those stats and make them available on the Grizzly 1.9.19-SNAPSHOT
>>> branch. Will that work for you?
>>>
>>>
>>>>>> Our contacts at HP looked at some system info while the spike was
>>>>>> occurring and said that one of the Grizzly worker threads was going
>>>>>> "out of control" on socket reads, reading a massive amount of data,
>>>>>> and subsequently chewing up CPU and thrashing in GC. I've looked at
>>>>>> the code, and if I understand it correctly, Grizzly reads all of
>>>>>> the data on the socket, stashes it in the input buffer (byte
>>>>>> buffer), then starts to parse the input.
>>>>
>>>> Your response:
>>>>> Right, but the data amount shouldn't be so "massive"... normally up
>>>>> to 8K.
>>>>
>>>> Agreed. I think this is the case we are trying to identify. Why is
>>>> so much data being sent? And from where?
>>>>
>>>> Also, just FYI, we set the grizzly log level to FINE and ran a test
>>>> in our QA environment. We found a bunch of
>>>> java.io.IOException: Connection reset by peer (errno:232)
>>>> exceptions getting logged from the channel.read(byteBuffer) call in
>>>> the ReadFilter.execute() method. I suspect this is happening because
>>>> the client code is calling the disconnect() method of the
>>>> HttpURLConnection object.
>>>> I was wondering if this is bad coding... the JavaDoc on that object
>>>> says to use disconnect() if you do not expect to send any more data
>>>> in the near future. I guess that is true in the sense that we are
>>>> done with this transmission, but perhaps we should not make this
>>>> call. Do you have any input on this? Should we not call disconnect()?
>>>> Most of the code examples I have found do not make this method call.
>>>> I must admit I'm not very familiar with sending HTTP from Java,
>>>> mostly with consuming it in Java. I was concerned about leaving open
>>>> connections to the HP box from our Glassfish application servers, but
>>>> perhaps I was being overzealous.
>>> This is expected behavior. Your code is absolutely correct.
>>>
>>>
>>>> Thanks again Alexey... Just FYI, Grizzly has allowed us to eliminate
>>>> a point of failure, move critical processing from an external server
>>>> to the main server, simplify our deployment and our clients'
>>>> configuration, and load balance the overall application. Overall, our
>>>> application has become more stable, robust, and scalable, all thanks
>>>> to Grizzly. I cannot overstate its benefits.
>>> Nice to hear that :)
>>>
>>> Thanks.
>>>
>>> WBR,
>>> Alexey.