users@grizzly.java.net

Re: [Grizzly2] Code problem

From: draft <ridershome_at_gmail.com>
Date: Tue, 17 Mar 2009 07:44:48 -0700 (PDT)

Hi,

Works for me now! Great!



Oleksiy Stashok wrote:
>
> Hello guys,
>
> thank you for your feedback!!!
> I've just committed the latest fixes and hope you won't see those problems anymore.
>
> Please let me know if you still see anything.
>
> WBR,
> Alexey.
>
> On Mar 17, 2009, at 13:24, draft wrote:
>
>>
>> Hi,
>>
>> I'm seeing a problem similar to 2). I'm using Grizzly 2
>> (grizzly-framework-2.0.0-20090316.092908-660.jar) on Linux with Java
>> 1.6.0_10. After the connection is closed, the CPU stays at 100%. The
>> stack trace where it loops endlessly looks as follows:
>>
>> Thread [Grizzly-WorkerThread(1) SelectorRunner] (Suspended)
>> Unsafe.unpark(Object) line: not available [native method]
>> LockSupport.unpark(Thread) line: 124
>> ReentrantLock$NonfairSync(AbstractQueuedSynchronizer).unparkSuccessor(AbstractQueuedSynchronizer$Node) line: 626
>> ReentrantLock$NonfairSync(AbstractQueuedSynchronizer).release(int) line: 1178
>> ReentrantLock.unlock() line: 431
>> LinkedBlockingQueue<E>.signalNotEmpty() line: 107
>> LinkedBlockingQueue<E>.offer(E) line: 344
>> DefaultThreadPool(ThreadPoolExecutor).execute(Runnable) line: 653
>> DefaultThreadPool(AbstractExecutorService).submit(Runnable) line: 78
>> WorkerThreadExecutor.execute(Runnable) line: 62
>> TCPNIOTransport.executeProcessor(IOEvent, Connection, Processor, ProcessorExecutor, PostProcessor) line: 699
>> TCPNIOTransport.executeDefaultReadWriteProcessor(IOEvent, TCPNIOConnection) line: 751
>> TCPNIOTransport.processReadIoEvent(IOEvent, TCPNIOConnection) line: 721
>> TCPNIOTransport.fireIOEvent(IOEvent, Connection) line: 651
>> SelectorRunner.doSelect() line: 211
>> SelectorRunner.run() line: 152
>> ThreadPoolExecutor$Worker.runTask(Runnable) line: 886
>> ThreadPoolExecutor$Worker.run() line: 908
>> DefaultWorkerThread(Thread).run() line: 619
>>
>>
>> Regards,
>>
>> Thomas
>>
>>
>> Mikołaj Grajek wrote:
>>>
>>> 2.
>>> The second problem occurs when there is an abnormal client connection
>>> termination while the server is using TCPNIOTransportFilter.
>>> After the close, a READ event is fired on the server, which leads to the
>>> following code being executed:
>>>
>>> Near line 100:
>>> TCPNIOTransportFilter.handleRead()
>>>
>>> buffer.clear();
>>> connection.readNow0(buffer, null);
>>>
>>> if (buffer.position() > 0) {
>>>     buffer.flip();
>>>     ctx.setMessage(buffer);
>>> } else {
>>>     buffer.position(buffer.limit());
>>>     connection.close();
>>>     return new StopAction();
>>> }
>>>
>>> readNow0() throws an IOException, which bypasses connection.close().
>>> Then another GrizzlyWorker tries to read from this closed
>>> connection, and the story repeats itself; this causes 100% CPU
>>> usage.
>>>
>>> Right now I have to add:
>>> try {
>>>     connection.readNow0(buffer, null);
>>> } catch (IOException e) {
>>>     buffer.position(buffer.limit());
>>>     connection.close();
>>>     return new StopAction();
>>> }
>>>
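
For reference, here is how the block near line 100 reads with that workaround folded in. This is only a sketch pieced together from the two snippets quoted above; the surrounding handleRead() signature and the buffer/connection setup are not shown there, so they are left out here as well.

    buffer.clear();
    try {
        connection.readNow0(buffer, null);
    } catch (IOException e) {
        // readNow0() failed, most likely because the peer dropped the
        // connection: close it here and stop the filter chain instead of
        // letting the READ event fire over and over.
        buffer.position(buffer.limit());
        connection.close();
        return new StopAction();
    }

    if (buffer.position() > 0) {
        buffer.flip();
        ctx.setMessage(buffer);
    } else {
        // Nothing was read: treat it as a closed connection.
        buffer.position(buffer.limit());
        connection.close();
        return new StopAction();
    }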

-- 
View this message in context: http://www.nabble.com/-Grizzly2-Code-problem-tp22521861p22560513.html
Sent from the Grizzly - Users mailing list archive at Nabble.com.