Hi,
I tested and investigated this again.
Fortunately, the unexpected result seems to be caused by how I collected queue statistics.
In short, LinkedTransferQueue#size() has a lot of performance overhead.
Here is the javadoc for LinkedTransferQueue#size():
/**
* Returns the number of elements in this queue. If this queue
* contains more than {@code Integer.MAX_VALUE} elements, returns
* {@code Integer.MAX_VALUE}.
*
* <p>Beware that, unlike in most collections, this method is
* <em>NOT</em> a constant-time operation. Because of the
* asynchronous nature of these queues, determining the current
* number of elements requires an O(n) traversal.
*
* @return the number of elements in this queue
*/
In my test code, I didn't use PipelineThreadPool#getQueueSize(); I called PipelineThreadPool.getQueue().size() directly.
So I think my test was wrong, and only now do I understand why PipelineThreadPool keeps a separate queueSize. :-)
I am very sorry for not reading LinkedTransferQueue#size()'s javadoc.
But I think LinkedTransferQueue#size() should still be improved, because all 4 of my CPUs stayed at 100% for a long time during the test, as if stuck in an infinite loop.
And I think an ordinary user could easily end up calling it the way I did.
Of course, there is no problem when I don't use LinkedTransferQueue#size() for statistics.
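For reference, this is roughly the pattern I now assume the separate queueSize follows: keep an O(1) counter next to the queue instead of calling size(). This is only a sketch of mine, not the actual PipelineThreadPool code, and the class name is made up:
---
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: maintain an O(1) counter beside the queue instead of
// calling LinkedTransferQueue#size(), which does an O(n) traversal.
public class CountedTaskQueue<E> {

    private final LinkedTransferQueue<E> queue = new LinkedTransferQueue<E>();
    private final AtomicInteger queueSize = new AtomicInteger(0);

    public boolean offer(E task) {
        boolean added = queue.offer(task);   // always true for an unbounded LTQ
        if (added) {
            queueSize.incrementAndGet();
        }
        return added;
    }

    public E poll() {
        E task = queue.poll();
        if (task != null) {
            queueSize.decrementAndGet();
        }
        return task;
    }

    // O(1) statistics, unlike queue.size().
    public int getQueueSize() {
        return queueSize.get();
    }
}
---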
*My test environment and scenario
- Windows Server 2003 32bit
- CPU: Intel Core2 Quad CPU @ 2.40GHz
- Mem: 3.37GB RAM
- JDK: 1.7.0-ea-b56
- I used 150 threads, and they concurrently execute tasks that have a configured runningTime and launchTime.
- When 60000 tasks have been run, the total elapsed time is printed and the test finishes.
- For intermediate statistics, I printed the queue size and elapsed time every 500 tasks.
I attached the test code and results.
I expect that you can reproduce this easily if you adjust the test values for your system's environment.
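Separately from the attached test, the O(n) traversal itself can be observed with plain JDK classes. This is only a standalone sketch showing the cost of a single size() call on a large queue, not the full concurrent scenario above:
---
import java.util.concurrent.LinkedTransferQueue;

// Standalone sketch: measure a single size() call on a large LinkedTransferQueue.
public class SizeCostDemo {
    public static void main(String[] args) {
        LinkedTransferQueue<Integer> queue = new LinkedTransferQueue<Integer>();
        for (int i = 0; i < 1000000; i++) {
            queue.offer(i);
        }

        long start = System.nanoTime();
        int size = queue.size();   // O(n) traversal over all queued elements
        long elapsedMs = (System.nanoTime() - start) / 1000000;

        System.out.println("size() = " + size + ", took " + elapsedMs + "ms");
    }
}
---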
Thanks!
PS)
The problematic code is this:
---
System.out.println( "### [" + currentStartedTaskCount + "] " + "queue size = " + pipelineThreadPool.getQueue().size() + ", elapse time = " + ( currentTime - startTime ) + "ms ###" );
---
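If I understand the separate counter correctly, the fix on my side is simply to use PipelineThreadPool#getQueueSize() instead of going through the queue (assuming getQueueSize() returns the internally tracked count):
---
System.out.println( "### [" + currentStartedTaskCount + "] " + "queue size = " + pipelineThreadPool.getQueueSize() + ", elapsed time = " + ( currentTime - startTime ) + "ms ###" );
---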
--
Bongjae Chang
----- Original Message -----
From: Bongjae Chang
To: dev@grizzly.dev.java.net
Sent: Wednesday, May 27, 2009 10:15 PM
Subject: Re: About issue #623(PipelineThreadPool)'s performance testing
Hi Alexey,
I agree with you, and [1] is very interesting!
So I will attach my test code tomorrow morning, after investigating and verifying my test again.
Thanks.
--
Bongjae Chang
----- Original Message -----
From: Oleksiy Stashok
To: dev@grizzly.dev.java.net
Sent: Wednesday, May 27, 2009 7:17 PM
Subject: Re: About issue #623(PipelineThreadPool)'s performance testing
Hi,
Jeanfrancois wrote:
>Once applied, I will go ahead and ask internally for performance testing.
Out of curiosity, I recently tried to test PipelineThreadPool's performance.
I simply tested PipelineThreadPool on its own, without Grizzly dependencies.
I found that LinkedTransferQueue had a big effect on performance.
When I changed the LinkedTransferQueue to an ArrayBlockingQueue or another BlockingQueue with a specific capacity, I found that performance improved significantly.
Most of PipelineThreadPool's constructors use a LinkedTransferQueue by default, so I am curious why LinkedTransferQueue was chosen on purpose.
Thank you for the investigation.
IMHO, the results may depend heavily on the environment where the tests run and on the specific scenario, because the developer who made the switch to LTQ also spent some time investigating its performance [1].
Can you please share the testing environment and code?
Thank you.
WBR,
Alexey.
[1]
http://www.nabble.com/2x-improved-instance-caching-throughput-td21460586.html#a21460586
FYI.
Thanks!
--
Bongjae Chang