charlie hunt wrote:
> John ROM wrote:
>> ...[stuff zapped]...
>> I think this is great because grizzly should always try to take
>> advantage of multicore.
>>
>> DynamicPool.start() uses
>> Runtime.getRuntime().availableProcessors()
>> to initialize the pool.
>> Maybe we should use code like this more often in Grizzly in the future?
>>
>>
>
> We should be careful with Runtime.getRuntime().availableProcessors(),
> since CMT (chip multi-threaded) cores will report each hardware thread
> as a virtual processor. For example, on a Sun T2000 with 8 cores and 8
> hardware threads per core,
> Runtime.getRuntime().availableProcessors() returns 64 processors.
> Of those 64 available processors, only 2 hardware threads per core
> can execute on a given clock cycle (there are two pipelines per core).
> So you may end up with a few more threads in the pool than you
> really wanted / intended.
That's actually code copied over from the RubyObjectPool, so I can't
take full credit for it. It did, however, seem like a good idea at the
beginning to run as many object creations in parallel as possible.
That pool only lives during pool startup, so excess threads there
only affect startup performance, and because thread creation/deletion
is relatively fast (at least compared to Ruby startup time), there
shouldn't be much of an issue if 64 threads are started but only 16 of
them can be executing at a time. The main DynamicPool uses a
CachedThreadPool for its object generation and does borrowing/returning
in the client thread, so those 64 threads will die off.
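To make that concrete, here is a minimal sketch of the startup-only pool described above (class and method names are mine, not the actual Grizzly/RubyObjectPool code): the pool is sized from availableProcessors(), used once to create the initial objects in parallel, and then shut down so its threads die off.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a startup-only creation pool, not the real
// DynamicPool code: size from availableProcessors(), create the initial
// objects in parallel, then let the threads exit.
public class StartupPoolSketch {
    public static void createInitialObjects(int initialObjects, Runnable creator)
            throws InterruptedException {
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService startup = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < initialObjects; i++) {
            startup.submit(creator); // each task creates one pooled object
        }
        startup.shutdown(); // accept no new tasks; threads exit when done
        startup.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

Even on a T2000 where 64 threads get started, the cost is bounded by the startup phase, since the whole executor is discarded afterwards.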
One improvement that I was strongly considering, but didn't include
because I was focused on getting the dynamic pooling to work well, was
calling Math.min and starting only as many threads as there are initial
objects when that number is less than the number of processors, since
there isn't any point in starting 4 threads if you only need one
object created at the beginning.
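The Math.min improvement would look roughly like this (a sketch with made-up names, not committed code):

```java
// Sketch of the Math.min idea above: never start more creation threads
// than there are objects to create. Names are hypothetical.
public class StartupThreadCount {
    public static int startupThreads(int initialObjects, int availableProcessors) {
        // e.g. one initial object on a 4-way box still needs only 1 thread
        return Math.min(initialObjects, availableProcessors);
    }
}
```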
One potential problem is that it also uses availableProcessors() to
set the default maximum and minimum pool sizes, so running it on a T2000
with its defaults would mean keeping 64 objects in the pool at minimum.
On the other hand, if you're running it on a T2000 you're probably
expecting it to be processing enough requests that 64 might be an
appropriate number; I'm not sure.
>
> We'll likely see additional chip vendors introducing similar chip
> architectures. So, we might need to do some further "learning" of
> the runtime to initialize the pool.
I'm not sure how to get it to figure out that it actually only has
access to 16 concurrent threads when availableProcessors() returns 64,
though, since the entire idea is that it should know as little as
possible about the actual hardware that it is running on. Is there a
more accurate call that could be used?
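Absent a more accurate call, one fallback (just a sketch, and the property name here is something I made up, not an existing Grizzly option) would be to let deployers override the detected count with a system property, so a T2000 admin could pass in 16 by hand:

```java
// Sketch: allow a manual override of availableProcessors() via a
// hypothetical system property, e.g. -Dgrizzly.pool.size=16 on a T2000.
public class PoolSizeHint {
    public static int effectiveProcessors() {
        // Integer.getInteger returns the property value, or the default
        // (here, the JVM-reported processor count) if the property is unset.
        return Integer.getInteger("grizzly.pool.size",
                Runtime.getRuntime().availableProcessors());
    }
}
```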
>
> However, I really like the idea of dynamic / adaptive thread pool sizing.
In general, I find that making things more adaptive is almost always a
good idea, since intelligent adaptation nearly always performs better
than static defaults. But I'm an AI nerd, so that might be personal
bias talking.
--
Jacob Kessler
>
> charlie ...
>
> ... [other stuff zapped]...
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe_at_grizzly.dev.java.net
> For additional commands, e-mail: dev-help_at_grizzly.dev.java.net
>