Hi,
The nature of I/O, with TCP and its packet-resend logic etc., can make even
small messages take a long time on slow connections.
With today's mobile clients this problem is amplified even further.
Therefore there is a need to do the actual message sending concurrently, up
to a configurable limit (lower than the saturation point of the server's pipe).
I have solved it in my private NotificationHandler:
public void notify(final CometEvent cometEvent,
        final Iterator<CometHandler> iteratorHandlers) {
    if (pipeline == null) {
        // Blocking mode: notify every handler on the calling thread.
        notify0(cometEvent, iteratorHandlers);
    } else {
        // Concurrent mode: hand each handler off to the executor.
        while (iteratorHandlers.hasNext()) {
            addToPipeline(cometEvent, iteratorHandlers.next());
        }
    }
}

private void addToPipeline(final CometEvent cometEvent,
        final CometHandler cometHandler) {
    pipeline.execute(new Runnable() {
        public void run() {
            notify0(cometEvent, cometHandler);
        }
    });
}

public void setBlockingNotification(final boolean blockingNotification) {
    if (pipeline == null) {
        if (!blockingNotification) {
            // Fixed pool of 64 threads, bounded queue of 32k pending notifications.
            pipeline = new ThreadPoolExecutor(64, 64,
                    0L, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(32768, false));
        }
    } else {
        if (blockingNotification) {
            pipeline.shutdown(); // can take a long time
            pipeline = null;
        }
    }
}
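For completeness, the methods above assume a pipeline field and a notify0 helper on the
surrounding handler class; a minimal sketch of the missing declarations (plain
java.util.concurrent, plus whatever Grizzly Comet imports the handler already has for
CometEvent and CometHandler; the volatile modifier is my own assumption so the mode
switch is visible to the I/O threads):

    import java.util.Iterator;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // null = blocking (synchronous) notification,
    // non-null = concurrent sending through the executor.
    private volatile ExecutorService pipeline;

    // notify0(CometEvent, Iterator<CometHandler>) and notify0(CometEvent, CometHandler)
    // perform the actual per-handler delivery, as in the existing handler code.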
Here I use 64 threads and a max queue of 32k, with a plain Runnable instead of the
heavier Grizzly Task object.
Several thousand messages to several thousand users translates into millions of objects,
and suddenly it becomes interesting to keep the per-object overhead down.
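As a rough, made-up example: 3,000 messages each fanned out to 3,000 users is already
9 million notification tasks, so even a few dozen extra bytes per queued object adds up
to hundreds of megabytes of short-lived garbage.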
Perhaps we should offer the same functionality as standard, or at least as a
configurable option, so people don't have to code their own NotificationHandler.
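A rough sketch of what such an option could look like, using plain java.util.concurrent
(the names ConcurrentNotificationConfig, setMaxConcurrentNotifications and setQueueLimit
are just placeholders, not existing Grizzly API):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class ConcurrentNotificationConfig {
        private int maxConcurrentNotifications = 64; // concurrency limit (thread count)
        private int queueLimit = 32768;              // bound on pending notifications

        public void setMaxConcurrentNotifications(int threads) {
            this.maxConcurrentNotifications = threads;
        }

        public void setQueueLimit(int queueLimit) {
            this.queueLimit = queueLimit;
        }

        // Builds the executor the handler would use when non-blocking notification is enabled.
        public ExecutorService createPipeline() {
            return new ThreadPoolExecutor(maxConcurrentNotifications, maxConcurrentNotifications,
                    0L, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(queueLimit, false));
        }
    }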
I can implement it with the standard Grizzly pipeline and its Task in order to fit
normal usage; that Task will add some minor overhead, since it is a more heavyweight
object than an empty Runnable, but is that the only compatible option?
regards
gustav trede