I will add this to the meeting agenda.
I also think a general topic that covers this should be added to the 2.0
list.
To (try to) answer your questions ... as you probably know, Solaris has
supported the POSIX thread APIs since the early Solaris 2.x releases some
10 years ago.
Since NPTL requires a 2.6 Linux kernel, it may be that the JVM does not
currently support NPTL for backward-compatibility reasons? It may also
detect the Linux kernel version at runtime and choose a thread model
accordingly? I'll check with some folks on the HotSpot runtime team.
Solaris 8 and newer use a 1-to-1 mapping of Solaris threads to Java
threads. On Solaris 8 you have to explicitly set
LD_LIBRARY_PATH=/usr/lib/lwp:$LD_LIBRARY_PATH to get the 1-to-1 model,
but it is the default in Solaris 9, Solaris 10, and OpenSolaris. So,
since Solaris 8, Solaris has had POSIX threads API support and a 1-to-1
mapping of OS threads to Java threads.
Rather than supporting the java.net.Socket approach, I would like to
explore the idea of using (in effect) a temporary NIO Selector per
connection along with a non-blocking SocketChannel. You & I have talked
about this a couple of times in the past. ;-) This approach allows
reading (or writing) as much data as is available per call, which can
minimize the number of read/write system calls in cases where PDUs are
small, or where more than one PDU can be read at a time. In contrast,
the traditional java.net.Socket approach blocks until it receives the
number of bytes you expect to read, and typically the number of bytes
you expect to read equals the size of the PDU. So, for small PDUs, you
may make more read system calls than with the other approach.
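To make the idea concrete, here's a rough sketch of the temporary-Selector-per-connection pattern (my own illustration, not an existing Grizzly or MINA API): try a non-blocking read first, and only if no data is available, open a throwaway Selector to wait with a timeout. Class and method names are made up for the example.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class TempSelectorRead {

    // Read whatever bytes are available in one call. If the first
    // non-blocking read returns nothing, wait up to timeoutMillis on a
    // temporary Selector, then try again. Returns bytes read, 0 on
    // timeout, or -1 on EOF.
    static int readWithTempSelector(SocketChannel channel, ByteBuffer buf,
                                    long timeoutMillis) throws IOException {
        int n = channel.read(buf);       // channel must be non-blocking
        if (n != 0) {
            return n;                    // data arrived immediately, or EOF
        }
        Selector tmp = Selector.open();  // temporary, per-connection Selector
        try {
            channel.register(tmp, SelectionKey.OP_READ);
            if (tmp.select(timeoutMillis) == 0) {
                return 0;                // timed out with no data
            }
            return channel.read(buf);    // drain what is now available
        } finally {
            tmp.close();                 // also cancels the registration
        }
    }

    public static void main(String[] args) throws Exception {
        // Loopback demo: one 8-byte write, read back in a single call.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        SocketChannel peer = server.accept();
        peer.configureBlocking(false);

        client.write(ByteBuffer.wrap(
                "PDU1PDU2".getBytes(StandardCharsets.US_ASCII)));

        ByteBuffer buf = ByteBuffer.allocate(1024);
        int n = readWithTempSelector(peer, buf, 1000);
        System.out.println("got data: " + (n > 0));

        client.close();
        peer.close();
        server.close();
    }
}
```

Note that both PDUs written above can come back in a single read() call, which is exactly the system-call saving the fixed-size blocking read can't give you.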
There are a couple of things the NPTL + standard IO approach may be
exploiting. We all know NIO is about as close to bare metal as Java
gets, i.e. there's plenty of opportunity to do things inefficiently. We
know that to use NIO effectively you need to minimize the number of
read() / write() system calls, minimize the underlying system calls
associated with the Selector, and minimize thread context switching.
So, how NIO is used and how it interacts with the thread pool can make
a big difference.
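One common way to cut down on read() calls and Selector wakeups (a general NIO pattern, not anything from the blog post) is to drain the channel once it is readable: keep reading until the channel has no more data or the buffer is full, rather than doing one read per Selector wakeup. A minimal sketch using an in-process Pipe so it runs standalone:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.ReadableByteChannel;

public class DrainRead {

    // Keep reading until the channel is drained or the buffer is full,
    // so one readiness notification costs as few syscalls as possible.
    static int drain(ReadableByteChannel ch, ByteBuffer buf) throws IOException {
        int total = 0;
        while (buf.hasRemaining()) {
            int n = ch.read(buf);
            if (n <= 0) {        // 0 = no more data now, -1 = EOF
                break;
            }
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);

        // Simulate two small 5-byte PDUs arriving before we read.
        pipe.sink().write(ByteBuffer.wrap("HELLOWORLD".getBytes("US-ASCII")));
        Thread.sleep(100);       // let the data become readable

        ByteBuffer buf = ByteBuffer.allocate(64);
        int total = drain(pipe.source(), buf);
        System.out.println("drained " + total + " bytes");

        pipe.sink().close();
        pipe.source().close();
    }
}
```

The same drain loop is what makes the temporary-Selector idea pay off: once select() says the channel is readable, you pull everything that's there in as few calls as possible.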
There's also the possibility that testing was done exclusively on
Linux. So we don't have any information on how Solaris compares, with
and without NIO.
I wish we could see the implementation, or at least a design, of the NPTL
+ standard IO program the blogger implemented. We know what we observed
when comparing an async web application ported on top of Grizzly with
MINA. I think there are quite a few open and unanswered questions.
charlie ...
Ken Cavanaugh wrote:
> Let's put this on the agenda too. We still use the
> thread-per-connection model
> for the SocketFactory case in CORBA: should we again consider
> supporting this in
> Grizzly?
> Does anyone know how Linux + NPTL compares with Solaris threads, and
> especially
> how this is used in the JVM?
>
> Thanks,
>
> Ken.
>
>
> -------- Original Message --------
> Subject: [Fwd: NIO vs NPTL+standard IO]
> Date: Tue, 19 Feb 2008 11:50:59 -0500
> From: Jeanfrancois Arcand <Jeanfrancois.Arcand_at_Sun.COM>
> Reply-To: dev_at_grizzly.dev.java.net
> To: dev_at_grizzly.dev.java.net
>
>
>
> -------- Original Message --------
> Subject: NIO vs NPTL+standard IO
> Date: Tue, 19 Feb 2008 14:07:47 +0100
> From: Stefano Bagnara <apache_at_bago.org>
> Reply-To: dev_at_mina.apache.org
> To: dev_at_mina.apache.org
>
> Today I found another blog post on the "usual" topic:
> http://mailinator.blogspot.com/2008/02/kill-myth-please-nio-is-not-faster-than.html
>
> Is this FUD or a new trend that MINA should take care of?
>
> Is there any plan to support standard IO as a transport for MINA based
> applications?
>
> It would be useful to understand how a MINA based application could
> "switch back" to multithreaded standard IO and result in better
> throughput on modern JVM/OS.
>
> Thank you,
> Stefano
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe_at_grizzly.dev.java.net
> For additional commands, e-mail: dev-help_at_grizzly.dev.java.net
>
>