users@glassfish.java.net

Re: Glassfish 2.1 Solaris and /dev/poll

From: Oleksiy Stashok <Oleksiy.Stashok_at_Sun.COM>
Date: Tue, 14 Sep 2010 17:27:53 +0200

Hi,

this is expected behavior: Glassfish/Grizzly uses so-called temporary
Selector reads/writes to simulate blocking I/O over non-blocking
channels.
We create a pool of temporary Selectors whose size equals the thread
pool max size.

So I guess the number of /dev/poll handles you see equals the thread
pool max size plus the actual NIO acceptor Selectors. This value should
not grow and the handles will be released on GF shutdown.
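The temporary-Selector technique can be sketched roughly as below. This is a minimal illustration, not Grizzly's actual code; the blockingRead method and the pipe-based demo are made up for the example. The key point is that each Selector.open() is what creates a /dev/poll handle on Solaris, which is why the handle count tracks the Selector pool size.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.SelectableChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class TempSelectorRead {

    // Hypothetical sketch: block until 'channel' is readable by registering
    // it with a temporary Selector, then perform the read. In Grizzly the
    // Selector would be borrowed from a pool sized to the thread pool's
    // max size rather than opened and closed per call.
    static <T extends SelectableChannel & ReadableByteChannel>
    int blockingRead(T channel, ByteBuffer buffer, long timeoutMillis)
            throws IOException {
        Selector tmpSelector = Selector.open(); // on Solaris this opens /dev/poll
        try {
            SelectionKey key = channel.register(tmpSelector, SelectionKey.OP_READ);
            if (tmpSelector.select(timeoutMillis) == 0) {
                return 0; // timed out, nothing readable
            }
            key.cancel();
            return channel.read(buffer);
        } finally {
            tmpSelector.close(); // a pooled Selector would be returned instead
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo with a Pipe instead of a real SocketChannel.
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false); // non-blocking, as in Grizzly
        pipe.sink().write(ByteBuffer.wrap("ping".getBytes()));

        ByteBuffer buf = ByteBuffer.allocate(16);
        int n = blockingRead(pipe.source(), buf, 5000);
        System.out.println("read " + n + " bytes"); // prints "read 4 bytes"
    }
}
```

With a pool of such Selectors, a steady count of open /dev/poll handles (thread pool max size plus acceptors) is exactly what pfiles would report.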

WBR,
Alexey.

PS: thanks to NIO folks for clarification.

On Sep 13, 2010, at 12:33, glassfish_at_javadesktop.org wrote:
> While tuning a new webapp from our developers I discovered something
> strange:
>
> running pfiles (an lsof equivalent) on the Glassfish 2.1 process, on a
> Solaris 10 Sparc with our apps deployed in it, shows 264 file handles
> to /dev/poll
>
> 432: S_IFCHR mode:0000 dev:325,4 ino:8687 uid:0 gid:0 rdev:138,134
> O_RDWR|O_LARGEFILE
> /dev/poll
>
> This sounds really strange/buggy: /dev/poll (man -s 7d poll) is the
> optimized version of poll() and should be opened once with a struct
> of files/sockets to monitor, say once per pool, not once per socket.
>
> I've asked our developers and they are not using NIO at all in these
> apps.
>
> I could imagine it comes from the performance tuning we applied to
> Glassfish, but what I see looks bad to me.
> Does it come from Glassfish, and is it "normal"?
>
> Thanks.
> [Message sent by forum member 'akhenakh']
>
> http://forums.java.net/jive/thread.jspa?messageID=482504
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe_at_glassfish.dev.java.net
> For additional commands, e-mail: users-help_at_glassfish.dev.java.net
>