Minoru Nitta wrote:
> Hi,
>
>
> I have a question about OutboundConnectionCacheBlockingImpl.
> I think it uses too many synchronized methods, which may cause
> performance problems.
>
> I don't have enough knowledge about this class (it's quite complicated),
> but it seems to me that most of the synchronized blocks would not be
> needed if the two internally used HashMaps were changed to ConcurrentHashMap.
>
I originally wrote a non-blocking version, but it's not currently in
Grizzly.
I'm not sure what the status of that is: I wrote the connection cache, and
Aleksey integrated it into Grizzly.
The blocking version has the advantage of being simpler, and more likely
to be correct.
The non-blocking version is more complicated, and much less tested at this
point.
Please also remember that it's not enough to look at some code and say,
"I think this should be faster." First you need a test that proves that
the code is too slow. Only then should the optimization be considered.
However, I did anticipate that such optimization might be needed, so I tried
to design the ConnectionCaches in such a way that it could be added later.
The main place where I expect that improvement will be required (but again,
you need to measure it) is in get, on the call to cinfo.createConnection()
in tryNewConnection.
This call may block for a significant time, and during that time no other
calls are permitted in the blocking implementation. This can be fixed with
a state machine, without going to a fully non-blocking implementation.
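To illustrate the state-machine idea, here is a minimal sketch (not the actual Grizzly/CORBA code; all names such as PendingConnectionCache and ConnectionFactory are hypothetical). The point is that a thread creating a connection first publishes a PENDING entry under the lock, then performs the slow create call with no lock held, so only callers waiting for that same key block:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: record a PENDING entry under the lock, drop the lock,
// create the connection outside the lock, then publish the result. Callers
// for other keys are never blocked by a slow createConnection() call.
public class PendingConnectionCache<K, C> {

    private enum State { PENDING, READY }

    private static final class Entry<C> {
        State state = State.PENDING;
        C connection;            // non-null once state == READY
    }

    public interface ConnectionFactory<K, C> {
        C create(K key);         // may block for a long time
    }

    private final Map<K, Entry<C>> cache = new HashMap<>();
    private final ConnectionFactory<K, C> factory;

    public PendingConnectionCache(ConnectionFactory<K, C> factory) {
        this.factory = factory;
    }

    public C get(K key) throws InterruptedException {
        Entry<C> entry;
        boolean creator = false;
        synchronized (this) {                 // short critical section only
            entry = cache.get(key);
            if (entry == null) {
                entry = new Entry<>();
                cache.put(key, entry);
                creator = true;               // this thread will create it
            }
        }
        if (creator) {
            C conn = factory.create(key);     // slow call, no lock held
            synchronized (entry) {
                entry.connection = conn;
                entry.state = State.READY;
                entry.notifyAll();            // wake waiters for this key only
            }
            return conn;
        }
        synchronized (entry) {
            while (entry.state == State.PENDING) {
                entry.wait();                 // block only on this key
            }
            return entry.connection;
        }
    }
}
```

This keeps the simple blocking style (plain monitors, no lock-free code) while removing the one place where a long block stalls unrelated callers.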
> I also do not understand the way ConnectionCacheBlockingBase,
> a super class of OutboundConnectionCacheBlockingImpl, uses
> synchronized blocks. It uses synchronized because it must serialize
> access to the totalBusy and totalIdle variables. However,
> OutboundConnectionCacheBlockingImpl changes both variables
> directly, since they are protected. What is the intended
> behavior here?
>
>
There was also a ConnectionCacheNonBlockingBase that used AtomicInteger
for totalBusy and totalIdle, which was used in the non-blocking case.
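The difference between the two base classes can be sketched like this (illustrative only; the field names totalBusy/totalIdle come from the discussion, everything else is made up). In the blocking style, the protected int fields are only safe if every mutation happens while holding the cache lock; in the non-blocking style, AtomicInteger makes each update safe without a lock:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Blocking style: plain ints, protected by the cache's monitor. Subclasses
// that touch the protected fields directly must do so while holding the
// same lock, or updates can be lost.
class BlockingCounters {
    protected int totalBusy;
    protected int totalIdle;

    synchronized void connectionBusy() { totalBusy++; totalIdle--; }
    synchronized int totalBusy()       { return totalBusy; }
}

// Non-blocking style: each counter is an AtomicInteger, so every update is
// an atomic read-modify-write and no lock is needed.
class NonBlockingCounters {
    protected final AtomicInteger totalBusy = new AtomicInteger();
    protected final AtomicInteger totalIdle = new AtomicInteger();

    void connectionBusy() {
        totalBusy.incrementAndGet();
        totalIdle.decrementAndGet();
    }
    int totalBusy() { return totalBusy.get(); }
}
```

This is also why the protected-field design is fragile: it relies on every subclass mutation happening under the right lock, which the compiler cannot check.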
If you want to see those versions, you can find a slightly different version
(different packaging at least) in the GlassFish v3 CORBA workspace at:
http://kenai.com/projects/gf-corba-v3-mirror
You'll need Mercurial (hg) to access the workspace at URL:
https://kenai.com/hg/gf-corba-v3-mirror~staging
You'll find the CORBA versions in the directory
orblib/src/share/classes/com/sun/corba/se/impl/orbutil/transport
The ConcurrentQueue implementation is in the same general area (follow the
package names). I think LMSQueue is borrowed more or less from some similar
code in the JDK, but I can't remember exactly at this point.
Lock-free algorithms are deeply tricky, and I have not worked on this
code at all recently (it's been 2.5 years, I think). The main problem is
when you need to update two or more references concurrently. One is easy
to handle with compare-and-swap trickery, but 2 makes the code extremely
complicated, and 3 or more quickly turns into a research problem (look
for lock-free implementations of balanced trees, and you'll quickly find
out how complicated things can get).
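For the "one reference is easy" case, the textbook example is a Treiber stack: a single AtomicReference to the head, updated with compareAndSet in a retry loop. (This is just an illustration of the technique, not code from the CORBA workspace.)

```java
import java.util.concurrent.atomic.AtomicReference;

// Treiber stack: lock-free because only ONE shared reference (the head)
// ever needs to change, so a single compareAndSet retry loop suffices.
public class TreiberStack<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    public void push(T value) {
        Node<T> node = new Node<>(value);
        Node<T> old;
        do {
            old = head.get();
            node.next = old;                       // link to current head
        } while (!head.compareAndSet(old, node));  // retry if head changed
    }

    public T pop() {
        Node<T> old;
        Node<T> next;
        do {
            old = head.get();
            if (old == null) return null;          // empty stack
            next = old.next;
        } while (!head.compareAndSet(old, next));
        return old.value;
    }
}
```

As soon as two references must change together (say, head and tail of a deque), a single CAS no longer covers the update atomically, which is exactly where the complexity explosion described above begins.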
Ken.