dev@jsr311.java.net

Re: JSR311: Response isn't adequate

From: Bill Burke <bburke_at_redhat.com>
Date: Tue, 11 Mar 2008 12:09:57 -0400

Jerome Louvel wrote:
> Hi Bill,
>
>> And I'm saying that your model buys you nothing but the overhead of
>> context switching.
>
> I just want to exploit the potential of NIO. In some cases, users prefer to
> gain scalability by limiting the number of threads, even if it increases the
> latency (due to context switching). I'm not saying that this is desirable in
> all situations.
>
>> The email thread you are referring to above cannot be
>> implemented in the current JAX-RS model.
>
> Could you explain why? I think it is feasible.
>

There's no support for Comet-like callbacks, especially for input. For
output, there's no way to chunk fairly.


>> In your scenarios above, no performance benefit
>> can be realized unless the application developer is aware of
>> the issue and programming specifically towards it.
>
> I think it can be supported if the entity is returned as an InputStream for
> example.
>

Yeah, I saw Marc's example and you're right. Modeling the entity as an
InputStream would allow output chunking. Thanks. Cool! :) But there's
still the problem of input chunking, no?
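
For what it's worth, here's roughly the kind of resource method I'm
picturing for the output side (a sketch only; the annotation names are the
ones I expect in the final API, and FeedResource is just an illustration):

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;

    @Path("/feed")
    public class FeedResource {

        @GET
        public InputStream getFeed() {
            // Returning the entity as a stream lets the container pull and
            // write the response in chunks at its own pace, instead of this
            // method blocking on a slow client.
            byte[] payload = "...a large feed document...".getBytes();
            return new ByteArrayInputStream(payload);
        }
    }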


>> For slow input, you have to
>> read in chunks and continually return control back to the container.
>
> True.
>
>> For slow output, you need to chunk fairly between your output
>> requests, otherwise you have a different starvation problem
>> with high load/slow networks.
>
> True.
>
>> Are you familiar with Erlang and the YAWS benchmark? They can solve
>> this chunking problem easily, seamlessly, and without developer
>> knowledge because they use green threads on top of OS ones. Java
>> doesn't have this luxury.
>
> I'm not very familiar with Erlang, but Java can also have support for green
> threads. BEA's JRockit supports something similar, called thin threads.
> http://dev2dev.bea.com/pub/a/2004/02/jrockit.html
>

Read up on the YAWS benchmark. It's interesting to read the analysis from
both the proponents and the opponents of the benchmark.

> You still have the call stack memory cost with many threads, which you
> can't avoid.
>

I just have a feeling (hope) that OSes and VMs will advance to the point
where blocking I/O and thread-per-request will just scale massively.
Especially with 64-bit machines and ever-increasing RAM, maybe it just
doesn't matter?


>> Even Jetty Continuations force you to program
>> in a certain way with their horrible use of exception
>> throwing to change program control flow.
>
> I'm looking forward to seeing what solution the Servlet 3.0 EG comes up with,
> but I think that our Restlet approach, based on our Representation class, is
> not horrible at all :)
>

I thought Jetty Continuations threw an exception on suspend? That's
what I thought was horrible: using exceptions to do control flow.
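
To be concrete, the pattern I'm objecting to looks roughly like this (from
memory of the Jetty 6 API, so the exact class and method names may be a bit
off):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.mortbay.util.ajax.Continuation;
    import org.mortbay.util.ajax.ContinuationSupport;

    public class EventServlet extends HttpServlet {

        protected void doGet(HttpServletRequest request,
                             HttpServletResponse response)
                throws ServletException, IOException {
            Continuation continuation =
                    ContinuationSupport.getContinuation(request, null);
            Object event = pollForEvent(); // placeholder for an app event queue
            if (event == null) {
                // Under Jetty, suspend() does not return normally: it throws
                // a RetryRequest runtime exception to unwind the stack, and
                // the container re-invokes doGet() when the continuation is
                // resumed or the timeout expires. Exceptions as control flow.
                continuation.suspend(10000);
            }
            response.getWriter().println("event: " + event);
        }

        private Object pollForEvent() {
            return null; // a real app would check its event source here
        }
    }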


>> There is no imposition being done. If an OutputStream is injected, then
>> the request input and output are handled by one thread. If the app
>> developer doesn't inject an OutputStream, they don't have to be.
>
> The problem is that you ask the end user to understand this complex problem
> in order to make a conscious decision. It forces containers to support both
> models, significantly complicating the HTTP/IO stack, and prevents them
> from uniformly managing threads, connections, and sockets, as it now depends
> on whether a developer decided to use the OutputStream. And how would the
> container know that in advance? I maintain that this imposes the
> thread-per-request model.
>

How would the container know in advance? This information is available
at deployment time. If a method returns void and an OutputStream is
injected, then the container knows that the request thread is handling
the output.
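
Concretely, I'm thinking of something like the two signatures below. The
@Context OutputStream injection is the proposal we're arguing about, not
anything in the current draft, and the paths and names are just
illustrative:

    import java.io.IOException;
    import java.io.OutputStream;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.core.Context;

    @Path("/reports")
    public class ReportResource {

        // Returns an entity: the container owns the output and can use
        // whatever I/O model it likes (blocking, NIO, ...) to write it.
        @GET
        @Path("summary")
        public String getSummary() {
            return "summary report";
        }

        // Returns void and asks for the raw OutputStream (the proposed
        // injection): at deployment time the container can see that this
        // method writes the response itself, so the request thread handles
        // the output.
        @GET
        @Path("full")
        public void writeFull(@Context OutputStream out) throws IOException {
            out.write("full report".getBytes());
        }
    }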

-- 
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com