On Fri, Jul 12, 2013 at 10:48:24AM -0700, Robert DiFalco wrote:
> The clients will be mobile apps; the first one will be iOS. There will be
> millions of requests to the servers per day. Just from a scaling out
> perspective it seems better to not leave clients connected to the server
> during long running (> 1sec) operations. Those sockets are resources too,
> not just the threads that process them. Are you thinking that is not an
> issue? I didn't think the C10K problem was just a number of threads issue.
> And I've always heard it is bad practice to have a REST request take longer
> than 500-1000 milliseconds.
C10K is about 10,000 *active* connections at once, not 10,000 *idle* ones.
HTTP/1.1 clients keep per-server connection pools precisely to avoid the cost
of opening new TCP connections, each of which has to cross the Internet and
start from TCP's small initial congestion window (slow start), which makes it
slow until it ramps up.
You are right about keeping REST requests short, but if the call returns
without blocking and the socket is freed for other uses, most smart HTTP
clients will pool it and reuse it later for further requests to the same
server. A typical HTTP/1.1 keep-alive timeout is 30-60 seconds of idle time
before the client gets rid of a connection, assuming the server allows it to
stay open.
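To make the reuse concrete, here is a rough sketch using Apache HttpClient's
pooling connection manager (4.3-style API). The URL and the one-second poll
interval are made up for the example:

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class PollingClient {
    public static void main(String[] args) throws Exception {
        // One pooled connection manager for the whole app: connections to the
        // same host are kept alive and handed back out for later requests.
        PoolingHttpClientConnectionManager pool =
                new PoolingHttpClientConnectionManager();
        pool.setDefaultMaxPerRoute(10);

        CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(pool)
                .build();

        // Two polls, one second apart.  Because each response is fully
        // consumed, its connection goes back to the pool and the next poll
        // reuses it instead of paying TCP setup and slow start again.
        for (int i = 0; i < 2; i++) {
            CloseableHttpResponse response = client.execute(
                    new HttpGet("http://example.com/jobs/42")); // hypothetical URL
            try {
                EntityUtils.consume(response.getEntity());      // release to pool
                System.out.println(response.getStatusLine());
            } finally {
                response.close();
            }
            Thread.sleep(1000);
        }
        client.close();
    }
}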
> Unless I'm missing something it seems like the AsyncResponse approach does
> not scale very well. I have to keep the connection intact for the length
> of the operation AND I need to have some thread either blocked or polling
> to know when to resume the response. So with REDIS that would mean a thread
> either blocked on BLPOP waiting for a response from the worker process or a
> thread polling to see if the result was ready. No?
I think it makes the most sense to send back a try-again HTTP code (e.g.
202 Accepted), like you did. The next poll can then come later, on the same
socket or on a different one from the client's pool, with no ill effect. If
AsyncResponse doesn't allow this, then it should probably be improved, or
another kind of async support ought to be added to allow the right behavior.
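For what it's worth, a minimal sketch of that try-again pattern with plain
JAX-RS 2.0 resources might look like the following. The class names and the
in-memory result map are stand-ins (in your setup the poll would do a
non-blocking Redis read of whatever key the worker fills in), so treat it as
an illustration rather than a drop-in:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriBuilder;

@Path("/jobs")
public class JobResource {

    // Stand-in for the real result store.  In practice this would be a
    // non-blocking Redis GET on a key the worker writes when it finishes.
    private static final Map<String, String> RESULTS = new ConcurrentHashMap<String, String>();
    private static final ExecutorService WORKERS = Executors.newFixedThreadPool(4);

    @POST
    public Response submit(final String payload) {
        final String id = UUID.randomUUID().toString();
        // Hand the slow work to a worker and return at once; the request
        // thread and the socket are both free again immediately.
        WORKERS.submit(new Runnable() {
            public void run() {
                RESULTS.put(id, doSlowWork(payload));
            }
        });
        return Response.accepted()                                  // 202 Accepted
                .location(UriBuilder.fromPath("/jobs/{id}").build(id))
                .build();
    }

    @GET
    @Path("{id}")
    public Response poll(@PathParam("id") String id) {
        String result = RESULTS.get(id);
        if (result == null) {
            // Not done yet: tell the client to try again shortly.
            return Response.status(Response.Status.ACCEPTED)
                    .header("Retry-After", "1")
                    .build();
        }
        return Response.ok(result).build();
    }

    private static String doSlowWork(String payload) {
        try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
        return "done: " + payload;
    }
}

The POST hands the work off and answers immediately with 202 plus a Location
to poll; each poll is a short request on a pooled connection, and nothing on
the server blocks waiting for the worker.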
Matthew.