Another idea is to add a robust caching layer, so that you don't even make a
request to the database when the data you're looking for is already cached. Of
course, this also depends on your app and how your requests are made.
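
As a rough sketch, a read-through cache can be as simple as the following
(the class and DAO names here are just placeholders; Hibernate's
second-level cache is another way to get much of the same effect):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class CustomerCache {
        private final ConcurrentMap<Long, Customer> cache = new ConcurrentHashMap<>();
        private final CustomerDao dao;   // hypothetical DAO

        public CustomerCache(CustomerDao dao) { this.dao = dao; }

        // Only hit the database on a cache miss.
        public Customer find(long id) {
            return cache.computeIfAbsent(id, dao::load);
        }

        // Keep the cache consistent after writes.
        public void save(Customer c) {
            dao.save(c);
            cache.put(c.getId(), c);
        }
    }
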
On Mon, Feb 3, 2014 at 1:47 PM, Steven Siebert <smsiebe_at_gmail.com> wrote:
> I'm not sure what your web app does...but perhaps a push model (i.e., WebSockets)
> could be used to reduce the number of GET requests...
>
> If you're doing a lot of polling to get updates, you could instead have a
> single background thread receiving updates and pushing them out to all
> clients, rather than having each client poll. Again, depending on your
> app, it may even eliminate the need for a database request in typical cases
> (i.e., after a commit, the broadcaster is notified directly and sends out the
> updates -- no need to read-after-write).
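>
> A rough sketch with the standard javax.websocket API (the endpoint path
> and payload are just placeholders, not something specific to your app):
>
>     import java.io.IOException;
>     import java.util.Collections;
>     import java.util.Set;
>     import java.util.concurrent.ConcurrentHashMap;
>     import javax.websocket.OnClose;
>     import javax.websocket.OnOpen;
>     import javax.websocket.Session;
>     import javax.websocket.server.ServerEndpoint;
>
>     @ServerEndpoint("/updates")
>     public class UpdateEndpoint {
>         private static final Set<Session> SESSIONS =
>                 Collections.newSetFromMap(new ConcurrentHashMap<Session, Boolean>());
>
>         @OnOpen
>         public void onOpen(Session s)  { SESSIONS.add(s); }
>
>         @OnClose
>         public void onClose(Session s) { SESSIONS.remove(s); }
>
>         // Call this from the service layer right after a successful commit,
>         // so connected clients get the change pushed instead of polling for it.
>         public static void broadcast(String json) {
>             for (Session s : SESSIONS) {
>                 if (s.isOpen()) {
>                     try {
>                         s.getBasicRemote().sendText(json);
>                     } catch (IOException e) {
>                         // dead client; it will be dropped when @OnClose fires
>                     }
>                 }
>             }
>         }
>     }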
>
>
> On Mon, Feb 3, 2014 at 10:20 AM, Blake McBride <blake_at_arahant.com> wrote:
>
>> Kevin,
>>
>> Thank you very much for your response. By simultaneous users I meant
>> concurrent requests. I am aware of, and have implemented, the approach you
>> suggested.
>>
>> I suppose my real question, which I had difficulty articulating, was: is
>> there a way for me to respond to more simultaneous requests than I have
>> allowable database connections? I now know the answer is no.
>> In a broader sense, I am wondering what I need to do
>> to maximize the number of simultaneous requests. I've figured out I
>> need to verify:
>>
>> Max database connections
>> Max database connection pool size
>> Max http connection threads
>>
>> Of course, I need the hardware to back it up.
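>>
>> In concrete terms, I believe those settings map to something like the
>> following (the dotted names vary a bit by GlassFish version, so treat
>> these as approximate and verify them with "asadmin get"):
>>
>>     # PostgreSQL -- postgresql.conf
>>     max_connections = 100
>>
>>     # GlassFish JDBC connection pool size (pool name is illustrative)
>>     asadmin set resources.jdbc-connection-pool.MyPool.max-pool-size=30
>>
>>     # GlassFish HTTP worker threads
>>     asadmin set configs.config.server-config.thread-pools.thread-pool.http-thread-pool.max-thread-pool-size=100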
>>
>> If anyone has anything to add to this list, I'd surely appreciate it!
>>
>> Thanks.
>>
>> Blake
>>
>>
>> On Sun, Feb 2, 2014 at 8:08 PM, Kevin Schmidt <kevin.schmidt_at_nextgate.com> wrote:
>>
>>> Blake,
>>>
>>> When you say simultaneous users, do you mean all logged in to the
>>> application at the same time? Or that many concurrent requests to the
>>> application? I assume you mean all logged in to the application at the
>>> same time, which means only some subset of these would actually be issuing
>>> a request at any given moment. Depending on how interactive the app is
>>> and how quickly users go from page to page, a given user may be causing
>>> requests less than 10% of the time, so on average at most about 30 of your
>>> 300 users (10% of 300) would be issuing requests simultaneously.
>>>
>>> Given this, in general you don't want each user to have a database
>>> connection open during their entire session. This is wasteful in that it
>>> is holding connections open that aren't being used and also means that you
>>> have to ensure users log out or sessions time out in order to close
>>> connections and return them to the pool.
>>>
>>> What you want to do is architect your database access so that a
>>> connection is opened (retrieved from the pool) just before a query is
>>> about to be issued, used for that query, and then closed (returned to the
>>> pool) as soon as it is no longer actively needed. By doing this, you
>>> can limit the required connections to perhaps 30 or fewer and set your pool
>>> size accordingly.
>>>
>>> Kevin
>>>
>>>
>>>
>>> From: Blake McBride <blake_at_arahant.com>
>>> Reply-To: "users_at_glassfish.java.net" <users_at_glassfish.java.net>
>>> Date: Sunday, February 2, 2014 5:37 PM
>>> To: "users_at_glassfish.java.net" <users_at_glassfish.java.net>
>>> Subject: Scaling up
>>>
>>> Greetings,
>>>
>>> I have a GWT application (written in Java) that I host on GlassFish that
>>> uses Hibernate & PostgreSQL, all on a Linux box. I don't know much about
>>> scaling the application or deployment to support a lot of simultaneous
>>> users, so I thought I would reach out to the GlassFish community. For now, I
>>> have everything running on a single machine, so at this point I am only
>>> looking to support as many users on that single machine as possible. Just
>>> to put a number on it, let's say I want to support
>>> 300 simultaneous users.
>>>
>>> I understand that each connection spawns a new Java (native) thread.
>>> That shouldn't be a problem: 300 threads on top of the ones used by
>>> GlassFish and the OS should be manageable. Am I wrong?
>>>
>>> What scares me is the database connections. My understanding is that
>>> each client connection creates its own database connection. I understand there
>>> is pooling of these going on, but I doubt the pool will handle 300
>>> connections. I think this is my potential problem. Am I wrong? What
>>> possible solutions do I have?
>>>
>>> I think my _only_ potential issues are RAM, CPU horsepower, number of
>>> threads, and number of database connections. My opinion is that I should
>>> be okay regarding the RAM, CPU, and thread count. Is there anything else I
>>> should be worried about?
>>>
>>> Ultimately I may scale to more machines if needed, but that will be
>>> another matter.
>>>
>>> I really appreciate any help you can offer.
>>>
>>> Thanks.
>>>
>>> Blake McBride
>>>
>>>
>>
>