users@glassfish.java.net

Re: Limiting amount of Data JPA pulls from Database

From: <glassfish_at_javadesktop.org>
Date: Fri, 09 Nov 2007 23:26:23 PST

Yeah, this is kind of a "Doc, it hurts when I..." thing.

Simply put, don't suck 1M rows into the heap, or get more heap.

There's basically no reason for (most) applications to load, and cache, that many rows in one gulp.

The only way JPA would do that is if you:

a) Specifically told it to: select o from MyTable o, where MyTable has a gazillion rows, or

b) You have a parent object with a OneToMany relationship to a table with a huge number of rows, and then you access the parent's list, triggering JPA to load the entire collection.

If a) is the case, don't do that. Bring in chunks of data, either through careful filtering, or, as mentioned, setMaxResults and setFirstResult.
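Something along these lines, as a rough sketch only. It assumes an entity named MyTable with an id field and an EntityManager called em that you already have from somewhere; the page size is arbitrary:

    import javax.persistence.EntityManager;
    import javax.persistence.TypedQuery;
    import java.util.List;

    public class MyTablePager {

        private static final int PAGE_SIZE = 100; // arbitrary chunk size

        private final EntityManager em;

        public MyTablePager(EntityManager em) {
            this.em = em;
        }

        // Fetch one page of rows instead of the whole table in one gulp.
        public List<MyTable> fetchPage(int pageNumber) {
            TypedQuery<MyTable> q = em.createQuery(
                    "select o from MyTable o order by o.id", MyTable.class);
            q.setFirstResult(pageNumber * PAGE_SIZE); // skip earlier pages
            q.setMaxResults(PAGE_SIZE);               // cap this page's size
            return q.getResultList();
        }
    }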

If b) is the case, then, well, don't do that either. If you have that kind of structure, having JPA configured to lazily load such a huge collection is like running with scissors. You will trip sometime, and you will fall.

So, don't have JPA manage that relationship, but handle it yourself.
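In practice that means dropping the mapped collection on the parent and querying the children yourself, in chunks. A sketch of the idea, assuming hypothetical Parent and Child entities (with Child holding a parent reference) and an EntityManager em, none of which come from the original post:

    import javax.persistence.EntityManager;
    import javax.persistence.TypedQuery;
    import java.util.List;

    public class ChildRepository {

        private final EntityManager em;

        public ChildRepository(EntityManager em) {
            this.em = em;
        }

        // Load only the slice of children you actually need for this parent,
        // instead of letting a OneToMany pull in the whole collection.
        public List<Child> childrenOf(Parent parent, int first, int max) {
            TypedQuery<Child> q = em.createQuery(
                    "select c from Child c where c.parent = :parent order by c.id",
                    Child.class);
            q.setParameter("parent", parent);
            q.setFirstResult(first);
            q.setMaxResults(max);
            return q.getResultList();
        }
    }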

Through careful use of the Value List Pattern and some clever caching, you can easily manage tables with millions of rows without obliterating your memory. But slurping all of the rows into RAM really isn't the best option here.
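One way to read the Value List Pattern here: query only the primary keys up front (cheap to hold), then materialize full rows on demand as they're paged through. Again just a sketch, with MyTable, its Long id, and em all assumed rather than taken from the post:

    import javax.persistence.EntityManager;
    import java.util.List;

    public class MyTableValueList {

        private final EntityManager em;
        private final List<Long> ids; // lightweight key list, not full entities

        public MyTableValueList(EntityManager em) {
            this.em = em;
            this.ids = em.createQuery(
                    "select o.id from MyTable o order by o.id", Long.class)
                    .getResultList();
        }

        public int size() {
            return ids.size();
        }

        // Load a single row only when it is actually needed.
        public MyTable get(int index) {
            return em.find(MyTable.class, ids.get(index));
        }
    }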
[Message sent by forum member 'whartung' (whartung)]

http://forums.java.net/jive/thread.jspa?messageID=244863