Oracle Berkeley DB Java Edition 11g R2 Change Log

Library Version, Release 5.0.73

Log File On-Disk Format Changes:

JE 5.0.73 has moved to on-disk file format 8.

The change is forward compatible in that JE files created with release 4.1 and earlier can be read when opened with JE 5.0.73. The change is not backward compatible in that files created with JE 5.0 cannot be read by earlier releases. Note that if an existing environment is opened read/write, a new log file is written by JE 5.0 and the environment can no longer be read by earlier releases.

There are two important notes about the file format change.

  1. The file format change enabled significant improvements in operation performance, memory and disk footprint, and concurrency of databases with duplicate keys. Due to these changes, an upgrade utility must be run before opening an environment with this release, if the environment was created using JE 4.1 or earlier. See the Upgrade Procedure below for more information.
  2. An application which uses JE replication may not upgrade directly from JE 4.0 to JE 5.0. Instead, the upgrade must be done from JE 4.0 to JE 4.1 and then to JE 5.0. Applications already at JE 4.1 are not affected. Upgrade guidance can be found in the new chapter, "Upgrading a JE Replication Group", in the "Getting Started with BDB JE High Availability" guide.

Upgrade Procedure

Due to the format changes in JE 5, a special utility program must be run for an environment created with JE 4.1 or earlier, prior to opening the environment with JE 5.0 or later. One of two utility programs must be used; both are part of the JE 4.1 release package, and JE 4.1.20 or a later JE 4.1 release is required. If you are currently running a release earlier than JE 4.1.20, you must download the latest JE 4.1 release package in order to run these utilities.

The steps for upgrading are as follows.

  1. Stop the application using BDB JE.
  2. Run the DbPreUpgrade_4_1 or DbRepPreUpgrade_4_1 utility. If you are using a regular non-replicated Environment:
        java -jar je-4.1.20.jar DbPreUpgrade_4_1 -h <dir>
    If you are using a JE ReplicatedEnvironment:
        java -jar je-4.1.20.jar DbRepPreUpgrade_4_1
             -h <dir>
             -groupName <group name>
             -nodeName <node name>
             -nodeHostPort <host:port>
  3. Finally, start the application using the current JE 5.0 (or later) release of BDB JE.

The second step -- running the utility program -- does not perform data conversion. This step simply performs a special checkpoint to prepare the environment for upgrade. It should take no longer than an ordinary startup and shutdown.

During the last step -- when the application opens the JE environment using the current release (JE 5 or later) -- all databases configured for duplicates will automatically be converted before the Environment or ReplicatedEnvironment constructor returns. Note that a database might be explicitly configured for duplicates using DatabaseConfig.setSortedDuplicates(true), or implicitly configured for duplicates by using a DPL MANY_TO_XXX relationship (Relationship.MANY_TO_ONE or Relationship.MANY_TO_MANY).

The duplicate database conversion only rewrites internal nodes in the Btree, not leaf nodes. In a test with a 500 MB cache, conversion of a 10 million record data set (8 byte key and data) took between 1.5 and 6.5 minutes, depending on number of duplicates per key. The high end of this range is when 10 duplicates per key were used; the low end is with 1 million duplicates per key.

To make the duplicate database conversion predictable during deployment, users should measure the conversion time on a non-production system before upgrading a deployed system. When duplicates are converted, the Btree internal nodes are preloaded into the JE cache. A new configuration option, EnvironmentConfig.ENV_DUP_CONVERT_PRELOAD_ALL, can be set to false to optimize this process if the cache is not large enough to hold the internal nodes for all databases. For more information, see the javadoc for this property.
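As a sketch of how this option might be applied (illustrative only, not taken from the JE documentation; the environment path is a placeholder), the preload can be disabled when first opening the upgraded environment:

```java
import java.io.File;

import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

// Sketch: open an upgraded environment with duplicate-conversion preload
// disabled.  The environment path is a placeholder.
public class OpenAfterUpgrade {
    public static void main(String[] args) {
        EnvironmentConfig config = new EnvironmentConfig();
        config.setConfigParam(
            EnvironmentConfig.ENV_DUP_CONVERT_PRELOAD_ALL, "false");
        // Databases configured for duplicates are converted before this
        // constructor returns.
        Environment env = new Environment(new File("/path/to/env"), config);
        env.close();
    }
}
```

Setting the parameter to "false" trades a slower conversion for reduced cache pressure when the internal nodes of all databases do not fit in cache.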

If an application has no databases configured for duplicates, then the last step simply opens the JE environment normally, and no data conversion is performed.

If the user fails to run the DbPreUpgrade_4_1 or DbRepPreUpgrade_4_1 utility program before opening an environment with JE 5 for the first time, an exception such as the following will normally be thrown by the Environment or ReplicatedEnvironment constructor:
  (JE 5.0.46) JE 4.1 duplicate DB entries were found in the recovery
  interval. Before upgrading to JE 5.0, the following utility must be run
  using JE 4.1 (4.1.20 or later): DbPreUpgrade_4_1.  See the change log.
  UNEXPECTED_STATE: Unexpected internal state, may have side effects.

If the user fails to run the DbPreUpgrade_4_1 or DbRepPreUpgrade_4_1 utility program, but no exception is thrown when the environment is opened with JE 5, this is probably because the application performed an Environment.sync before last closing the environment with JE 4.1 or earlier, and nothing else happened to be written (by the application or JE background threads) after the sync operation. In this case, running the upgrade utility is not necessary.

Changes in 5.0.73

  1. Fixed a bug that caused data corruption (for example, resulting in an EnvironmentFailureException with a LOG_FILE_NOT_FOUND message) under certain circumstances, when many Databases are created and not all Databases are kept open. In fact, two bugs were fixed that contributed to the problem. The first was a memory size calculation error that caused heavy, costly cache eviction, even when the cache would normally be large enough to avoid such eviction. This calculation error occurred during log cleaning, and only when closed databases had been previously evicted from the cache. The second bug caused Btree corruption when cache eviction was heavy and many closed databases were evicted. [#21686] (JE 5.0.61)

  2. Fixed a bug that prevented CacheMode.EVICT_LN from operating correctly in a long-running Environment where log cleaning is active. As log cleaning migrated records (specifically LNs), the memory used by Btree internal nodes (specifically BINs) increased. The symptom is that the number of BINs in the "no target" category (EnvironmentStats.getNNoTarget) decreases over time, while the number of BINs in other categories (e.g., getNINSparseTarget) increases. In some applications this caused unnecessary eviction and IO, and therefore decreased performance. [#21734] (JE 5.0.64)

  3. A JE/HA application may now explicitly transfer the master role, on demand, from the current master to one of a specified set of currently active replica nodes in the replication group. The transferMaster() methods in the ReplicatedEnvironment class are provided for this purpose. [#18081] (JE 5.0.64)
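    For illustration, a transfer request might look like the following sketch (the node names and timeout are invented placeholders; error handling is omitted):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.TimeUnit;

import com.sleepycat.je.rep.ReplicatedEnvironment;

// Sketch: ask the current master to hand off mastership to whichever of
// the named replicas first catches up, waiting up to 30 seconds.
public class TransferMasterExample {
    static String transfer(ReplicatedEnvironment masterEnv) {
        Set<String> candidates = new HashSet<>();
        candidates.add("node2"); // placeholder node names
        candidates.add("node3");
        // Returns the name of the node that became the new master.
        return masterEnv.transferMaster(candidates, 30, TimeUnit.SECONDS);
    }
}
```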

  4. It is now possible to restore an entire replication group by restoring just one node from a backup. The other nodes in the group perform a Network Restore from this one node, if that initial node is included in the list of "helper hosts" configured by the other nodes. [#20701] (JE 5.0.64)

  5. Fixed a bug that could lead to problems if the replication group master failed to receive transaction acknowledgements from a majority of existing replicas when making changes to the composition of the group (for example, adding new nodes, or moving an existing node to a new network address).

    In one case, the master would become unable to accept any future group changes, even after all existing replicas were successfully connected and providing acknowledgements again. In another case, the normal protections against duplicate masters could become compromised, making it possible (although unlikely) for transactions apparently "durably replicated" to a group majority to nevertheless disappear after a network partition was repaired. [#21095] (JE 5.0.64)

  6. Improved validity checking for replication group names and node names. Both types of names must consist of letters, digits, hyphens, underscores, or periods. [#21407] (JE 5.0.64)

  7. Fixed a bug where specifying a very short ENV_UNKNOWN_STATE_TIMEOUT could lead to an UnknownMasterException from the ReplicatedEnvironment constructor. [#21427] (JE 5.0.64)

  8. Fixed a DiskOrderedCursor bug where latch deadlocks would occur when the consumer thread (the application thread calling DiskOrderedCursor.getNext) also accessed the database being scanned using other APIs, e.g., Database or Cursor methods. To allow a DiskOrderedCursor to operate concurrently with other operations on the same database in the consumer thread, the internal algorithm for a disk ordered scan has been changed. See the new Consistency Guarantees and Performance Considerations sections in the DiskOrderedCursor javadoc. Also as part of this fix, the following methods in DiskOrderedCursorConfig have been deprecated: getMaxSeedMillisecs, setMaxSeedMillisecs, getMaxSeedNodes and setMaxSeedNodes. These config properties no longer have any effect.

    Also fixed a bug that caused a null key to be returned by DiskOrderedCursor.getNext. The bug occurred infrequently as the result of concurrent log cleaning activity. When it occurred, calling DatabaseEntry.getData for the key parameter would return null, even though DiskOrderedCursor.getNext returned OperationStatus.SUCCESS.

    Thanks to Vinoth Chandar for reporting this on OTN, and helping us to diagnose the problem and test the fix.

    [#21667] (JE 5.0.65)

  9. Fixed a DPL class evolution bug that could result in an assertion with a stack trace similar to the following, when attempting to open an EntityStore.
    java.lang.AssertionError: <ComplexType id="0" ...  </ComplexType>
        at com.sleepycat.persist.impl.Evolver.evolveFormatInternal(
        at com.sleepycat.persist.impl.Evolver.evolveFormat(
        at com.sleepycat.persist.impl.ComplexFormat.evolveFieldList(
        at com.sleepycat.persist.impl.ComplexFormat.evolveAllFields(
        at com.sleepycat.persist.impl.ComplexFormat.evolve( 
    [#21869] (JE 5.0.66)

  10. The log cleaner now outputs "No file selected for cleaning" log messages at level FINE rather than at level INFO. (JE 5.0.66)

  11. Added safeguards to prevent log corruption when an Error (e.g., OutOfMemoryError) is thrown by a RandomAccessFile method during a write operation. Normally RandomAccessFile methods throw IOExceptions, but on certain platforms they may throw Errors. Without the added safeguards, data corruption could occur when an Error is thrown by RandomAccessFile methods. Several changes were made along with this fix to support stress testing of IO errors. [#21929] (JE 5.0.68)

  12. Removed an overzealous assertion that could fire during periods of heavy eviction. Like any assertion failure, this would invalidate the Environment, but it otherwise did not cause any persistent damage. [#21990] (5.0.70)

  13. Added a new method that should be used for performing incremental backups: DbBackup.getLogFilesInSnapshot. Calling this method rather than File.list is necessary to prevent the possibility that the log cleaner will delete files prior to the creation of the backup manifest for the current snapshot. If File.list is called, as described previously in the DbBackup javadoc, the manifest may not contain the files necessary for a restore and this would invalidate the backup. For more information, see the updated description and examples at the top of the DbBackup class javadoc.

    The code examples in the DbBackup javadoc and the Getting Started Guide have also been updated to pass the lastFileInPrevBackup parameter to the DbBackup constructor rather than to the deprecated getLogFilesInBackupSet method.
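    The incremental backup flow described above might be sketched as follows (the copy step and the saved file number are application-specific placeholders):

```java
import com.sleepycat.je.Environment;
import com.sleepycat.je.util.DbBackup;

// Sketch: incremental backup using the new getLogFilesInSnapshot method.
// 'lastFileInPrevBackup' would have been saved from the previous backup.
public class IncrementalBackup {
    static long backup(Environment env, long lastFileInPrevBackup) {
        DbBackup backup = new DbBackup(env, lastFileInPrevBackup);
        backup.startBackup();
        try {
            // Use this list -- not File.list -- to build the manifest, so
            // that files cannot be cleaned away before the manifest for the
            // current snapshot is written.
            for (String file : backup.getLogFilesInSnapshot()) {
                // copy 'file' to the backup location (application-specific)
            }
            return backup.getLastFileInBackupSet(); // save for next backup
        } finally {
            backup.endBackup();
        }
    }
}
```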

    [#22014] (5.0.70)

  14. The JE log cleaner now logs a SEVERE level message when the average cleaner backlog increases over time. A trailing average of the cleaner backlog is used to prevent spurious messages. Some applications may wish to use SEVERE log messages, such as this one, to trigger alerts. An example of the message text is below.
    121215 13:48:57:480 SEVERE [...] Average cleaner backlog has grown from 0.0 to
    6.4. If the cleaner continues to be unable to make progress, the JE cache size
    and/or number of cleaner threads are probably too small. If this is not
    corrected, eventually all available disk space will be used.
    For more information on setting the cache size appropriately to avoid such problems, see the FAQ entry "Why should the JE cache be large enough to hold the Btree internal nodes?"
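    The effect of a trailing average can be illustrated with a small self-contained example (this sketches the smoothing idea only, not JE's actual implementation; the window size and backlog samples are invented):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A fixed-size trailing (moving) average: a one-off spike barely moves it,
// while a persistent backlog raises it steadily -- which is why it is
// suitable for suppressing spurious alerts.
public class TrailingAverage {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int size;
    private double sum;

    public TrailingAverage(int size) { this.size = size; }

    public double add(double sample) {
        window.addLast(sample);
        sum += sample;
        if (window.size() > size) {
            sum -= window.removeFirst();
        }
        return sum / window.size();
    }

    public static void main(String[] args) {
        TrailingAverage avg = new TrailingAverage(5);
        double a = 0;
        // A transient spike: the average stays low.
        for (double backlog : new double[] {0, 0, 8, 0, 0}) a = avg.add(backlog);
        System.out.println(a); // 1.6
        // A sustained backlog: the average climbs toward the backlog level.
        for (double backlog : new double[] {6, 6, 6, 6, 6}) a = avg.add(backlog);
        System.out.println(a); // 6.0
    }
}
```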

    [#21111] (5.0.71)

  15. Fixed a bug that could cause a NullPointerException, such as the one below, when a ReplicatedEnvironment is opened on an HA replica node. The conditions that caused the bug are: 1) a replica has been restarted after an abnormal shutdown (ReplicatedEnvironment.close was not called), 2) a transaction involving database records was in progress at the time of the abnormal shutdown, 3) the database is then removed (Environment.removeDatabase), and finally 4) yet another abnormal shutdown occurs. If this bug is encountered, it can be corrected by upgrading to the JE release containing this fix, and no data loss will occur.
    Exception in thread "main" (JE
    5.0.XX) ...  last LSN=.../... LOG_INTEGRITY: Log information is incorrect,
    problem is likely persistent. Environment is invalid and must be closed.
    Caused by: java.lang.NullPointerException
        ... 10 more
    [#22052] (5.0.71)

  16. The log cleaner will now delete files in the latter portion of the log, even when the application is not performing any write operations. Previously, in a ReplicatedEnvironment files were prohibited from being deleted in the portion of the log after the last application write. When a log cleaner backlog was present (for example, when the cache had been configured too small, relative to the data set size and write rate), this could cause the cleaner to operate continuously without being able to delete files or make forward progress.

    Note that this change only applies to replicated environments. In non-replicated environments, deletion of log files has always been allowed (and is still allowed) in the portion of the log after the last application write.

    [#21069] (5.0.71)

  17. Compiling and running the JE unit tests ("ant test") now require that JUnit version 4.10 or higher is installed. [#18115] (5.0.72)

  18. Added a new log cleaner statistic to help diagnose problems when record locks are held by the application and the locks prevent log file deletion after a log file has been cleaned: EnvironmentStats.getPendingLNQueueSize. [#22100] (5.0.72)

  19. Fixed a bug that caused two EnvironmentStats -- getNCachedBINs and getNCachedUpperINs -- to be incorrect when calling Environment.removeDatabase or truncateDatabase. When a database was removed or truncated, the number of BINs/INs belonging to that database was not subtracted from these two stats, even though the nodes were evicted from the cache correctly. So in an application where databases are removed or truncated, these two stats were larger than the actual number of BINs/INs in cache and would increase over time.

    Also added a new cache statistic to help diagnose potential memory leaks: EnvironmentStats.getDataAdminBytes.

    [#22100] (5.0.72)

  20. Fixed a bug that caused a NullPointerException during log cleaning when a database is truncated or removed within a small processing window. An example of the stack trace follows.
    Feb 11, 2013 3:32:13 PM run
    SEVERE:  caught exception, java.lang.NullPointerException Continuing

  21. Fixed a bug that caused a NullPointerException when opening an Environment that was previously written with JE 4.1 or earlier, and databases were truncated or removed since the last full checkpoint. An example of the stack trace follows. This should not occur if the DbPreUpgrade_4_1 or DbRepPreUpgrade_4_1 utility is run as described under Upgrade Procedure above.
    Exception in thread "main" java.lang.NullPointerException

  22. Two changes to DbSpace were made. First, inaccuracies in the -r (recalculation) option, which caused discrepancies with the actual utilization values, have been corrected. Second, the -s and -e options, and the corresponding setStartFile and setEndFile methods, have been added; these allow viewing and (optionally) recalculating a subset of the files in the log. Note that the recalculation option is used primarily for analysis and debugging. [#22208]

  23. DbVerify has been improved to add Btree level information for internal databases, and internal database names may now be specified with the -s option. Note that DbVerify is used primarily for analysis and debugging. [#22209]

  24. Fixed a bug in EnvironmentConfig.setConfigParam where a boolean "true" property specified with leading or trailing whitespace was treated as if it were "false", and no exception was thrown. The same bug impacted boolean properties specified with trailing whitespace in the je.properties file.

    WARNING: This fix could change the behavior of JE if a boolean EnvironmentConfig property is specified as "true" but with leading or trailing whitespace. Previously such a property would be set to false, and now it will be set to true.
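    The general pitfall can be demonstrated with plain java.lang.Boolean, which treats anything other than the exact word "true" (compared case-insensitively, without trimming) as false (a standalone illustration, not JE code):

```java
// Illustrates why untrimmed boolean property values are dangerous:
// parseBoolean matches "true" case-insensitively but does not trim,
// so surrounding whitespace silently yields false.
public class WhitespaceBoolean {
    public static void main(String[] args) {
        System.out.println(Boolean.parseBoolean("true"));          // true
        System.out.println(Boolean.parseBoolean(" true "));        // false
        // Trimming first gives the value the user almost certainly intended.
        System.out.println(Boolean.parseBoolean(" true ".trim())); // true
    }
}
```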


Changes in 5.0.58

    1. The replay mechanism used in JE HA has been restructured to use multiple threads for increased concurrency. These changes result in a 20% increase in throughput in our performance tests. [#21396]

    2. Made several improvements and fixes to log cleaning.
      • Made changes to reduce bursty cleaning behavior. A burst of log cleaning would occur when selecting a log file for cleaning that has a significantly different average record size than the log files cleaned previously. The bursts impacted application performance.
      • EnvironmentStats.getCorrectedAvgLNSize and getEstimatedAvgLNSize are deprecated and have been replaced by getLNSizeCorrectionFactor.
      • Added INFO level logging that is output during log cleaning and helps to analyze cleaner behavior.
      • Fixed a problem that prevented the following EnvironmentStat values from being cleared (set to zero) when StatsConfig.setClear(true) was used: getNBINDeltasObsolete, getNBINDeltasCleaned, getNBINDeltasDead, getNBINDeltasMigrated.
      • Fixed a stats problem that incorrectly counted migrated LNs as marked for migration, even when lazy migration was disabled (which is the default setting). Migrated LNs were included in EnvironmentStats.getNLNsMarked and they should have been included in EnvironmentStats.getNLNsMigrated.
      • Fixed a calculation error that prevented log cleaning probes (read-only cleaner runs that detect under-cleaning scenarios) from being executed. The error occurred only when the log file size was very large, e.g., 1 GB.

    3. Fixed a bug that set the transaction state incorrectly when InsufficientAcksException was thrown during Transaction.commit(). Due to this bug, the transaction state was set to Transaction.State.MUST_ABORT when this exception was thrown. The incorrect state had two impacts:
      • If Transaction.abort() is not called after InsufficientAcksException is thrown, the impact is only that Transaction.getState() will report the wrong value (MUST_ABORT). The transaction is committed on the master and will be committed on the replicas.
      • If Transaction.abort() is called after InsufficientAcksException is thrown, the impact depends on whether Java assertions are enabled. If assertions are enabled, abort() fires an assertion resulting in a stack trace such as the one below. The transaction is committed on the master and will be committed on the replicas.
            Caused by: java.lang.AssertionError
        If assertions are disabled and Transaction.abort() is called after InsufficientAcksException is thrown, the abort() will succeed (at least temporarily) on the master, but for a brief period (between the commit and the abort) may appear to other threads as committed. It will always be committed on the replicas. The abort() will be permanent on the master unless an abnormal shutdown and recovery occur and the transaction is replayed by recovery; in that case, the transaction will be committed on the master and the replicas.

      To summarize, if Transaction.abort() is called after InsufficientAcksException is thrown and assertions are disabled, the transaction may be aborted on one node (the master at the time of the commit and abort), but committed on the other nodes (the replicas at the time). Otherwise, the only impact is that the transaction state will be wrong (it will be MUST_ABORT); in this case the transaction will be committed on all nodes.

      The likelihood of the bug occurring in production is low because the InsufficientAcksException only occurs when a replica becomes unavailable in a small window. Normally, InsufficientReplicasException (which does not trigger this bug) is thrown when a replica is unavailable.

      This bug was introduced in JE 5.0.48 and only applies to replicated environments. With the fix, the transaction state is correctly set to COMMITTED and the transaction will be committed on all nodes.

      A workaround for this bug (an alternative to upgrading JE) is to be sure not to call abort() after commit(), at least when InsufficientAcksException is thrown. Calling abort() after commit() fails can make exception handling simpler, but is never necessary.


Changes in 5.0.55

    1. Fixed a bug that could cause the contents of a database to be deleted after a crash, when the database was renamed just prior to the crash. A database can be renamed explicitly with Environment.renameDatabase, or implicitly when a DPL Renamer mutation is used to rename an entity class or a secondary key field. This bug was introduced in JE 5.0.48. [#21537]

    2. Fixed an inefficiency in the Checkpointer that manifested itself as a large amount of time spent in java.util.HashMap$HashIterator.remove(), which calls java.util.HashMap.removeEntryForKey(). [#21492]

    3. An enhancement has been made to DbVerify to show a histogram of BIN counts by utilization percentage. This is a sample of the output:
      Verifying database p10
      Checking tree for p10
      BTree: Composition of btree, types and counts of nodes.
              binEntriesHistogram=[40-49%: 3,093; 50-59%: 10; 60-69%: 22; 70-79%: 26;
      80-89%: 21; 90-99%: 35]
              binsByLevel=[level 1: count=3,207]
              insByLevel=[level 2: count=48; level 3: count=1]

    4. Fixed a bug that caused an exception such as the following, when opening an Environment in read-only mode. This occurred under certain circumstances when a clean shutdown (final checkpoint) was not performed when the Environment was last used by a read-write process.
      (JE 5.0.34) Cannot log LNs in read-only env. UNEXPECTED_STATE:
      Unexpected internal state, may have side effects.
      Note that the exception occurs before the Environment constructor returns. [#21493]

    5. Fixed a bug in JoinCursor that caused records to be incorrectly passed over (not returned) when the cursors are configured for read-uncommitted isolation mode and a non-null data parameter is passed to JoinCursor.getNext. Thanks to Arthur Brack for reporting this, debugging it, and telling us how to fix it! [#21501]

    6. Made a minor performance improvement to avoid notifying the checkpointer that a write has occurred, when the checkpointer is already running. This wasted overhead was sometimes noticeable in the internal method. [#21106]

    7. Fixed a bug where a replication master could get an InsufficientAcksException even if it had been configured as Designated Primary, if it happened to lose the connection to the replica in the middle of waiting for the commit acknowledgement. [#21536]

Changes in 5.0.48

    1. Fixed a bug that caused an EnvironmentFailureException with LOG_FILE_NOT_FOUND, for example: Environment invalid because of previous exception:
      (JE 5.0.34) fetchTarget of 0xc3d4/0x25dab parent IN=2372138 IN
      lastFullVersion=0xc4ca/0xd2aad lastLoggedVersion=0xc4ca/0xd2aad parent.getDirty()=false state=0
      LOG_FILE_NOT_FOUND: Log file missing, log is likely invalid.
      Environment is invalid and must be closed.
      The bug occurred after deleting a range of keys in a deferred-write database (DatabaseConfig.setDeferredWrite(true) or StoreConfig.setDeferredWrite(true)), closing the Environment, and then opening the Environment. It is most likely to occur when deleting large numbers of consecutive keys, and when opening and closing the Environment often, with the log cleaner enabled.

      Because a log file is incorrectly deleted as a result of this bug, restoring from a backup is necessary, i.e., there is no way to repair the environment. Many thanks to Vishal Vishnoi and Vladimir Egorov for reproducing the problem and helping us to diagnose it.


    2. Fixed a bug that caused an EnvironmentFailureException with LOG_FILE_NOT_FOUND, for example: Environment invalid because of previous exception:
      (JE 5.0.34) fetchTarget of 0x1ced/0x3d3561b parent IN=116781 IN
      lastFullVersion=0xbc40/0x2ca3f3a lastLoggedVersion=0xbcc3/0x6a37712 parent.getDirty()=false state=0
      LOG_FILE_NOT_FOUND: Log file missing, log is likely invalid.
      Environment is invalid and must be closed.
      The bug occurred when a preload (Database.preload or Environment.preload) was performed concurrently with other access to the database, including log cleaning. For example, the bug could occur as a result of the following sequence:
      • The Environment is opened with cleaner threads enabled, which is the default setting.
      • The JE cleaner threads immediately start working and processing entries for Database X. This is very likely to occur if the CLEANER_MIN_UTILIZATION setting has recently been increased, but can also occur without changing settings.
      • After opening the Environment the application calls Database.preload or Environment.preload to preload Database X.

      Because a log file is incorrectly deleted as a result of this bug, restoring from a backup is necessary, i.e., there is no way to repair the environment. We are in Diego's debt for not only reporting this bug on OTN, but for reproducing it repeatedly for us over an extended period of debugging, and finally testing the fix.


    3. Fixed a bug that could cause Btree corruption for databases with the following two characteristics:
      • A key comparator must be configured. A comparator is configured by calling DatabaseConfig.setBtreeComparator or DatabaseConfig.setDuplicateComparator, or by implementing the Comparable interface in a DPL key class.
      • Key prefixing must be enabled. Key prefixing is enabled by calling DatabaseConfig.setKeyPrefixing(true) and is enabled by default for all databases configured for duplicates. A database is configured for duplicates explicitly using DatabaseConfig.setSortedDuplicates(true), or implicitly using a DPL MANY_TO_XXX relationship (Relationship.MANY_TO_ONE or Relationship.MANY_TO_MANY).
      The bug causes corrupted keys, which could manifest in a number of ways. For example, the comparator could throw an exception because the key is invalid, an operation could fail unexpectedly because the key is not found, internal operations such as log cleaning could fail, etc.

      Thanks to Lee Saenz and the other folks at UnboundId for reporting this problem and reproducing it repeatedly for us during debugging.


    4. The following change log entry applies to the JE 4.1 product, but impacts the upgrade process from JE 4.1 to JE 5.0.

      Fixed a bug in the DbPreUpgrade_4_1 and DbRepPreUpgrade_4_1 utilities that prevented them from working properly due to concurrent log cleaning activity while the utility is running. The symptom of the bug is that, after running the utility without errors, when then opening the Environment with JE 5, the following exception would occur: (JE 5.0.XX) Before upgrading to
        JE 5.0, the following utility must be run using JE 4.1: DbPreUpgrade_4_1
        using the JE old version. See the release notes.
      WARNING: Due to this bug, JE 4.1.20 or later should be used to run the DbPreUpgrade_4_1 and DbRepPreUpgrade_4_1 utilities. In addition, these utilities should be run for all environments, not just those with duplicates databases. See "Upgrade Procedure" above for information on running these utilities.

      Thanks to Vinoth for reporting the problem and helping us with the diagnosis and testing. [#21304]

    5. Fixed a bug that could cause unnecessary log cleaning when upgrading to JE 5. It could also cause an exception after upgrading, when a Btree comparator is configured in a duplicates database. A related bug that could result in a similar exception was also fixed. This problem was originally reported on OTN. [#21405]

    6. Fixed a bug that could cause Environment.removeDatabase or truncateDatabase to loop in a "livelock" pattern along with active log cleaner threads, where the removeDatabase or truncateDatabase thread is not making forward progress. Also fixed a bug that could cause this state to eventually produce an EnvironmentFailureException during a subsequent recovery, preventing the environment from being opened. [#20816]

    7. Fixed a bug that could cause a call to Environment.checkpoint or Environment.close to hang with the following stack trace. This can only happen in a replicated environment, and the most realistic situation that could provoke it is a call to Environment.close() while there are ongoing, concurrent write operations.
      at java.util.concurrent.CountDownLatch.await(

    8. It is now possible to specify the receive buffer size for the TCP connections used by JE HA:
      • The parameter controls the buffer size of the TCP connection used to communicate changes between a replica and its master.
      • The setReceiveBufferSize(int) and getReceiveBufferSize() methods of the class are used when copying log files between replication nodes for Network Restore.

      The new default value associated with TCP connections is 1MB and provides good performance on most platforms. The 1MB default represents a change in behavior from previous versions of JE, where the buffer size defaulted to that provided by the underlying OS. To continue using the old behavior, specify a value of zero for the buffer size. Please consult the javadoc for further details. [#21002]

    9. Fixed a DPL bug that prevented upgrading to JE 5.0 from an earlier release. The bug applies only when a stored entity, written with an older release, is present and meets the following qualifications:
      • The entity has a non-key field declared as type Object but containing a String, and
      • this non-key field contains the same object (same String instance) as the primary key field.
      When such an entity was read with JE 5.0, an AssertionError was thrown, for example:
      at com.sleepycat.persist.impl.RecordInput.readObject(
      at com.sleepycat.persist.impl.ReflectionAccessor$
      at com.sleepycat.persist.impl.ReflectionAccessor.readNonKeyFields(
      at com.sleepycat.persist.impl.ComplexFormat$EvolveReader.readObject(
      at com.sleepycat.persist.impl.PersistEntityBinding.readEntity(
      at com.sleepycat.persist.impl.PersistEntityBinding.entryToObjectInternal(
      at com.sleepycat.persist.impl.PersistEntityBinding.entryToObject(
      at com.sleepycat.persist.EntityValueAdapter.entryToValue(

    10. Added further fixes to prevent looping and long operation times for Environment.removeDatabase and truncateDatabase (see [#20816] above). The additional fixes address long operation times due to log cleaning with large file sizes in combination with eviction. Also fixed a bug that slowed down log cleaning (contributing to the long operation times as well) when large numbers of databases are present and not all databases are kept open by the application. [#21015]

    11. Fix a bug that caused a NullPointerException (or a similar exception) during periods of heavy concurrent eviction.

    12. In general, BDB JE is not sensitive to the JVM default encoding: all text -- both in stored data and in network protocols -- is encoded as UTF-8. However, several exceptions to this rule have been discovered and are addressed in this patch release.
      • For HA apps, host and node names in the replication network protocol are currently encoded using the JVM default encoding. A new configuration parameter (ReplicationConfig.PROTOCOL_OLD_STRING_ENCODING) can be set to false to use UTF-8 encoding instead. In the next JE release this parameter will be set to false by default, and we strongly recommend that all impacted applications explicitly set it to false before then. See the parameter javadoc for more information, including a definition of which applications are impacted and restrictions on performing a hot upgrade.
      • The DbPrintLog utility has been changed to assume UTF-8 encoding of keys when the "-k text" option is used. Previously it assumed the default encoding. Since key-values are normally portable, UTF-8 is a better assumption.
      • When DbDump is used in aggressive salvage mode (-R), a bug was fixed where database names were assumed to be stored internally in default encoding, when actually they are stored in UTF-8 encoding. If the default encoding and UTF-8 do not match, for the characters in the database name, then DbDump would fail.
      • Several issues were fixed in unit tests that caused failures when run with a JVM default encoding that is not a superset of ASCII.

    13. Fixed a bug where JE internal threads throw UnsupportedOperationException on a JVM where CPU time measurement is not supported by the implementation. For example, this is not supported in the IBM JVM for z/OS. [#20967]

    14. Fixed a bug where the following exception could be thrown during heavy concurrent write and read operations. This did not cause data loss and did not invalidate the Environment instance.
      (JE 5.0.34)
      UNEXPECTED_EXCEPTION: Unexpected internal Exception, may have side effects.
      Thanks to OTN user user591209 for reporting this and testing the fix. [#21121]

    15. The configuration parameters ReplicationConfig.REPLAY_MAX_OPEN_DB_HANDLES and ReplicationConfig.REPLAY_DB_HANDLE_TIMEOUT have been deprecated. Please use the mutable configuration parameters ReplicationMutableConfig.REPLAY_MAX_OPEN_DB_HANDLES and ReplicationMutableConfig.REPLAY_DB_HANDLE_TIMEOUT instead. This change also fixes a bug which resulted in the values associated with these parameters being used incorrectly. [#21144]
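      For illustration, moving to the mutable parameters might look like the following sketch (repEnv is an already-open ReplicatedEnvironment; the "100" and "30 s" values are example settings, not recommendations):

      ```java
      import com.sleepycat.je.rep.ReplicatedEnvironment;
      import com.sleepycat.je.rep.ReplicationMutableConfig;

      // Sketch: adjust the replay parameters on a live node via the
      // mutable replication config.
      ReplicationMutableConfig mutableConfig = repEnv.getRepMutableConfig();
      mutableConfig.setConfigParam(
          ReplicationMutableConfig.REPLAY_MAX_OPEN_DB_HANDLES, "100");
      mutableConfig.setConfigParam(
          ReplicationMutableConfig.REPLAY_DB_HANDLE_TIMEOUT, "30 s");
      repEnv.setRepMutableConfig(mutableConfig);
      ```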

    16. Fix a DPL bytecode generation issue related to Java 1.7 by upgrading the DPL bytecode generator to use ASM 4.0. Previously, if the DPL were compiled for the Java 1.7 bytecode target, an exception such as the following would occur when running the application:
      Exception in thread "main" java.lang.VerifyError:
      Expecting a stackmap frame at branch target 115 in method
      Lcom/sleepycat/persist/impl/Format;Ljava/util/Set;)V at offset 33
      at java.lang.Class.forName0(Native Method)
      at java.lang.Class.forName(
      at com.sleepycat.persist.model.EntityModel.classForName(
      at com.sleepycat.persist.model.AnnotationModel.getClassMetadata(
      at com.sleepycat.persist.impl.PersistCatalog.addProxiedClass(
      at com.sleepycat.persist.impl.PersistCatalog.init(
      at com.sleepycat.persist.impl.PersistCatalog.(
      at com.sleepycat.persist.impl.Store.(
      at com.sleepycat.persist.EntityStore.(
      This issue did not normally arise when using a downloaded JE jar file, because the JE library is compiled with a Java 1.5 bytecode target. It would arise, however, if JE were explicitly compiled with a Java 1.7 bytecode target, or with Java 1.7 and no specified target. A workaround for this issue was to specify -XX:-UseSplitVerifier when running Java; this is no longer necessary. [#20586]

    17. Fixed a bug that sometimes caused a ConcurrentModificationException when calling Environment.setMutableConfig, when exception listeners were previously configured. [#21177]

    18. Fixed a bug that caused an erroneous SecondaryIntegrityException to be thrown, or an OperationStatus.KEYEXIST to be returned, when using a JoinCursor with READ_UNCOMMITTED isolation. The erroneous exception or return value occurred when performing reads using the JoinCursor concurrently with updates or deletions of the same records. [#21258]

    19. Change Database.close and Transaction.abort so they fail silently when called after already being closed/aborted/committed, and the Environment is invalid due to an earlier EnvironmentFailureException. Previously, an EnvironmentFailureException was thrown under these conditions. The new behavior is consistent with Cursor.close and avoids unnecessary exception handling. [#21264]

    20. Fix a bug that sometimes caused a ConcurrentModificationException when calling Environment.close, while a Transaction is beginning or ending. For example:
          at java.util.HashMap$HashIterator.nextEntry(
          at java.util.HashMap$
      Although never reported, a similar problem could occur if Databases are open. [#21279]

    21. Fixed a bug that caused DiskOrderedCursor.getNext to hang when it is called after OperationStatus.NOTFOUND was returned previously. [#21282]

    22. Added additional replication statistics to ReplicatedEnvironmentStats. These stats can be retrieved by the new methods:
      1. ReplicatedEnvironmentStats.getNMaxReplicaLag()
      2. ReplicatedEnvironmentStats.getNMaxReplicaLagName()
      3. ReplicatedEnvironmentStats.getNTxnsAcked()
      4. ReplicatedEnvironmentStats.getNTxnsNotAcked()
      5. ReplicatedEnvironmentStats.getTotalTxnMs()
      6. ReplicatedEnvironmentStats.getAckWaitMs()
      The javadoc for these methods contains the descriptions associated with these new statistics.

    23. Fixed several issues with XAEnvironment.end.
      • Previously an XAException was thrown if the transaction was not suspended. This is no longer the case.
      • XAException is thrown when TMSUSPEND is passed and the transaction is already suspended.
      • XAException is thrown if the Xid is unknown.
      • XAException is thrown if none of TMSUCCESS, TMFAIL, or TMSUSPEND is passed.

    24. Added the Transaction.getState() method and Transaction.State enumeration to formalize the different states of a transaction, and in particular to indicate whether a transaction commit is in an undetermined state (Transaction.State.POSSIBLY_COMMITTED) when a failure occurs during the commit process. Also added an internal pre-flight check, just before making a commit durable, to narrow the time window where the Transaction.State.POSSIBLY_COMMITTED state can occur. See the new javadoc for more details. [#21264]
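      As a hedged sketch of how an application might use the new state after a failed commit (txn is an open Transaction; the recovery actions shown in comments are application-specific):

      ```java
      import com.sleepycat.je.Transaction;

      // Sketch: distinguish an undetermined commit from a definite abort.
      try {
          txn.commit();
      } catch (RuntimeException e) {
          if (txn.getState() == Transaction.State.POSSIBLY_COMMITTED) {
              // The commit outcome is undetermined: the application should
              // verify whether the changes are durable (for example, by
              // re-reading the data) before retrying the transaction.
          } else {
              // The transaction definitely did not commit; it can be retried.
          }
          throw e;
      }
      ```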

    25. Exposed an environment configuration parameter that was previously undocumented: EnvironmentConfig.EVICTOR_CRITICAL_PERCENTAGE. This parameter can be set to improve operation latency under certain conditions. See the javadoc for more information.

      Also improved eviction to avoid acquiring a per-database exclusive latch for each Btree node that is evicted. This appeared in thread dumps as follows:

      "JEEvictor" daemon prio=3 tid=0x000000000a28e000 nid=0x197 waiting on condition [0xfffffc7fcd8b8000]
         java.lang.Thread.State: WAITING (parking)
          at sun.misc.Unsafe.park(Native Method)
          - parking to wait for  <0xfffffc701242b048> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
          at java.util.concurrent.locks.LockSupport.park(
          at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(
          at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(
          at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(
          at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(
          at java.util.concurrent.ThreadPoolExecutor$
      Thanks to Jerome Arnou for diagnosing this issue. [#21106]

    26. Fixed a latch deadlock that occurred under rare conditions when assertions are enabled.
      Thanks to Arthur Brack for reporting this issue. [#21395]

    27. Log all RuntimeExceptions (not only DatabaseExceptions) when the Environment.cleanLog method is called. Previously, RuntimeExceptions were logged when the built-in cleaner thread was used, but not when cleanLog was called directly. Also ensure that when a RuntimeException is thrown by a user's key or duplicate comparator, the exception does not prevent latches from being properly released. [#21328]

    Changes in 5.0.34

    New Features:

    1. A new class, DiskOrderedCursor has been added which lets an application iterate over records in a Database in unsorted order, in order to improve retrieval speed. This can be useful when the application needs to scan all records in a database, and will be applying filtering logic which does not need key ordered retrieval. The cursor optimizes the iteration by walking in Log Sequence Number (LSN) order rather than key order. LSN order approximates disk sector order, and retrieving in disk order reduces I/O cost.

      A DiskOrderedCursor can be obtained via Database.openCursor(DiskOrderedCursorConfig). Note that creating an instance of the DiskOrderedCursor disables the file deletion done by log cleaning until the close() method has been called. See the javadoc for DiskOrderedCursor for more detailed information. [#15260]
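      A minimal usage sketch (db is an open Database; error handling elided):

      ```java
      import com.sleepycat.je.DatabaseEntry;
      import com.sleepycat.je.DiskOrderedCursor;
      import com.sleepycat.je.DiskOrderedCursorConfig;
      import com.sleepycat.je.OperationStatus;

      DiskOrderedCursor cursor = db.openCursor(new DiskOrderedCursorConfig());
      try {
          DatabaseEntry key = new DatabaseEntry();
          DatabaseEntry data = new DatabaseEntry();
          while (cursor.getNext(key, data, null) == OperationStatus.SUCCESS) {
              // Records arrive in LSN (approximate disk) order, not key order.
          }
      } finally {
          cursor.close(); // re-enables log cleaner file deletion
      }
      ```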

    2. A new Environment.preload(Database[], PreloadConfig) method has been added which permits preloading multiple databases via a single method call rather than multiple calls to Database.preload(). Preload is implemented to optimize I/O cost by fetching the records of a Database in disk order, so that disk access is sequential rather than random. Using the multi-database Environment.preload() lets the preload operation batch the records for all of the target Databases so that multiple scans over the log are not necessary.

      A progress mechanism has also been added which lets a caller of the preload() method receive feedback on whether progress is being made; see the javadoc for details.

      Two new configurations are available to bound the amount of memory used by preload processing, at the expense of preload performance: one directs preload to partition its work into batches, and the other limits the amount of memory used by preload outside of the JE cache. [#15260] [#18153] [#19306]

    3. The JE environment can now be spread across multiple subdirectories. Environment subdirectories may be used to spread an environment's *.jdb files over multiple directories, and therefore over multiple disks or file systems. Environment subdirectories reside in the environment home directory and are named data001/ through dataNNN/, consecutively, where NNN is the value of je.log.nDataDirectories. A typical configuration would be to have each of the dataNNN/ names be symbolic links to actual directories which each reside on separate file systems or disks.

      Environment subdirectories are enabled through the je.log.nDataDirectories environment parameter. If 0 (the default), all log files (*.jdb) will reside in the environment home directory passed to the Environment constructor. A non zero value indicates the number of environment subdirectories to use for holding the environment's log files.

      If data subdirectories are used (i.e. je.log.nDataDirectories > 0), this parameter must be set when the environment is initially created. Like the environment home directory, each and every one of the dataNNN/ subdirectories must also be present and writable. This parameter must be set to the same value for all subsequent openings of the environment or an exception will be thrown.

      If the set of existing dataNNN/ subdirectories is not equivalent to the set { 1 ... je.log.nDataDirectories } when the environment is opened, an EnvironmentFailureException will be thrown, and the Environment will fail to be opened.

      DbBackup.getLogFilesInBackupSet() now returns the subdirectory name and file separator prepended to the file name if je.log.nDataDirectories > 0. [#19125]
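      For example, an environment split over four subdirectories might be configured as in the following sketch (the environment home path is a placeholder; data001/ through data004/ must already exist under it):

      ```java
      import java.io.File;
      import com.sleepycat.je.Environment;
      import com.sleepycat.je.EnvironmentConfig;

      EnvironmentConfig envConfig = new EnvironmentConfig();
      envConfig.setAllowCreate(true);
      // Must be set when the environment is first created, and to the same
      // value on every subsequent open.
      envConfig.setConfigParam("je.log.nDataDirectories", "4");
      Environment env = new Environment(new File("/path/to/envHome"), envConfig);
      ```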

    4. A new class lets the HA application add application-specific information to the notion of node state in a replication group. It is meant to support more application-specific semantics when assessing the availability of a given group member. [#18046]

    5. New options have been added for changing the host of a JE replication node, and for moving a JE replication group. See the Utilities section.

    6. Applications may now specify a custom java.util.logging.Handler per Environment. Logging messages generated by JE will go to this handler. See EnvironmentConfig.setLoggingHandler() and the memo Using JE trace logging. [#19110]

    7. Replicated nodes can now be opened in UNKNOWN state, to support read only operations in a replicated system when a master is not available. Prior to JE 5, replicated nodes could only be opened in MASTER or REPLICA state, and if a master could not be elected, the node could not be opened. This is enabled through the new configuration parameter: ReplicationConfig.ENV_UNKNOWN_STATE_TIMEOUT. Please review its javadoc for further details. [#19338]
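      A sketch of opening a node that may come up in UNKNOWN state (the group, node, and host names are placeholders, and the "5 s" timeout is an example value):

      ```java
      import java.io.File;
      import com.sleepycat.je.EnvironmentConfig;
      import com.sleepycat.je.rep.ReplicatedEnvironment;
      import com.sleepycat.je.rep.ReplicationConfig;

      ReplicationConfig repConfig =
          new ReplicationConfig("myGroup", "node1", "host1:5001");
      // Allow the handle to open in UNKNOWN state for read-only use if no
      // master can be elected within 5 seconds.
      repConfig.setConfigParam(ReplicationConfig.ENV_UNKNOWN_STATE_TIMEOUT, "5 s");
      EnvironmentConfig envConfig = new EnvironmentConfig();
      envConfig.setAllowCreate(true);
      ReplicatedEnvironment repEnv = new ReplicatedEnvironment(
          new File("/path/to/envHome"), repConfig, envConfig);
      ```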

    8. The master node in a replication group now rebroadcasts election results on a periodic basis, to help restore normal functioning of a replication group after a network partition has been resolved. The default period is a minute and can be modified by the new ReplicationConfig parameter ReplicationConfig.ELECTIONS_REBROADCAST_PERIOD. Please review its javadoc for further details. [#20220]

    9. A new listener mechanism is available to give the application feedback about potentially long running activities such as environment startup (recovery), replication stream syncup, and database preload. See ProgressListener and its related enums. [#20043]

    10. New methods were added to allow quickly skipping over a specified number of key/value pairs using a cursor. For details, see the javadoc for Cursor.skipNext and Cursor.skipPrev. [#19165]
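      As a sketch (cursor is an open Cursor positioned on a record; the signature shown follows the javadoc referenced above):

      ```java
      import com.sleepycat.je.DatabaseEntry;
      import com.sleepycat.je.LockMode;

      DatabaseEntry key = new DatabaseEntry();
      DatabaseEntry data = new DatabaseEntry();
      // Skip forward past up to 1000 records and read the record landed on.
      long skipped = cursor.skipNext(1000, key, data, LockMode.DEFAULT);
      // 'skipped' may be less than 1000 if the end of the database is reached.
      ```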

    11. A per-Environment ClassLoader may now be configured and will be used by JE for loading all user-supplied classes, including btree comparators, duplicate comparators, class instances serialized by SerialBinding, and DPL persistent classes. This is useful when separate ClassLoaders are used for the JE jar file and the application's classes, for example, when running under an application server framework. The ClassLoader is configured using EnvironmentConfig.setClassLoader. Related changes are:
      • The new com.sleepycat.util.ClassResolver class defines and implements the class loading policy.
      • If a btree or duplicate comparator needs to be initialized before it is used, or needs access to the environment's ClassLoader property, it may implement a new interface provided for this purpose.
      • The com.sleepycat.bind.serial.Catalog interface has a new method, getClassLoader, that is used to supply the ClassLoader to the SerialBinding. This method is implemented by the StoredClassCatalog class, and returns the environment's ClassLoader property. As a result, SerialBinding.getClassLoader now returns the environment's ClassLoader property.
      • The com.sleepycat.persist.model.EntityModel.classForName static method has been deprecated in favor of the new EntityModel.resolveClass method, which honors the environment's ClassLoader property.
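      A sketch of configuring the per-Environment ClassLoader (MyApp is a placeholder for an application class loaded by the desired ClassLoader):

      ```java
      import java.io.File;
      import com.sleepycat.je.Environment;
      import com.sleepycat.je.EnvironmentConfig;

      EnvironmentConfig envConfig = new EnvironmentConfig();
      envConfig.setAllowCreate(true);
      // JE will use this loader for comparators, classes serialized by
      // SerialBinding, and DPL persistent classes.
      envConfig.setClassLoader(MyApp.class.getClassLoader());
      Environment env = new Environment(new File("/path/to/envHome"), envConfig);
      ```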

    12. The java.io.Closeable interface is now implemented by all JE classes and interfaces with a public void close() method. This allows using these objects with the Java 1.7 try-with-resources statement, for applications compiled and run with Java 1.7 or later. See AutoCloseable, which is a superinterface of Closeable in Java 1.7.

      The following JE classes and interfaces now implement Closeable, and on Java 1.7, AutoCloseable.


    13. The Environment.flushLog method has been added. It can be used to make durable, by writing to the log, all preceding non-transactional write operations, as well as any preceding transactions that were committed with no-sync durability. To flush buffered data for durability reasons, with the addition of this method it is no longer necessary to perform a checkpoint, call Environment.sync, or commit a transaction (with sync or write-no-sync durability). [#19111]
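      A minimal sketch (env is an open Environment; the boolean fsync parameter shown is an assumption about the method's signature):

      ```java
      // Flush all preceding no-sync commits and non-transactional writes
      // to the log; pass true to also fsync the log file.
      env.flushLog(true);
      ```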
    14. Fix a bug where the wrong stack trace was sometimes output for an owner or waiter thread, when using EnvironmentConfig.TXN_DEADLOCK_STACK_TRACE for debugging.

    API Changes:

    1. Made the EnvironmentConfig and ReplicationConfig classes Serializable. [#19241]

    2. The je.env.fairLatches environment parameter has been deprecated and no longer has any effect.

    3. The behavior, although not the syntax or intent, of EnvironmentConfig.CHECKPOINTER_BYTES_INTERVAL has changed. Previously, this interval defined the byte distance between the end of one checkpoint and the start of the next. Now it defines the byte distance between the start of one checkpoint and start of the next. In other words, now the interval includes the checkpoint itself, which in some cases can be large. This now more accurately reflects the intention of the parameter, which is to bound the recovery interval, which is proportional to the time to recover (open the Environment) after a crash. It does mean, however, that checkpoints may occur more often for the same configured interval, and some applications may wish to adjust their configured setting accordingly. [#19704]

    4. Cursor.getSearchBothRange for a non-duplicates database has been corrected to behave exactly as Cursor.getSearchBoth. Previously getSearchBothRange returned a data item that was greater or equal to the data search parameter, which was incorrect. Now it only returns a data item that is equal to the data search parameter. [#19165]

    5. See the description of the new ProgressListener class in the New Features section above.

    6. The default value for EnvironmentConfig.CLEANER_LAZY_MIGRATION has been changed from true to false. Over several releases the benefits of setting this to true have decreased and are now less than the benefits of setting it to false. See the javadoc for this parameter for details. [#20588]

    Performance and other General Changes

    1. Performance of record update and deletion operations has been significantly improved when the record is not in the JE cache and the application does not need to read the record prior to performing the update or deletion. Previously, the old version of the record was always read into cache, if not already present, by the update or deletion operation. If the record was not already in cache, this often resulted in an expensive random I/O. Now, because of internal changes to record locking, records are not read by update or deletion operations, and this can significantly reduce random I/O for delete or update-heavy workloads. An exception to this rule is when a record is updated or deleted in a primary database that has associated secondary indices. In this case, the primary record must be read in order to update the secondary indices.

      As a result of this change, the log cleaner must now sometimes estimate the size of records that are made obsolete by updates and deletions, and must sometimes "probe" a log file to determine record sizes. Several statistics have been added to show this activity:

      • getCorrectedAvgLNSize
      • getEstimatedAvgLNSize
      • getNCleanerProbeRuns
      The DbSpace utility now also prints the first two of these new statistics.


    2. Made an internal format change for databases with duplicate keys that improves operation performance, reduces memory and disk overhead, and increases concurrency. The format change requires that databases configured for duplicates (including DPL secondary keys with a MANY_TO_XXX relationship) created with JE 4.1 or earlier must be converted to the new duplicates format. The conversion is done automatically when first opening the environment with JE 5.0. The conversion is described in the Upgrade Procedure section at the top of this page.

      Internal Format Change

      This information is included for users who understand Btree internals and wish to know what changed internally.

      • Previously, for each unique key in a database with duplicates, a separate Btree for that key was used to store the duplicates for that key. In addition, a leaf node (record) per key was stored to hold the duplicate count. The separate Btree was found to add unnecessary memory and disk overhead, and the maintenance of the duplicate count was found to decrease concurrency. The new format does not use a separate Btree per key, and instead uses an internal two-part key. A duplicate count per key is no longer stored or maintained.
      • The first part of the new internal two-part key is the user key and the second part is the user data. JE translates between the internal key and the user key and data, and adapts the get, put, and other API operations to work with the new internal key format. To reduce memory overhead, key prefixing is always enabled for databases with duplicates. Note that in a database with duplicates, the internal data size is always zero because the user data is stored as the second part of the internal key.

      Performance Improvements

      • The size of the reduction in memory and disk overhead depends on how many duplicates for each key are present. The improvement is largest when the number of duplicates per key is small. For example, in a data set of 10 million records, 10 duplicates per key, and 8 byte key and 8 byte data lengths, the total memory and disk usage are reduced by approximately 40%. Operation performance is improved as a side effect of this reduced overhead. For larger numbers of duplicates per key the improvement will be smaller, and vice-versa.
      • The increase in concurrency applies when records are inserted and deleted. Previously, when one transaction performed an insertion or deletion of a duplicate for a given key, until this transaction ended no other transaction could read or write to any other duplicate for that key. Now, other transactions are prevented only from accessing the specific duplicate that was inserted or deleted, according to the isolation mode in use. The degree of increased concurrency depends on how many transactions/threads access duplicates for the same key. In general the impact will be larger when there are more duplicates per key.
      • Operation performance is also improved during reads, when the cache size is not large enough to hold all LNs or the cache is cold. Previously, the leaf node for each duplicate was read from disk when not already in cache. Now, leaf nodes are not read from disk during a normal read operation and therefore random I/O is reduced. For the data set described above (10 million records, 10 duplicates per key, 8 byte key and data), loaded in random key order, when starting with a cold cache the duration of a full scan in key order was improved by approximately 800%. This is a worst case test scenario for the old duplicate format, and the improvement will be smaller in tests with different parameters.

      Other Behavioral Changes

      • Without key prefixing, databases with duplicates would store keys inefficiently. Therefore, key prefixing is now mandatory and automatically enabled for all databases with duplicates. When duplicates are configured, the application does not have to call DatabaseConfig.setKeyPrefixing(true). If DatabaseConfig.setKeyPrefixing(false) is called for a database with duplicates configured, an IllegalStateException is thrown.
      • With this change, determining the number of duplicates per key (Cursor.count) is more costly than it was previously. Previously, the count was stored and could be returned by reading a single record for the key. Now, to determine the count precisely JE must traverse internal Btree nodes to count the duplicates for the key. If you are using Cursor.count, consider using the new method Cursor.countEstimate instead. Cursor.countEstimate returns a rough estimate of the count using a fixed cost algorithm. Since Cursor.count is primarily intended for use in query optimizations, the Cursor.countEstimate method may be a good substitute. For example, JoinCursor now uses Cursor.countEstimate rather than Cursor.count to determine the index processing order. Likewise, EntityCursor.countEstimate is a potential substitute for EntityCursor.count.
      • Btree partial comparators may now be used with databases configured for duplicates. See DatabaseConfig.setBtreeComparator for information on partial comparators. Previously, Btree partial comparators could be used only with non-duplicate databases.
      • The following methods are deprecated and no longer have any effect, or in the case of getter methods, now always return zero or null.
        • DatabaseConfig.setNodeMaxDupTreeEntries
        • DatabaseConfig.getNodeMaxDupTreeEntries
        • BtreeStats.getDuplicateBottomInternalNodeCount
        • BtreeStats.getDupCountLeafNodeCount
        • BtreeStats.getDuplicateInternalNodeCount
        • BtreeStats.getDuplicateTreeMaxDepth
        • BtreeStats.getDINsByLevel
        • BtreeStats.getDBINsByLevel
        • PreloadStats.getNDINsLoaded
        • PreloadStats.getNDBINsLoaded
        • PreloadStats.getNDupCountLNsLoaded
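      For illustration (cursor is an open Cursor positioned on a record with duplicates; the return types shown are assumptions):

      ```java
      // Precise count now requires traversing internal Btree nodes.
      int exact = cursor.count();
      // Rough estimate with a fixed, low cost; preferred when an
      // approximation suffices, e.g. for query optimization.
      long estimate = cursor.countEstimate();
      ```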

    3. An improvement has been made that requires significantly less writing per checkpoint, less writing during eviction, and less metadata overhead in the JE on-disk log files.

      Previously, delta log entries for bottom internal nodes, called BINDeltas, were written by checkpoints rather than writing full BINs, and full BINs were written less frequently (see EnvironmentConfig.TREE_MAX_DELTA and TREE_BIN_DELTA). This is still the case. However, the approach taken previously required that the same delta information be repeatedly written at each checkpoint, even if it had not changed since the last checkpoint. Now, delta information is only written when it changes.

      By significantly reducing writing, the new approach provides overall performance improvements. However, there is also an additional cost to the new approach: When a BIN is not in cache, fetching the BIN now often requires two random reads instead of just one; one read to fetch the BINDelta and another to fetch the last full BIN. For applications where all active BINs fit in cache, this adds to the I/O cost of initially populating the cache. For applications where active BINs do not fit in cache, this adds to the per-operation cost of fetching a record (an LN) when its parent BIN is not in cache. In our tests, the lower write rate more than compensates for the additional I/O of fetching the BINDelta, but the benefit is greatest when all BINs fit in cache.


    4. Improvements were made to recovery (Environment open) performance by changing the behavior of checkpoints in certain cases. Recovery should always be very quick after the following types of checkpoints:
      • When CheckpointConfig.setMinimizeRecoveryTime(true) is used along with an explicit checkpoint performed by calling the Environment.checkpoint method.
      • When Environment.sync is called.
      • When Environment.close is called, since it performs a final checkpoint.
      In addition, a problem was fixed where periodic checkpoints (performed by the checkpointer thread or by calling Environment.checkpoint) would cause long recovery times under certain circumstances.

      As a part of this work, the actions invoked by ReplicatedEnvironment.shutdownGroup() were streamlined to use the setMinimizeRecoveryTime() option and to reduce spurious timeouts during the shutdown processing. [#19559]

    5. Node fanouts (see DatabaseConfig.setNodeMaxEntries) are now mutable and persistent database attributes. They were previously permitted to mutate, but the changed attribute value wasn't saved persistently, so the new value might sometimes revert to the previously existing setting. This has been fixed. In addition, the javadoc for DatabaseConfig has been expanded to clarify which attributes are mutable vs. fixed, and persistent vs. temporary. [#18262]

    6. The internal Database ID field has been enlarged from a 32-bit (int) quantity to a 64-bit (long) quantity. A Database ID value is assigned from a single sequence in each Environment for each Database created and truncated. This change therefore increases the total number of Databases that can be created and the number of truncate operations that can be performed over the lifetime of an Environment. Note that one bit of the ID is used to distinguish replicated from local databases, so the total number of Databases and truncate operations per Environment is now effectively 2^63. [#18540]

    7. Fixed a bug where replicated parameters were not recognized when opening a read-only standalone Environment on a replicated Environment home. [#19080]

    8. Added Implementation-Title, Implementation-Version, Implementation-Vendor, Implementation-URL entries to the je.jar MANIFEST.MF file. [#19320]

    9. Added a check to make sure that a log write buffer is never larger than the size of a log file. [#19324]

    10. Fix a problem that caused high CPU utilization during log cleaning, as in the following stack trace.
      "Cleaner-5" daemon prio=10 tid=0x00002aaae8008800 nid=0xaeb runnable [0x0000000042ac9000]
      java.lang.Thread.State: RUNNABLE
      locked <0x00002aaab0652408> (a

    11. Added support for truncating and removing a single database in the same transaction. Previously, if this was attempted, the Environment.removeDatabase method would throw an exception, and the application would subsequently have to abort the transaction. Now, this is allowed. [#19636]

    12. Added EnvironmentConfig.TREE_COMPACT_MAX_KEY_LENGTH for user configuration of the in-memory compaction of keys in the Btree. Previously, in-memory keys were compacted but the key size threshold was fixed at 16 bytes. Now the key size threshold is configurable and has a 16 byte default value. For more information, see the EnvironmentConfig.TREE_COMPACT_MAX_KEY_LENGTH javadoc. [#20120]
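      For example (the 32-byte threshold is an arbitrary illustration):

      ```java
      import com.sleepycat.je.EnvironmentConfig;

      EnvironmentConfig envConfig = new EnvironmentConfig();
      // Compact in-memory Btree keys up to 32 bytes instead of the
      // 16-byte default.
      envConfig.setConfigParam(EnvironmentConfig.TREE_COMPACT_MAX_KEY_LENGTH, "32");
      ```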

    13. Fixed a problem where a Database handle kept a reference to the environment after it was closed. Added the following warning to the close() method javadoc for all JE handles.
          WARNING: To guard against memory leaks, the application should discard
          all references to the closed handle.  While BDB makes an effort to discard
          references from closed objects to the allocated memory for an environment,
          this behavior is not guaranteed.  The safe course of action for an
          application is to discard all references to closed BDB objects.

    14. The JE HA Monitor is now more proactive about discovering group status changes that occur while it has no network connectivity. Previously, the Monitor could miss replication group changes if it was down or was isolated by a network partition; in JE 4.1 it would not receive that information and could not alert the application. In JE 5, the Monitor periodically and proactively checks replication group status in order to update its own notion of group status, and sends the appropriate notifications to the application.

    15. Fixed a problem arising from a network partition event, that would result in an unnecessary rollback of committed transactions. This problem typically manifests itself in the application receiving a RollbackProhibitedException if the number of transactions exceeds je.rep.txnRollbackLimit. The following describes a scenario leading to the problem:

      1. Consider a three node replication group: A, B, C with A as the master and nodes: B, C serving as replicas.
      2. A network partition isolates A from B and C, resulting in the partitions: (A) and (B,C).
      3. B and C hold an election and B becomes the master.
      4. There are now two masters: the pre-existing A which is isolated and cannot perform durable writes, and a newly elected B.
      5. The majority side (B,C) is accessible and continues to make progress performing durable writes.
      6. The master on the majority side B goes down. There is now no master on the (B,C) side since there is no quorum.
      7. Some time later, the partition is healed.
      8. C now sees A as an established master and syncs with it, potentially rolling back committed transactions as a result.

      The fix changes the final step above so that an election is held before the rollback is allowed to proceed. The election results in C being elected the new master. Node A encounters a MasterReplicaTransitionException which it must handle by closing and re-opening its environment handle so it can resume operations as a replica. [#20258] [#20572]

    16. Fix a bug where a temporary database record was unnecessarily locked during log cleaning, which also caused an assertion to fire in FileProcessor.processFoundLN. This occurs only when temporary databases are used and EnvironmentConfig.CLEANER_LAZY_MIGRATION is set to false, which is now the default in JE 5. [#20670]

    17. Fixed a bug seen in replicated environments which would manifest as the following stack trace. This was a transient problem and there was no corruption to the persistent data, but the application would be required to close and reopen the ReplicatedEnvironment instance.
      (JE 5.0.30) node2(2):/tmp/scaleDir2/env Couldn't find bucket for GTE VLSN 299,617,391 in database.
       EndBucket = 
        tracker = first=298,966,283 last=299,617,477 sync=299,617,458
               txnEnd=299,617,458  firstTracked=-1 lastOnDiskVLSN=299,617,477
      UNEXPECTED_STATE_FATAL: Unexpected internal state, unable to continue.
      Environment is invalid and must be closed.

    18. Fixed a bug in the processing of internal BDBJE metadata which would result in the following stack trace when a replicated environment was re-opened:
      java.lang.IndexOutOfBoundsException: Index: 110, Size: 110
          at java.util.ArrayList.RangeCheck(
          at java.util.ArrayList.get(
      There was no corruption to the persistent data, but without this fix, the environment would repeatedly fail to open. [#20796]

    19. Fix a bug where key prefixing (DatabaseConfig.setKeyPrefixing) was not effective when keys are inserted in sequential order, for example, during a bulk load in key order. Without any updates, deletions or non-sequential insertions, the prefix information was not persistent and therefore not used after closing and re-opening the Environment. In addition, during sequential insertion more cache space than necessary was used under certain circumstances. These problems did not occur with non-sequential insertion. [#20799]

    20. Improved DbCacheSize utility to take into account memory management enhancements and improve accuracy. Added support for key prefixing (-keyprefix), databases configured for sorted duplicates (-duplicates), and replicated environments (-replicated). Environment config params and replication config params may also be specified, since they impact memory usage as well.

      The old -density argument has been replaced by -orderedinsertion. The old -overhead argument has been removed, and the utility prints the minimum environment memory overhead.

      See the DbCacheSize javadoc for more information. It now includes a discussion of how cache memory is used and how it can be reduced using environment and database configuration options. [#20145]
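      A hypothetical invocation showing the new arguments (the class path, record count, and sizes below are placeholder examples, not taken from the changelog):

```shell
# Estimate cache size for 1M records with 16-byte keys and 100-byte
# data, in a replicated, duplicate-configured database.
java -cp je.jar com.sleepycat.je.util.DbCacheSize \
    -records 1000000 -key 16 -data 100 \
    -duplicates -replicated -orderedinsertion
```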

    21. Fix a bug where an endless loop occurs when calling a SecondaryCursor method that moves the cursor to an existing record. This occurs when both of the following conditions are true:

      1. The cursor operation is performed with READ_UNCOMMITTED isolation, via use of a LockMode or CursorConfig with this setting.
      2. The secondary index has been corrupted, for example, by not using transactions to update the primary or secondary.

      Thanks to user9057793 for reporting this on OTN. [#20822]

    22. Fix a problem where methods that return the database record count (Database.count, EntityIndex.count, StoredMap.size, etc.) would sometimes return an incorrect value due to concurrent JE background activity (log cleaning or IN compression, for example). The API contract for the count methods is that they return a correct value as long as there is no application write activity. [#20798]

    Direct Persistence Layer (DPL), Collections and Bind packages

    1. New bindings for sorted, or naturally ordered, packed integers are now available. The new bindings allow using packed integers in record keys, and provide natural sort order without a custom comparator. The API additions are:
      • com.sleepycat.bind.tuple.SortedPackedIntegerBinding -- new class
      • com.sleepycat.bind.tuple.SortedPackedLongBinding -- new class
      • com.sleepycat.bind.tuple.TupleInput -- new methods: readSortedPackedInt, getSortedPackedIntByteLength, readSortedPackedLong, getSortedPackedLongByteLength.
      • com.sleepycat.bind.tuple.TupleOutput -- new methods: writeSortedPackedInt, writeSortedPackedLong.
      • com.sleepycat.util.PackedInteger -- new methods: readSortedInt, readSortedLong, getReadSortedIntLength, getReadSortedLongLength, writeSortedInt, writeSortedLong, getWriteSortedIntLength, getWriteSortedLongLength.

      See the com.sleepycat.bind.tuple package description for an overview of the new bindings and a comparative description of all tuple bindings. [#18379]
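      The key property such sorted bindings provide can be shown with a minimal, self-contained sketch. Note this is NOT JE's actual sorted-packed format (which is variable-length); the fixed-width encoding below only demonstrates the order-preserving idea:

```java
// Minimal sketch only -- NOT JE's actual sorted-packed format.  It
// demonstrates the property the sorted bindings provide: the unsigned
// lexicographic order of the encoded bytes matches the numeric order
// of the original values, so no custom comparator is needed.
public class SortedKeyDemo {

    // Flip the sign bit and write big-endian, so negative values sort
    // below positive ones under unsigned byte comparison.
    public static byte[] encode(int v) {
        int flipped = v ^ 0x80000000;
        return new byte[] {
            (byte) (flipped >>> 24), (byte) (flipped >>> 16),
            (byte) (flipped >>> 8),  (byte) flipped
        };
    }

    // Unsigned lexicographic comparison, as a Btree would apply to keys.
    public static int compareUnsigned(byte[] a, byte[] b) {
        for (int i = 0; i < a.length && i < b.length; i++) {
            int x = a[i] & 0xFF, y = b[i] & 0xFF;
            if (x != y) {
                return x - y;
            }
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        int[] vals = { -5, -1, 0, 1, 42 };
        for (int i = 1; i < vals.length; i++) {
            if (compareUnsigned(encode(vals[i - 1]), encode(vals[i])) >= 0) {
                throw new AssertionError("order not preserved");
            }
        }
        System.out.println("byte order matches numeric order");
    }
}
```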

    2. Two bindings for the java.math.BigDecimal data type are now available.
      • The sorted format allows using BigDecimal values in record keys, and provides natural sort order without a custom comparator. The API additions are:
        • com.sleepycat.bind.tuple.SortedBigDecimalBinding -- new class
        • com.sleepycat.bind.tuple.TupleInput -- new methods: readSortedBigDecimal, getSortedBigDecimalByteLength.
        • com.sleepycat.bind.tuple.TupleOutput -- new methods: writeSortedBigDecimal, getSortedBigDecimalMaxByteLength.
      • The unsorted BigDecimal format does not provide natural sort order and is not intended for use in record keys, but has two advantages over the sorted format: trailing zeros after the decimal place are preserved, and a more compact, faster serialization format is used. The API additions are:
        • com.sleepycat.bind.tuple.BigDecimalBinding -- new class
        • com.sleepycat.bind.tuple.TupleInput -- new methods: readBigDecimal, getBigDecimalByteLength.
        • com.sleepycat.bind.tuple.TupleOutput -- new methods: writeBigDecimal, getBigDecimalMaxByteLength.

      See the com.sleepycat.bind.tuple package description for an overview of the new bindings and a comparative description of all tuple bindings. [#18379]

    3. java.math.BigDecimal is now defined as a built-in DPL simple data type. This means that BigDecimal values may be stored in DPL persistent objects, and fields of type BigDecimal may be defined as primary or secondary keys. The sorted BigDecimal format is always used in the DPL, and this provides natural sort order without a custom comparator. However, trailing zeros after the decimal place are stripped, meaning that precision is not preserved.

      If the application has previously defined a PersistentProxy for BigDecimal, special considerations are necessary when upgrading to this release:

      • The call to EntityModel.registerClass for the BigDecimal proxy class must be removed. If it is not removed, an IllegalArgumentException will be thrown by the EntityStore constructor. In general, proxies for built-in simple types are not allowed.
      • Even though the BigDecimal proxy class is not registered, the proxy class must be available for reading BigDecimal values that were written via the proxy, prior to this release. Rewriting (updating) an entity will convert the proxied BigDecimal value to the new built-in BigDecimal format. To convert all entities explicitly and efficiently, the EntityStore.evolve method may be used. After converting all entities using the proxied values, if you additionally wish to remove the proxy class itself then you must supply a Deleter mutation for the proxy class.
      • To use a field that was previously stored via a BigDecimal proxy class as a secondary key, the application must first explicitly evolve the index using EntityStore.evolve, or update all values. Then the @SecondaryKey annotation may be added to create a new secondary index.
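      The trailing-zero distinction noted above is a property of plain java.math.BigDecimal, independent of JE, and can be seen directly:

```java
import java.math.BigDecimal;

// Shows why a natural (numeric) sort format cannot preserve trailing
// zeros: two BigDecimals that are numerically equal can still differ
// in scale, and a numeric ordering must treat them as equal.
public class BigDecimalScaleDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.50");
        BigDecimal b = new BigDecimal("1.5");

        System.out.println(a.compareTo(b) == 0);  // numerically equal: true
        System.out.println(a.equals(b));          // scales differ: false

        // Normalizing the scale, as the sorted format effectively does,
        // makes the two values indistinguishable.
        System.out.println(a.stripTrailingZeros().equals(b));  // true
    }
}
```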


    4. Thanks to a contribution from Dave Brosius, a built-in proxy has been added for java.util.LinkedHashMap, which may now be used as the type of any persistent field. [#20055]

    5. Fixed a problem with MANY_TO_ONE and MANY_TO_MANY secondary index ordering when a Comparable key class is used for the primary key.

      In general, for MANY_TO_ONE and MANY_TO_MANY secondary indexes, more than one entity may have the same secondary key. When iterating over entities in secondary key order, the ordering of entities is by primary key within secondary key. For example, if Employee entities with an integer ID primary key are iterated using a String department secondary key, the iteration order is by integer ID within (grouped by) department.

      In prior releases, when a Comparable key class was used for the primary key, the Comparable defined the order of entities in the primary index. However, the Comparable.compareTo method was not used to determine primary key ordering within secondary key; instead, the natural order of the primary keys was used. In our example, if the primary key class implements Comparable to order entities in decreasing integer order, the iteration order was incorrectly in natural (increasing) integer order within department. This has been fixed, and the ordering for newly created secondary indexes now uses the Comparable; in our example the iteration order is now decreasing integer order within department.

      However, because the ordering of an existing index cannot be changed, the old ordering will apply for secondary indexes created prior to this release. To cause the correct ordering to be used for an existing index, you must delete the database for the secondary index. The next time the index is opened (via EntityStore.getSecondaryIndex) the index will be regenerated in the correct order. To delete the index database, first determine the database name; see the Database Names section of the EntityStore javadoc. Then, before opening the EntityStore, delete the index database using Environment.removeDatabase. [#17252]
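      The corrected ordering rule (primary-key order within secondary key, honoring the Comparable) can be sketched with plain Java collections; the DescId class and iterate method below are hypothetical stand-ins, not JE code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeSet;

// Hypothetical sketch (not JE code): entities grouped by a secondary
// key (department) are iterated in primary-key order within each
// secondary key, using the primary key's Comparable.
public class SecondaryOrderDemo {

    // Primary key whose Comparable sorts in DECREASING integer order.
    static final class DescId implements Comparable<DescId> {
        final int id;
        DescId(int id) { this.id = id; }
        public int compareTo(DescId o) { return Integer.compare(o.id, id); }
        public String toString() { return "#" + id; }
    }

    public static List<String> iterate(Map<String, List<DescId>> byDept) {
        List<String> out = new ArrayList<>();
        // Secondary keys (departments) in their own sort order...
        for (String dept : new TreeSet<>(byDept.keySet())) {
            List<DescId> ids = new ArrayList<>(byDept.get(dept));
            Collections.sort(ids);  // ...primary keys via compareTo within
            for (DescId id : ids) {
                out.add(dept + id);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<DescId>> byDept = new HashMap<>();
        byDept.put("eng", Arrays.asList(new DescId(1), new DescId(3)));
        byDept.put("hr", Arrays.asList(new DescId(2)));
        // Decreasing ids within each department: eng#3, eng#1, hr#2
        System.out.println(iterate(byDept));
    }
}
```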

    6. Fix a bug where if a null value was stored using the Collections API, it could not be read. An exception such as the following would occur:
      at com.sleepycat.bind.serial.SerialBinding.entryToObject(
      at com.sleepycat.collections.DataView.makeValue(
      at com.sleepycat.collections.DataCursor.getCurrentValue(
      at com.sleepycat.collections.StoredContainer.getValue(
      at com.sleepycat.collections.StoredMap.get(
      Now null values can be stored and read, as long as the value binding supports null values. Note that null values are not supported when entity bindings are used, such as when using the DPL. Thanks to 'annie' on OTN for reporting the problem. [#18633]

    7. A field in a DPL entity may now refer to its enclosing entity object. Previously, an IllegalArgumentException was thrown in this situation. Note that references to other entities (not the enclosing object) are not permitted. Thanks to Trevor (tkram01) on OTN for reporting the problem. [#17525]

    8. Fix a bug where adding new secondary keys to an abstract entity class caused an error. An exception such as the following would occur:
      java.lang.InstantiationException UNEXPECTED_EXCEPTION:
      Unexpected internal Exception, may have side effects.
      at com.sleepycat.compat.DbCompat.unexpectedException(
      at com.sleepycat.persist.impl.ReflectionAccessor.newInstance(
      at com.sleepycat.persist.impl.ComplexFormat.checkNewSecKeyInitializer(
      at com.sleepycat.persist.impl.ComplexFormat.initialize(
      at com.sleepycat.persist.impl.Format.initializeIfNeeded(
      at com.sleepycat.persist.impl.PersistCatalog.init(
      at com.sleepycat.persist.impl.PersistCatalog.(
      at com.sleepycat.persist.impl.Store.(
      at com.sleepycat.persist.EntityStore.(
      Now adding new secondary keys into abstract entity classes is allowed in the DPL. Thanks to user 786189 on OTN for reporting the problem. [#19358]

    9. An IllegalStateException is now thrown when calling EntityStore.setSequenceConfig and the sequence has already been opened via a call to EntityStore.getPrimaryIndex. This is the behavior previously specified in the javadoc for setSequenceConfig. Thanks to patriciaG on OTN for reporting the problem. [#19356]

    10. Previously, an exception was thrown when storing an enum class with constant-specific methods (constants declared with a class body) via the DPL:
      java.lang.IllegalArgumentException: Class could not be loaded or is not persistent.
      Now storing an enum class with constant-specific methods is allowed in the DPL. Thanks to Mikhail Barg on OTN for reporting the problem. [#18357]
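      The root cause relates to how Java compiles constant-specific bodies: each such constant becomes an anonymous subclass of the enum, so its runtime class differs from the declaring class. A self-contained illustration (Op is a made-up enum, not from the report):

```java
// A constant with a body (a "constant-specific method") compiles to an
// anonymous subclass, so its runtime class is not the declaring enum
// class -- the case the DPL previously failed to load.
enum Op {
    PLUS  { int apply(int a, int b) { return a + b; } },
    TIMES { int apply(int a, int b) { return a * b; } };

    abstract int apply(int a, int b);
}

public class EnumBodyDemo {
    public static void main(String[] args) {
        // Op.PLUS is an instance of an anonymous subclass of Op...
        System.out.println(Op.PLUS.getClass() == Op.class);           // false
        // ...but its declaring class is still Op.
        System.out.println(Op.PLUS.getDeclaringClass() == Op.class);  // true
        System.out.println(Op.PLUS.apply(2, 3));                      // 5
    }
}
```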

    11. Enum and array types may now be registered via EntityModel.registerClass. This is useful when an enum or array class is unknown to the DPL but will be used in a Converter mutation.

      The related formats for Map and Collection types are also collected in FieldInfo.collectRelatedFormats; for example, an EnumFormat is created for the MyEnum class when a Map<String, MyEnum> field is encountered.

      Thanks to James on OTN for reporting the problem. [#19377]

    12. The performance of serializing and deserializing String data in the DPL has been improved by treating the String type as a primitive type. Doing so avoids storing the format ID, and TupleInput.readString/TupleOutput.writeString are used directly to serialize and deserialize String data.

      In our benchmark of 500,000 records containing only String fields, with each record 180 bytes in size, read performance improved by 14% and write performance by 10%. [#19247]

    13. The PrimaryIndex.get operation has been improved by avoiding deserialization of the already-known primary key when deserializing an entity. Instead, the known primary key is set directly into the primary key field by the accessor.

      In our benchmark of 500,000 records, the primary key of each record is a composite key containing an integer field and a String field, and the size of each record is 180 bytes. With this change, the performance of the PrimaryIndex.get operation (reading 500,000 records, using pre-load mode to avoid I/O) improved by nearly 10%.

      The improvement is more significant when the primary key is large and complex and the data is small. [#19248]

    Utility Changes:

    1. A fix has been made for a bug which allowed an Environment to be closed while a DbBackup was in progress. This could cause a checksum exception (but not data loss). An EnvironmentFailureException is now thrown by Environment.close() if the last open Environment handle is closed between calls to DbBackup.startBackup() and DbBackup.endBackup(). [#19207]

    2. The DbGroupAdmin utility and ReplicationGroupAdmin class now provide a new updateAddress() method which lets the user change the hostname and port of a member of a JE replication group. This new functionality is useful when a node must be moved to a new host. [#18632]

    3. A new utility has been added to reset the members of a replication group, replacing the group with a new group consisting of a single new member. This utility is useful when a copy of an existing replicated environment needs to be used at a different site, with the same data, but with a different initial node that can be used to grow the replication group as usual, such as may be the case when an application is moved from a staging to a production environment. The utility can also be used to change the group name associated with the environment. [#19886]

    Documentation, Installation and Integration:

    1. A new upgrade how-to section for replicated applications, "Upgrading a JE Replication Group," is now available in the Administration chapter (Chapter 7) of the "Getting Started with JE High Availability" guide.

    2. Several additions describing the backup process were made to the DbBackup javadoc.
      1. A checkpoint should be performed before calling startBackup, to reduce recovery time after a restore.
      2. Log files should be verified before being copied to the backup set.
      3. A list of the current files in the environment should be obtained and used to avoid unused files after a restore.
      4. Incremental backups are now documented in more detail.
      See the DbBackup javadoc for details. [#19894]

    3. Added javadoc in several places recommending the use of compressed oops, as well as a warning that it will not be honored by JE (and JE cache memory will be wasted) unless it is specified explicitly on the Java command line. For example, see EnvironmentConfig.setCacheSize.
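      As a configuration fragment (the application jar name is a placeholder), the standard HotSpot flag would be specified explicitly on the command line:

```shell
# Enable compressed oops explicitly so JE can account for the smaller
# reference size in its cache memory calculations.
java -XX:+UseCompressedOops -jar myapp.jar
```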