Oracle Berkeley DB Java Edition 11G R2 Change Log

Library Version, Release 5.0.34

Log File On-Disk Format Changes:

JE 5.0.34 has moved to on-disk file format 8.

The change is forward compatible in that JE files created with release 4.1 and earlier can be read when opened with JE 5.0.34. The change is not backward compatible in that files created with JE 5.0 cannot be read by earlier releases. Note that if an existing environment is opened read/write, a new log file is written by JE 5.0 and the environment can no longer be read by earlier releases.

There are two important notes about the file format change.

  1. The file format change enabled significant improvements in operation performance, memory and disk footprint, and concurrency for databases with duplicate keys. Environments which contain databases with duplicate keys must run an upgrade utility before opening an environment with this release. See the Performance section for more information.
  2. An application which uses JE replication may not upgrade directly from JE 4.0 to JE 5.0. Instead, the upgrade must be done from JE 4.0 to JE 4.1 and then to JE 5.0. Applications already at JE 4.1 are not affected. Upgrade guidance can be found in the new chapter, "Upgrading a JE Replication Group", in the "Getting Started with BDB JE High Availability" guide.

Changes in 5.0.34

New Features:

  1. A new class, DiskOrderedCursor, has been added which lets an application iterate over the records in a Database in unsorted order, to improve retrieval speed. This can be useful when the application needs to scan all records in a database and will apply filtering logic that does not need key-ordered retrieval. The cursor optimizes the iteration by walking in Log Sequence Number (LSN) order rather than key order. LSN order approximates disk sector order, and retrieving in disk order reduces I/O cost.

    A DiskOrderedCursor can be obtained via Database.openCursor(DiskOrderedCursorConfig). Note that creating an instance of the DiskOrderedCursor disables the file deletion done by log cleaning until the close() method has been called. See the javadoc for DiskOrderedCursor for more detailed information. [#15260]
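    A minimal sketch of such a scan (assuming an already-open Database and je.jar on the classpath; error handling and environment setup omitted):

```java
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.DiskOrderedCursor;
import com.sleepycat.je.DiskOrderedCursorConfig;
import com.sleepycat.je.OperationStatus;

public class DiskOrderScan {
    // Scan all records in LSN (approximately disk) order; useful for
    // full scans with filtering that does not need key ordering.
    static void scan(Database db) {
        DiskOrderedCursor cursor =
            db.openCursor(new DiskOrderedCursorConfig());
        try {
            DatabaseEntry key = new DatabaseEntry();
            DatabaseEntry data = new DatabaseEntry();
            while (cursor.getNext(key, data, null) ==
                   OperationStatus.SUCCESS) {
                // Process key/data; ordering is not by key.
            }
        } finally {
            // Closing re-enables log file deletion by the cleaner.
            cursor.close();
        }
    }
}
```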

  2. A new Environment.preload(Database[], PreloadConfig) method has been added which permits preloading multiple databases via a single method call rather than multiple calls to Database.preload(). Preload is implemented to optimize I/O cost by fetching the records of a Database in disk order, so that disk access is sequential rather than random. Using the multi-database Environment.preload() lets the preload operation batch the records for all of the target Databases so that multiple scans over the log are not necessary.

    A progress mechanism has also been added which lets a caller of the preload() method receive feedback on whether progress is being made; see the ProgressListener interface.

    Two new configurations are available to bound the amount of memory used by preload processing, at the expense of preload performance: one directs preload to partition its work into batches, and the other limits the amount of memory used by preload outside of the JE cache. [#15260] [#18153] [#19306]
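    A sketch of a multi-database preload (the PreloadConfig setter and statistics getter shown are assumptions based on the javadoc; je.jar must be on the classpath):

```java
import com.sleepycat.je.Database;
import com.sleepycat.je.Environment;
import com.sleepycat.je.PreloadConfig;
import com.sleepycat.je.PreloadStats;

public class PreloadExample {
    // Preload two databases in a single disk-ordered pass over the log.
    static void preloadAll(Environment env, Database db1, Database db2) {
        PreloadConfig config = new PreloadConfig();
        config.setLoadLNs(true); // also load leaf nodes (record data)
        PreloadStats stats =
            env.preload(new Database[] { db1, db2 }, config);
        System.out.println("BINs loaded: " + stats.getNBINsLoaded());
    }
}
```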

  3. The JE environment can now be spread across multiple subdirectories. Environment subdirectories may be used to spread an environment's *.jdb files over multiple directories, and therefore over multiple disks or file systems. Environment subdirectories reside in the environment home directory and are named data001/ through dataNNN/, consecutively, where NNN is the value of je.log.nDataDirectories. A typical configuration would be to have each of the dataNNN/ names be symbolic links to actual directories which each reside on separate file systems or disks.

    Environment subdirectories are enabled through the je.log.nDataDirectories environment parameter. If 0 (the default), all log files (*.jdb) will reside in the environment home directory passed to the Environment constructor. A non-zero value indicates the number of environment subdirectories to use for holding the environment's log files.

    If data subdirectories are used (i.e. je.log.nDataDirectories > 0), this parameter must be set when the environment is initially created. Like the environment home directory, each and every one of the dataNNN/ subdirectories must also be present and writable. This parameter must be set to the same value for all subsequent openings of the environment or an exception will be thrown.

    If the set of existing dataNNN/ subdirectories is not equivalent to the set { 1 ... je.log.nDataDirectories } when the environment is opened, an EnvironmentFailureException will be thrown, and the Environment will fail to be opened.

    DbBackup.getLogFilesInBackupSet() now returns the subdirectory name and file separator prepended to the file name if je.log.nDataDirectories > 0. [#19125]
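    A sketch of enabling environment subdirectories (assumes the directories data001/ through data004/ already exist under the home directory and are writable):

```java
import java.io.File;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class MultiDirEnv {
    static Environment open(File home) {
        EnvironmentConfig config = new EnvironmentConfig();
        config.setAllowCreate(true);
        // Spread *.jdb files across data001/ .. data004/ under 'home'.
        // Must be set to the same value on every open of this environment.
        config.setConfigParam("je.log.nDataDirectories", "4");
        return new Environment(home, config);
    }
}
```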

  4. A new class lets the HA application add more application-specific information to the notion of node state in a replication group. It is meant to support more application-specific semantics when assessing the availability of a given member node. [#18046]

  5. New options have been added for changing the host of a JE replication node, and for moving a JE replication group. See the Utilities section.

  6. Applications may now specify a custom java.util.logging.Handler per Environment. Logging messages generated by JE will go to this handler. See EnvironmentConfig.setLoggingHandler() and the memo Using JE trace logging. [#19110]

  7. Replicated nodes can now be opened in UNKNOWN state, to support read only operations in a replicated system when a master is not available. Prior to JE 5, replicated nodes could only be opened in MASTER or REPLICA state, and if a master could not be elected, the node could not be opened. This is enabled through the new configuration parameter: ReplicationConfig.ENV_UNKNOWN_STATE_TIMEOUT. Please review its javadoc for further details. [#19338]
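    A sketch of opening a node that may come up in UNKNOWN state for read-only use (the group/node names, port, and the "5 s" timeout value are illustrative assumptions):

```java
import java.io.File;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.rep.ReplicatedEnvironment;
import com.sleepycat.je.rep.ReplicationConfig;

public class UnknownStateOpen {
    static ReplicatedEnvironment open(File home) {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setTransactional(true);
        ReplicationConfig repConfig =
            new ReplicationConfig("myGroup", "node1", "host1:5001");
        // Stop waiting for a MASTER/REPLICA transition after 5 seconds
        // and open in UNKNOWN state, permitting read-only operations.
        repConfig.setConfigParam(
            ReplicationConfig.ENV_UNKNOWN_STATE_TIMEOUT, "5 s");
        return new ReplicatedEnvironment(home, repConfig, envConfig);
    }
}
```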

  8. The master node in a replication group now rebroadcasts election results on a periodic basis, to help restore normal functioning of a replication group after a network partition has been resolved. The default period is a minute and can be modified by the new ReplicationConfig parameter ReplicationConfig.ELECTIONS_REBROADCAST_PERIOD. Please review its javadoc for further details. [#20220]

  9. A new listener mechanism is available to give the application feedback about potentially long running activities such as environment startup (recovery), replication stream syncup, and database preload. See the ProgressListener interface and its related enums. [#20043]

  10. New methods were added to allow quickly skipping over a specified number of key/value pairs using a cursor. For details, see the javadoc for Cursor.skipNext and Cursor.skipPrev. [#19165]
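    A sketch of skipping records with the new methods (the skipNext signature shown is an assumption based on the javadoc description):

```java
import com.sleepycat.je.Cursor;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.LockMode;

public class SkipExample {
    // Move 'cursor' up to 1000 records forward from its current
    // position, returning how many records were actually skipped.
    static long skipForward(Cursor cursor) {
        DatabaseEntry key = new DatabaseEntry();
        DatabaseEntry data = new DatabaseEntry();
        return cursor.skipNext(1000, key, data, LockMode.DEFAULT);
    }
}
```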

  11. A per-Environment ClassLoader may now be configured and will be used by JE for loading all user-supplied classes, including btree comparators, duplicate comparators, class instances serialized by SerialBinding, and DPL persistent classes. This is useful when separate ClassLoaders are used for the JE jar file and the application's classes, for example, when running under an application server framework. The ClassLoader is configured using EnvironmentConfig.setClassLoader. [#18368]

  12. The Closeable interface is now implemented by all JE classes and interfaces with a public void close() method. This allows using these objects with the Java 1.7 try-with-resources statement, for applications compiled and run with Java 1.7 or later. See AutoCloseable, which is a superinterface of Closeable in Java 1.7.

    The following JE classes and interfaces now implement Closeable, and on Java 1.7 AutoCloseable.
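    For example, on Java 1.7 an Environment and Database can be managed with try-with-resources (a sketch; configuration details and the database name are illustrative):

```java
import java.io.File;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class TryWithResources {
    static void run(File home) {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        // Both handles are closed automatically, in reverse order,
        // when the try block exits.
        try (Environment env = new Environment(home, envConfig);
             Database db = env.openDatabase(null, "mydb", dbConfig)) {
            // use db here
        }
    }
}
```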


  13. The Environment.flushLog method has been added. It can be used to make durable, by writing to the log, all preceding non-transactional write operations, as well as any preceding transactions that were committed with no-sync durability. With the addition of this method, it is no longer necessary to perform a checkpoint, call Environment.sync, or commit a transaction (with sync or write-no-sync durability) in order to flush buffered data for durability reasons. [#19111]
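    A sketch of using the new method (the boolean argument is assumed to control whether the flush also fsyncs the log to stable storage):

```java
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;

public class FlushExample {
    static void writeThenFlush(Environment env, Database db,
                               DatabaseEntry key, DatabaseEntry data) {
        // A non-transactional write; durable only after a flush.
        db.put(null, key, data);
        // Make all preceding no-sync writes durable. Passing true
        // also forces an fsync to the storage device.
        env.flushLog(true);
    }
}
```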

API Changes:

  1. Made the EnvironmentConfig and ReplicationConfig classes Serializable. [#19241]

  2. The je.env.fairLatches environment parameter has been deprecated and no longer has any effect.

  3. The behavior, although not the syntax or intent, of EnvironmentConfig.CHECKPOINTER_BYTES_INTERVAL has changed. Previously, this interval defined the byte distance between the end of one checkpoint and the start of the next. Now it defines the byte distance between the start of one checkpoint and start of the next. In other words, now the interval includes the checkpoint itself, which in some cases can be large. This now more accurately reflects the intention of the parameter, which is to bound the recovery interval, which is proportional to the time to recover (open the Environment) after a crash. It does mean, however, that checkpoints may occur more often for the same configured interval, and some applications may wish to adjust their configured setting accordingly. [#19704]

  4. Cursor.getSearchBothRange for a non-duplicates database has been corrected to behave exactly as Cursor.getSearchBoth. Previously getSearchBothRange returned a data item that was greater or equal to the data search parameter, which was incorrect. Now it only returns a data item that is equal to the data search parameter. [#19165]

  5. See the description of the new ProgressListener class in the New Features section above.

  6. The default value for EnvironmentConfig.CLEANER_LAZY_MIGRATION has been changed from true to false. Over several releases the benefits of setting this to true have decreased and are now less than the benefits of setting it to false. See the javadoc for this parameter for details. [#20588]

Performance and other General Changes

  1. Performance of record update and deletion operations has been significantly improved when the record is not in the JE cache and the application does not need to read the record prior to performing the update or deletion. Previously, the old version of the record was always read into cache, if not already present, by the update or deletion operation. If the record was not already in cache, this often resulted in an expensive random I/O. Now, because of internal changes to record locking, records are not read by update or deletion operations, and this can significantly reduce random I/O for delete or update-heavy workloads. An exception to this rule is when a record is updated or deleted in a primary database that has associated secondary indices. In this case, the primary record must be read in order to update the secondary indices.

    As a result of this change, the log cleaner must now sometimes estimate the size of records that are made obsolete by updates and deletions, and must sometimes "probe" a log file to determine record sizes. Several statistics have been added to the EnvironmentStats class to show this activity.

    The DbSpace utility now also prints the first two of these new statistics.


  2. Made an internal format change for databases with duplicate keys that improves operation performance, reduces memory and disk overhead, and increases concurrency. The format change requires that an upgrade utility be run before opening an environment with this release, if any of its databases are configured for duplicates or a DPL secondary key with a MANY_TO_XXX relationship is used. The upgrade procedure is described at the end of this note.

    Internal Format Change

    This information is included for users who understand Btree internals and wish to know what changed internally.

    Performance Improvements

    Other Behavioral Changes

    Upgrade Procedure

    Because of this format change, an environment containing databases configured for duplicates must be converted with a utility program prior to opening the environment with this release. A database might be explicitly configured for duplicates using DatabaseConfig.setSortedDuplicates(true), or implicitly configured for duplicates by using a DPL MANY_TO_XXX relationship (Relationship.MANY_TO_ONE or Relationship.MANY_TO_MANY).

    One of two utility programs must be used, which are only available in the release package for JE 4.1.10, or a later release of JE 4.1. If you are currently running a release earlier than JE 4.1.10, then you must download the latest JE 4.1 release package in order to run these utilities.

    The steps for upgrading are as follows.

    1. Stop the application using BDB JE.
    2. Run the DbPreUpgrade_4_1 or DbRepPreUpgrade_4_1 utility. If you are using a regular non-replicated Environment:
          java -jar je-4.1.10.jar DbPreUpgrade_4_1 -h <dir>
      If you are using a JE ReplicatedEnvironment:
          java -jar je-4.1.10.jar DbRepPreUpgrade_4_1
               -h <dir>
               -groupName <group name>
               -nodeName <node name>
               -nodeHostPort <host:port>
    3. Finally, start the application using the current JE 5.0 (or later) release of BDB JE.

    The second step -- running the utility program -- does not perform data conversion. This step simply performs a special checkpoint to prepare the environment for upgrade. It should take no longer than an ordinary startup and shutdown.

    During the last step -- when the application opens the JE environment using the current release -- all databases with duplicates will automatically be converted before the Environment or ReplicatedEnvironment constructor returns. The conversion only rewrites internal nodes in the Btree, not leaf nodes. In a test with a 500 MB cache, conversion of a 10 million record data set (8 byte key and data) took between 1.5 and 6.5 minutes, depending on the number of duplicates per key. The high end of this range is when 10 duplicates per key were used; the low end is with 1 million duplicates per key.

    To make the conversion predictable during deployment, users should measure the conversion time on a non-production system before upgrading a deployed system. When duplicates are converted, the Btree internal nodes are preloaded into the JE cache. A new configuration option, EnvironmentConfig.ENV_DUP_CONVERT_PRELOAD_ALL, can be set to false to optimize this process if the cache is not large enough to hold the internal nodes for all databases. For more information, see the javadoc for this property. [#19165]

  3. An improvement has been made that requires significantly less writing per checkpoint, less writing during eviction, and less metadata overhead in the JE on-disk log files.

    Previously, delta log entries for bottom internal nodes, called BINDeltas, were written by checkpoints rather than writing full BINs, and full BINs were written less frequently (see EnvironmentConfig.TREE_MAX_DELTA and TREE_BIN_DELTA). This is still the case. However, the approach taken previously required that the same delta information be repeatedly written at each checkpoint, even if it had not changed since the last checkpoint. Now, delta information is only written when it changes.

    By significantly reducing writing, the new approach provides overall performance improvements. However, there is also an additional cost to the new approach: When a BIN is not in cache, fetching the BIN now often requires two random reads instead of just one; one read to fetch the BINDelta and another to fetch the last full BIN. For applications where all active BINs fit in cache, this adds to the I/O cost of initially populating the cache. For applications where active BINs do not fit in cache, this adds to the per-operation cost of fetching a record (an LN) when its parent BIN is not in cache. In our tests, the lower write rate more than compensates for the additional I/O of fetching the BINDelta, but the benefit is greatest when all BINs fit in cache.


  4. Improvements were made to recovery (Environment open) performance by changing the behavior of checkpoints in certain cases. Recovery should always be very quick after certain types of checkpoints. In addition, a problem was fixed where periodic checkpoints (performed by the checkpointer thread or by calling Environment.checkpoint) would cause long recovery times under certain circumstances.

    As a part of this work, the actions invoked by ReplicatedEnvironment.shutdownGroup() were streamlined to use the setMinimizeRecoveryTime() option and to reduce spurious timeouts during the shutdown processing. [#19559]

  5. Node fanouts (see DatabaseConfig.setNodeMaxEntries) are now mutable and persistent database attributes. They were previously permitted to mutate, but the changed attribute value wasn't saved persistently, so the new value might sometimes revert to the previously existing setting. This has been fixed. In addition, the javadoc for DatabaseConfig has been expanded to clarify which attributes are mutable vs. fixed, and persistent vs. temporary. [#18262]

  6. The internal Database ID field has been enlarged from a 32-bit (int) quantity to a 64-bit (long) quantity. A Database ID value is assigned from a single sequence in each Environment for each Database created and truncated. This change therefore increases the total number of Databases that can be created and the number of truncate operations that can be performed over the lifetime of an Environment. Note that one bit of the ID is used to distinguish replicated from local databases, so the total number of Databases and truncate operations per Environment is now effectively 2^63. [#18540]

  7. Fixed a bug where replicated parameters were not recognized when opening a read-only standalone Environment on a replicated Environment home. [#19080]

  8. Added Implementation-Title, Implementation-Version, Implementation-Vendor, Implementation-URL entries to the je.jar MANIFEST.MF file. [#19320]

  9. Added a check to make sure that a log write buffer is never larger than the size of a log file. [#19324]

  10. Fixed a problem that caused high CPU utilization during log cleaning, as shown in the following stack trace.
    "Cleaner-5" daemon prio=10 tid=0x00002aaae8008800 nid=0xaeb runnable [0x0000000042ac9000]
    java.lang.Thread.State: RUNNABLE
    locked <0x00002aaab0652408> (a

  11. Added support for truncating and removing a single database in the same transaction. Previously, if this was attempted, the Environment.removeDatabase method would throw an exception, and the application would subsequently have to abort the transaction. Now this is allowed. [#19636]

  12. Added EnvironmentConfig.TREE_COMPACT_MAX_KEY_LENGTH for user configuration of the in-memory compaction of keys in the Btree. Previously, in-memory keys were compacted but the key size threshold was fixed at 16 bytes. Now the key size threshold is configurable and has a 16 byte default value. For more information, see the EnvironmentConfig.TREE_COMPACT_MAX_KEY_LENGTH javadoc. [#20120]

  13. Fixed a problem where a Database handle kept a reference to the environment after it was closed. Added the following warning to the close() method javadoc for all JE handles.
        WARNING: To guard against memory leaks, the application should discard
        all references to the closed handle.  While BDB makes an effort to discard
        references from closed objects to the allocated memory for an environment,
        this behavior is not guaranteed.  The safe course of action for an
        application is to discard all references to closed BDB objects.

  14. The JE HA Monitor is now more proactive about discovering group status changes that occur while it has no network connectivity. Previously, the Monitor could miss replication group changes if it was down, or isolated due to a network partition. In JE 4.1, the Monitor would not receive that information and would not be able to alert the application. In JE 5, the Monitor periodically and proactively checks replication group status in order to update its own notion of group status, and sends the appropriate notifications to the application.

  15. Fixed a problem arising from a network partition event, that would result in an unnecessary rollback of committed transactions. This problem typically manifests itself in the application receiving a RollbackProhibitedException if the number of transactions exceeds je.rep.txnRollbackLimit. The following describes a scenario leading to the problem:

    1. Consider a three-node replication group: A, B, and C, with A as the master and B and C serving as replicas.
    2. A network partition isolates A from B and C, resulting in the partitions: (A) and (B,C).
    3. B and C hold an election and B becomes the master.
    4. There are now two masters: the pre-existing A which is isolated and cannot perform durable writes, and a newly elected B.
    5. The majority side (B,C) is accessible and continues to make progress performing durable writes.
    6. The master on the majority side B goes down. There is now no master on the (B,C) side since there is no quorum.
    7. Some time later, the partition is healed.
    8. C now sees A as an established master and syncs with it, potentially rolling back committed transactions as a result.

    The fix changes the final step above so that an election is held before the rollback is allowed to proceed. The election results in C being elected the new master. Node A encounters a MasterReplicaTransitionException which it must handle by closing and re-opening its environment handle so it can resume operations as a replica. [#20258] [#20572]

  16. Fixed a bug where a temporary database record was unnecessarily locked during log cleaning, which also caused an assertion to fire in FileProcessor.processFoundLN. This only occurs when temporary databases are used and EnvironmentConfig.CLEANER_LAZY_MIGRATION is set to false, which is now the default in JE 5. [#20670]

  17. Fixed a bug seen in replicated environments which would manifest as the following stack trace. This was a transient problem and there was no corruption to the persistent data, but the application would be required to close and reopen the ReplicatedEnvironment instance.
    (JE 5.0.30) node2(2):/tmp/scaleDir2/env Couldn't find bucket for GTE VLSN 299,617,391 in database.
     EndBucket =  
      tracker = first=298,966,283 last=299,617,477 sync=299,617,458
             txnEnd=299,617,458  firstTracked=-1 lastOnDiskVLSN=299,617,477 
    UNEXPECTED_STATE_FATAL: Unexpected internal state, unable to continue. 
    Environment is invalid and must be closed.

  18. Fixed a bug in the processing of internal BDBJE metadata which would result in the following stack trace when a replicated environment was re-opened:
    java.lang.IndexOutOfBoundsException: Index: 110, Size: 110
        at java.util.ArrayList.RangeCheck(
        at java.util.ArrayList.get(
    There was no corruption to the persistent data, but without this fix, the environment would repeatedly fail to open. [#20796]

  19. Fixed a bug where key prefixing (DatabaseConfig.setKeyPrefixing) was not effective when keys were inserted in sequential order, for example, during a bulk load in key order. Without any updates, deletions or non-sequential insertions, the prefix information was not persistent and therefore not used after closing and re-opening the Environment. In addition, during sequential insertion, more cache space than necessary was used under certain circumstances. These problems did not occur with non-sequential insertion. [#20799]

  20. Improved DbCacheSize utility to take into account memory management enhancements and improve accuracy. Added support for key prefixing (-keyprefix), databases configured for sorted duplicates (-duplicates), and replicated environments (-replicated). Environment config params and replication config params may also be specified, since they impact memory usage as well.

    The old -density argument has been replaced by -orderedinsertion. The old -overhead argument has been removed, and the utility prints the minimum environment memory overhead.

    See the DbCacheSize javadoc for more information. It now includes a discussion of how cache memory is used and how it can be reduced using environment and database configuration options. [#20145]

  21. Fixed a bug where an endless loop occurred when calling a SecondaryCursor method that moves the cursor to an existing record. This occurred when both of the following conditions were true:

    1. The cursor operation is performed with READ_UNCOMMITTED isolation, via use of a LockMode or CursorConfig with this setting.
    2. The secondary index has been corrupted, for example, by not using transactions to update the primary or secondary.

    Thanks to user9057793 for reporting this on OTN. [#20822]

  22. Fixed a problem where methods that return the database count (Database.count, EntityIndex.count, StoredMap.size, etc.) would sometimes return an incorrect value due to concurrent JE background activity (log cleaning or IN compression, for example). The API contract for the count methods is that they return a correct value as long as there is no application write activity. [#20798]

Direct Persistence Layer (DPL), Collections and Bind packages

  1. New bindings for sorted, or naturally ordered, packed integers are now available. The new bindings allow using packed integers in record keys, and provide natural sort order without a custom comparator.

    See the com.sleepycat.bind.tuple package description for an overview of the new bindings and a comparative description of all tuple bindings. [#18379]

  2. Two bindings for the java.math.BigDecimal data type are now available.

    See the com.sleepycat.bind.tuple package description for an overview of the new bindings and a comparative description of all tuple bindings. [#18379]

  3. java.math.BigDecimal is now defined as a built-in DPL simple data type. This means that BigDecimal values may be stored in DPL persistent objects, and fields of type BigDecimal may be defined as primary or secondary keys. The sorted BigDecimal format is always used in the DPL, and this provides natural sort order without a custom comparator. However, trailing zeros after the decimal place are stripped, meaning that precision is not preserved.

    If the application has previously defined a PersistentProxy for BigDecimal, special considerations are necessary when upgrading to this release.


  4. Thanks to a contribution from Dave Brosius, a built-in proxy has been added for java.util.LinkedHashMap, which may now be used as the type of any persistent field. [#20055]

  5. Fixed a problem with MANY_TO_ONE and MANY_TO_MANY secondary index ordering when a Comparable key class is used for the primary key.

    In general, for MANY_TO_ONE and MANY_TO_MANY secondary indexes, more than one entity may have the same secondary key. When iterating over entities in secondary key order, the ordering of entities is by primary key within secondary key. For example, if Employee entities with an integer ID primary key are iterated using a String department secondary key, the iteration order is by integer ID within (grouped by) department.

    In prior releases, when a Comparable key class was used for the primary key, the Comparable defined the order of entities in the primary index. However, the Comparable.compareTo method was not used to determine primary key ordering within secondary key. Instead, the natural order of the primary keys was used. In our example, if the primary key class implements Comparable to order entities in decreasing integer order, the iteration order was incorrectly in natural integer order (increasing) within department. This has been fixed, and the ordering for newly created secondary indexes now uses the Comparable; in our example the iteration order is now decreasing integer order within department.

    However, because the ordering of an existing index cannot be changed, the old ordering will apply for secondary indexes created prior to this release. To cause the correct ordering to be used for an existing index, you must delete the database for the secondary index. The next time the index is opened (via EntityStore.getSecondaryIndex) the index will be regenerated in the correct order. To delete the index database, first determine the database name; see the Database Names section of the EntityStore javadoc. Then, before opening the EntityStore, delete the index database using Environment.removeDatabase. [#17252]

  6. Fixed a bug where a null value stored using the Collections API could not be read back. An exception such as the following would occur:
    at com.sleepycat.bind.serial.SerialBinding.entryToObject(
    at com.sleepycat.collections.DataView.makeValue(
    at com.sleepycat.collections.DataCursor.getCurrentValue(
    at com.sleepycat.collections.StoredContainer.getValue(
    at com.sleepycat.collections.StoredMap.get(
    Now null values can be stored and read, as long as the value binding supports null values. Note that null values are not supported when entity bindings are used, such as when using the DPL. Thanks to 'annie' on OTN for reporting the problem. [#18633]

  7. A field in a DPL entity may now refer to its enclosing entity object. Previously, an IllegalArgumentException was thrown in this situation. Note that references to other entities (not the enclosing object) are not permitted. Thanks to Trevor (tkram01) on OTN for reporting the problem. [#17525]

  8. Fixed a bug where adding new secondary keys to an abstract entity class caused an error. An exception such as the following would occur: java.lang.InstantiationException UNEXPECTED_EXCEPTION:
    Unexpected internal Exception, may have side effects.
    at com.sleepycat.compat.DbCompat.unexpectedException(
    at com.sleepycat.persist.impl.ReflectionAccessor.newInstance(
    at com.sleepycat.persist.impl.ComplexFormat.checkNewSecKeyInitializer(
    at com.sleepycat.persist.impl.ComplexFormat.initialize(
    at com.sleepycat.persist.impl.Format.initializeIfNeeded(
    at com.sleepycat.persist.impl.PersistCatalog.init(
    at com.sleepycat.persist.impl.PersistCatalog.(
    at com.sleepycat.persist.impl.Store.(
    at com.sleepycat.persist.EntityStore.(
    Now adding new secondary keys to abstract entity classes is allowed in the DPL. Thanks to user 786189 on OTN for reporting the problem. [#19358]

  9. An IllegalStateException is now thrown when calling EntityStore.setSequenceConfig and the sequence has already been opened via a call to EntityStore.getPrimaryIndex. This is the behavior previously specified in the javadoc for setSequenceConfig. Thanks to patriciaG on OTN for reporting the problem. [#19356]

  10. Previously, an exception was thrown when storing an enum class with constant-specific methods via the DPL:
    java.lang.IllegalArgumentException: Class could not be loaded or is not persistent.
    Now storing an enum class with constant-specific methods is allowed in the DPL. Thanks to Mikhail Barg on OTN for reporting the problem. [#18357]

  11. Enum and array types may now be registered via EntityModel.registerClass. This new feature is useful when enum or array classes are unknown to the DPL but will be used in a converter mutation.

    The related formats for Map and Collection types are also collected in FieldInfo.collectRelatedFormats; for example, an EnumFormat is created for the MyEnum class when a Map<String, MyEnum> field is encountered.

    Thanks to James on OTN for reporting the problem. [#19377]

  12. The performance of serializing and deserializing String data in the DPL has been improved by treating the String type as a primitive type. This avoids storing the format ID, and TupleInput.readString/TupleOutput.writeString are used directly to serialize and deserialize String data.

    In our benchmark of 500,000 records containing only String fields, with a record size of 180 bytes, read performance improved by 14% and write performance by 10%. [#19247]

  13. The PrimaryIndex.get operation has been improved by avoiding deserialization of the known primary key when deserializing an entity. Instead, the known primary key is set directly into the primary key field by the accessor.

    In our benchmark of 500,000 records, where the primary key in each record is a composite key containing an integer field and a String field and the size of each record is 180 bytes, the performance of the PrimaryIndex.get operation (reading 500,000 records, using preload to avoid I/O) improved by nearly 10%.

    The improvement is more significant when the primary key is large and complex and the data is small. [#19248]

Utility Changes:

  1. A fix has been made for a bug which allowed an Environment to be closed while a DbBackup was in progress. This could cause a checksum exception (but not data loss). An EnvironmentFailureException is now thrown by Environment.close() if the last open Environment handle is closed between calls to DbBackup.startBackup() and DbBackup.endBackup(). [#19207]

  2. The DbGroupAdmin utility and ReplicationGroupAdmin class now provide a new updateAddress() method which lets the user change the hostname and port of a member of a JE replication group. This new functionality is useful when a node must be moved to a new host. [#18632]

  3. A new utility has been added to reset the members of a replication group, replacing the group with a new group consisting of a single new member. This utility is useful when a copy of an existing replicated environment needs to be used at a different site, with the same data, but with a different initial node that can be used to grow the replication group as usual, such as may be the case when an application is moved from a staging to a production environment. The utility can also be used to change the group name associated with the environment. [#19886]

Documentation, Installation and Integration:

  1. A new upgrade how-to section for replicated applications, "Upgrading a JE Replication Group," is now available in the Administration chapter (Chapter 7) of the "Getting Started with JE High Availability" guide.

  2. Several additions regarding the backup process were made to the DbBackup javadoc.
    1. A checkpoint should be performed before calling startBackup, to reduce recovery time after a restore.
    2. Log files should be verified before being copied to the backup set.
    3. A list of the current files in the environment should be obtained and used to avoid unused files after a restore.
    4. Incremental backups are now documented in more detail.
    See the DbBackup javadoc for details. [#19894]

  3. Added javadoc in several places recommending the use of compressed oops, as well as a warning that it will not be honored by JE (and JE cache memory will be wasted) unless it is specified explicitly on the Java command line. For example, see EnvironmentConfig.setCacheSize.