TimesTen supports transactions that provide atomic, consistent, isolated and durable (ACID) access to data. The following sections describe how you can configure transaction features.
All operations on a TimesTen database, even those that do not modify or access application data, are executed within a transaction. When an operation is executed and there is no outstanding transaction, one is started automatically on behalf of the application. Transactions are completed by an explicit or implicit commit or rollback. When a transaction completes, resources that it acquired or opened, such as locks and cursors, are released and freed.
Use the following SQL statements to commit or rollback your transaction:
The SQL COMMIT statement commits the current transaction. Updates made in the transaction are made available to concurrent transactions.
The SQL ROLLBACK statement rolls back the current transaction. All updates made in the transaction are undone.
Note:
For the syntax of the COMMIT and ROLLBACK statements, see "SQL Statements" in the Oracle TimesTen In-Memory Database SQL Reference.
Read-only transactions do not require a commit. When executing write operations, complete transactions to release locks. When possible, keep write transactions short in duration. Any long-running transactions can reduce concurrency and decrease throughput because locks are held for a longer period of time, which blocks concurrent transactions. Also, long-running transactions can prevent transaction log files from being purged, causing these files to accumulate on disk.
A connection can have only one outstanding transaction at any time and cannot be explicitly closed if it has an open transaction.
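For illustration, a minimal ODBC sketch of explicit transaction completion, assuming autocommit has been disabled and that henv, hdbc, and hstmt are an established environment, connection, and statement handle (the accounts table is hypothetical; error handling is abbreviated):

#include <sql.h>
#include <sqlext.h>

/* Commit on success, roll back on failure. Handle and table names are
   illustrative, not part of any TimesTen API. */
void transfer(SQLHENV henv, SQLHDBC hdbc, SQLHSTMT hstmt)
{
    SQLRETURN rc = SQLExecDirect(hstmt,
        (SQLCHAR *) "UPDATE accounts SET balance = balance - 100 WHERE id = 1",
        SQL_NTS);
    if (rc == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO) {
        /* Make the update visible to concurrent transactions. */
        SQLTransact(henv, hdbc, SQL_COMMIT);
    } else {
        /* Undo all work in the transaction and release its locks. */
        SQLTransact(henv, hdbc, SQL_ROLLBACK);
    }
}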
The following sections describe how you can configure whether the application enables implicit commit behavior or requires explicit commit behavior for DML or DDL statements:
Autocommit configures whether TimesTen issues an implicit commit after DML or DDL statements. By default, autocommit is enabled, following the ODBC and JDBC specifications.
When autocommit is on, the following behavior occurs:
An implicit commit is issued immediately after a statement executes successfully.
An implicit rollback is issued immediately after a statement execution fails, such as a primary key violation.
If the statement generates a result set that opens a cursor, the automatic commit is not issued until that cursor and any other open cursors in the transaction have been explicitly closed. Any statements executed while a cursor is open are not committed until all cursors have been closed.
Fetching all rows of a result set does not automatically close its cursor. After the result set has been processed, its cursor must be explicitly closed if using the read committed isolation level or the transaction must be explicitly committed or rolled back if using Serializable isolation level.
Note:
Even with durable commits and autocommit enabled, you could lose work if there is a failure or the application exits without closing cursors.
If you are using ODBC or JDBC batch operations to INSERT, UPDATE or DELETE several rows in one call when autocommit is on, a commit occurs after the entire batch operation has completed. If an error occurs due to a problem on a particular row, only the successfully modified rows preceding the row with the error are committed in this transaction. The pirow parameter to the ODBC SQLParamOptions function contains the number of the row in the batch that had the problem.
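As an illustration of these batch semantics, a hedged ODBC 2.x sketch, assuming a table t1 with a single integer column (the table name and batch size are hypothetical; error handling omitted):

#include <sql.h>
#include <sqlext.h>

#define BATCH_ROWS 100

/* Insert BATCH_ROWS rows in one call with autocommit on. If execution
   fails on a particular row, pirow holds that row's number; the
   successfully modified rows before it are committed with the batch. */
void batch_insert(SQLHSTMT hstmt)
{
    SQLINTEGER ids[BATCH_ROWS];
    SQLULEN    pirow = 0;
    SQLULEN    i;

    for (i = 0; i < BATCH_ROWS; i++)
        ids[i] = (SQLINTEGER) i;

    SQLParamOptions(hstmt, BATCH_ROWS, &pirow);
    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                     0, 0, ids, 0, NULL);
    SQLExecDirect(hstmt, (SQLCHAR *) "INSERT INTO t1 VALUES (?)", SQL_NTS);
}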
Commits can be costly for performance and intrusive if they are implicitly executed after every statement. TimesTen recommends you disable autocommit so that all commits are intentional. Disabling autocommit provides control over transactional boundaries, enables multiple statements to be executed within a single transaction, and improves performance, since there is no implicit commit after every statement.
If autocommit is disabled, transactions must be explicitly completed with a commit or rollback after any of the following:
Completing all the work that was to be done in the transaction.
Issuing a transaction-consistent (blocking) checkpoint request.
Updating column and table statistics to be used by the query optimizer.
Calling a TimesTen built-in procedure that does not generate a result set, such as the ttLockWait procedure, in order for the new setting specified in the procedure to take effect.
You must establish a connection to a database before changing the autocommit setting. To disable autocommit, perform one of the following:
ODBC applications execute the SQLSetConnectOption ODBC function with the SQL_AUTOCOMMIT option set to SQL_AUTOCOMMIT_OFF (see the sketch following this list).
JDBC applications call the Connection.setAutoCommit(false) method.
When using ttIsql, execute autocommit 0.
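A minimal ODBC sketch of the first option, assuming hdbc is an already-connected handle:

#include <sql.h>
#include <sqlext.h>

/* Turn off autocommit; afterward, every transaction must be completed
   explicitly with SQLTransact(henv, hdbc, SQL_COMMIT or SQL_ROLLBACK). */
void disable_autocommit(SQLHDBC hdbc)
{
    SQLSetConnectOption(hdbc, SQL_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF);
}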
Traditionally, in TimesTen databases, DDL statements are executed as part of the current transaction and are committed or rolled back along with the rest of the transaction. However, the default behavior for the Oracle Database is that it issues an implicit COMMIT before and after any DDL statement.
You can configure for either behavior with the DDLCommitBehavior connection attribute, as follows (a connect-time sketch follows this list):
0 - Oracle Database behavior. An implicit transaction commit is performed before the execution of each DDL statement and a durable commit is performed after the execution of each DDL statement. This is the default.
1 - Traditional TimesTen behavior. Execution of DDL statements does not trigger implicit transaction commits.
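For example, a hedged sketch of selecting the traditional behavior at connect time by passing DDLCommitBehavior=1 in the ODBC connection string (the DSN name is hypothetical; error handling omitted):

#include <sql.h>
#include <sqlext.h>

void connect_traditional_ddl(SQLHENV henv, SQLHDBC *hdbc)
{
    SQLCHAR     out[512];
    SQLSMALLINT outlen;

    SQLAllocConnect(henv, hdbc);
    /* DDL statements now commit or roll back with the transaction. */
    SQLDriverConnect(*hdbc, NULL,
        (SQLCHAR *) "DSN=sampledb;DDLCommitBehavior=1", SQL_NTS,
        out, (SQLSMALLINT) sizeof(out), &outlen, SQL_DRIVER_NOPROMPT);
}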
DDL statements include the following:
CREATE, ALTER and DROP statements for any database object, including tables, views, users, procedures and indexes.
TRUNCATE statements.
GRANT and REVOKE statements.
The consequences of setting DDLCommitBehavior=0 include the following:
DDL changes cannot be rolled back.
DDL statements delete records from global temporary tables unless the tables were created with the ON COMMIT PRESERVE ROWS clause.
Tables created with the CREATE TABLE ... AS SELECT statement are visible immediately.
TRUNCATE statements are committed automatically. However, parent and child tables must be truncated in separate transactions, with the child table truncated first; you cannot truncate a parent table unless the child table is empty. The truncation of child and parent tables can be performed in the same transaction only if you set DDLCommitBehavior to 1.
For more information, see "DDLCommitBehavior" in the Oracle TimesTen In-Memory Database Reference.
Both autocommit and DDLCommitBehavior configure if and when implicit commits occur for SQL statements. Autocommit applies to both DML and DDL statements, so the two options overlap for DDL statements. If autocommit is enabled and DDLCommitBehavior is disabled, an implicit commit occurs only after each DDL statement. If both autocommit and DDLCommitBehavior are enabled, an implicit commit occurs both before and after each DDL statement.
To enable DDLCommitBehavior, set the DDLCommitBehavior DSN attribute. To enable or disable autocommit, the application executes an ODBC function or JDBC method.
Table 7-1 shows what behavior occurs when you enable or disable one option in conjunction with the other:
Table 7-1 Relationship between autocommit and DDLCommitBehavior
Autocommit | DDLCommitBehavior | Relationship
---|---|---
ON | ON | All statements are automatically committed, unless you have an open cursor. DDL statements are implicitly committed before and after execution.
OFF | ON | Recommended setting. DDL statements are implicitly committed before and after execution. All other statements require an explicit commit.
ON | OFF | All statements are implicitly committed after execution, unless you have an open cursor. A commit is issued after the DDL statement is processed, not before.
OFF | OFF | All statements require an explicit commit, including DDL statements.
As a relational database, TimesTen is ACID compliant:
Atomic: All TimesTen transactions are atomic: Either all database operations in a single transaction occur or none of them occur.
Consistent: Any transaction can bring the database from one consistent state to another.
Isolated: Transactions can be isolated. TimesTen has two isolation levels: read committed and serializable, which together with row-level locking provide multi-user concurrency control.
Durable: Once a transaction has been committed, it remains committed.
The following sections detail how TimesTen ensures ACID semantics for transactions:
Locking and transaction logs are used to ensure ACID semantics as a transaction modifies data in a database as follows:
Locking: TimesTen acquires locks on data items that the transaction writes and, depending on the transaction isolation level, data items that the transaction reads. See "Concurrency control through isolation and locking".
Transaction logging: All TimesTen transactions are atomic. Either all or none of the effects of the transaction are applied to the database. Modifications to the database are recorded in a transaction log. Atomicity is implemented by using the transaction log to undo the effects of a transaction if it is rolled back. Rollback can be caused explicitly by the application or during database recovery because the transaction was not committed at the time of failure. See "Transaction logging".
The following table shows how TimesTen uses locks and transaction logs:
If | Then
---|---
Transaction is terminated successfully (committed) | Locks acquired by the transaction are released, and its log records are written to the transaction log.
Transaction is rolled back | The transaction log is used to undo the effects of the transaction and to release its locks.
System fails (data not committed) | At first connect, TimesTen uses the checkpoint files and the transaction log to recover the database to its most recent transaction-consistent state.
Application fails | The transaction is rolled back automatically.
TimesTen supports temporary databases, which have essentially no checkpoints. However, they do have a transaction log so that transactions can be rolled back. Recovery is never performed for such databases. They are destroyed after a database or application shuts down or fails. For information on temporary databases, see "Database overview".
The TimesTen Data Manager provides consistency and durability with a combination of checkpointing and transaction logging.
A checkpoint operation writes the current in-memory database image to a checkpoint file on disk that has the effect of making all transactions that have been committed at the time of the checkpoint operation consistent and durable.
All transactions are logged to an in-memory transaction log buffer, which is written to disk in one of the following ways:
Guaranteed durability through a durable (synchronous) commit
Delayed durability through a non-durable (asynchronous) commit
Note:
Checkpointing and logging are further described in "Checkpoint operations" and "Transaction logging".
TimesTen transactions support ANSI Serializable and ANSI Read Committed levels of isolation. ANSI Serializable isolation is the most stringent transaction isolation level. ANSI Read Committed allows greater concurrency. Read committed is the default and is an appropriate isolation level for most applications.
The following sections describe transaction isolation and locking levels:
Transaction isolation enables each active transaction to operate as if there were no other transactions active in the system. Isolation levels determine if row-level locks are acquired when performing read operations. When a statement is issued to update a table, locks are acquired to prevent other transactions from modifying the same data until the updating transaction completes and releases its locks.
The Isolation connection attribute sets the isolation level for a connection. Isolation levels have no effect if using database-level locking because transactions cannot be run concurrently. The isolation level cannot be changed in the middle of a transaction.
TimesTen supports the following two transaction isolation levels:
ANSI Read Committed isolation: The read committed isolation level is the recommended mode of operation for most applications, and is the default mode. It enables transactions that are reading data to execute concurrently with a transaction that is updating the same data. TimesTen makes multiple versions of data items to allow non-serializable read and write operations to proceed in parallel.
Read operations do not block write operations and write operations do not block read operations, even when they read and write the same data. Read operations do not acquire locks on scanned rows. Write operations acquire locks that are held until the transaction commits or rolls back. Readers share a committed copy of the data, whereas a writer has its own uncommitted version. Therefore, when a transaction reads an item that is being updated by another in-progress transaction, it sees the committed version of that item. It cannot see an uncommitted version of an in-progress transaction.
Read committed isolation level provides for better concurrency at the expense of decreased isolation because of the possibility of non-repeatable reads or phantom rows within a transaction. If an application executes the same query multiple times within the same transaction, the commit of an update from another transaction may cause the results from the read operation to retrieve different results. A phantom row appears in modified form in two different reads, in the same transaction, due to early release of read locks during the transaction.
Read committed is the default isolation level, so you need to set it only if it has previously been changed. To set read committed isolation, do one of the following:
ODBC applications execute the SQLSetConnectOption ODBC function with the SQL_TXN_ISOLATION flag set to SQL_TXN_READ_COMMITTED.
Connect with isolation=1 in the connection string.
When using ttIsql, execute ISOLATION 1 or ISOLATION READ_COMMITTED.
ANSI Serializable isolation: All locks acquired within a transaction by a read or write operation are held until the transaction commits or rolls back. Read operations block write operations, and write operations block read operations. As a result, a row that has been read by one transaction cannot be updated or deleted by another transaction until the original transaction terminates. Similarly, a row that has been inserted, updated or deleted by one transaction cannot be accessed in any way by another transaction until the original transaction terminates.
Serializable isolation level provides for repeatable reads and increased isolation at the expense of decreased concurrency. A transaction that executes the same query multiple times within the same transaction is guaranteed to see the same result set each time. Other transactions cannot update or delete any of the returned rows, nor can they insert a new row that satisfies the query predicate.
To set the isolation level to Serializable, do one of the following (see the sketch following this list):
ODBC applications execute the SQLSetConnectOption ODBC function with the SQL_TXN_ISOLATION flag set to SQL_TXN_SERIALIZABLE.
Connect with isolation=0 in the connection string.
When using ttIsql, execute isolation 0 or isolation serializable.
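A minimal ODBC sketch covering both isolation levels, assuming hdbc is a connected handle with no open transaction:

#include <sql.h>
#include <sqlext.h>

/* Select the connection's isolation level; pass a nonzero flag for
   Serializable, zero for read committed. */
void set_isolation(SQLHDBC hdbc, int serializable)
{
    SQLSetConnectOption(hdbc, SQL_TXN_ISOLATION,
                        serializable ? SQL_TXN_SERIALIZABLE
                                     : SQL_TXN_READ_COMMITTED);
}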
To ensure that materialized views are always in a consistent state, all view maintenance operations are performed under Serializable isolation, even when the transaction is in read committed isolation. This means that the transaction obtains read locks for any data items read during view maintenance. However, the transaction releases the read locks at the end of the INSERT, UPDATE or CREATE VIEW statement that triggered the view maintenance, instead of holding them until the end of the transaction.
Note:
The ttXactAdmin utility generates a report showing lock holds and lock waits for all outstanding transactions. It can be used to troubleshoot lock contention problems where operations are being blocked, or encountering lock timeout or deadlock errors. It can also be used to roll back a specified transaction.
TimesTen supports row-level locks, table-level locks and database-level locks:
Note:
Different connections can coexist with different levels of locking, but the presence of even one connection using database-level locking leads to reduced concurrency. For performance information, see "Choose the best method of locking".
Row-level locking: Transactions usually obtain locks on the individual rows that they access. Row-level locking is the recommended mode of operation because it provides the finest granularity of concurrency control. It allows concurrent transactions to update different rows of the same table. However, row-level locking requires space in the database's temporary memory region to store lock information.
Row-level locking is the default. However, if it has been modified to another type of locking and you want to re-enable row-level locking, do one of the following:
Set the LockLevel connection attribute to 0.
Call the ttLockLevel built-in procedure with the lockLevel parameter set to Row. This procedure changes the lock level between row-level and database-level locking on the next transaction and for all subsequent transactions for this connection.
Execute the ttOptSetFlag procedure to set the RowLock parameter to 1, which enables the optimizer to consider using row locks.
Note:
See "LockLevel," "ttLockLevel," and "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference for more information.
Table-level locking: Table-level locking is recommended when concurrent transactions access different tables or a transaction accesses most of the rows of a particular table. Table-level locking provides better concurrency than database-level locking. Row-level locking provides better concurrency than table-level locking. Table-level locking requires only a small amount of space in the temporary memory region to store lock information.
Table-level locking provides the best performance for the following:
Queries that access a significant number of rows of a table
When there are very few concurrent transactions that access a table
When temporary space is inadequate to contain all row locks that an operation, such as a large insert or a large delete, might acquire
To enable table-level locking, execute the ttOptSetFlag procedure to set the TblLock parameter to 1, which enables the optimizer to consider using table locks. In addition, set RowLock to 0 so that the optimizer does not consider row-level locks, as shown in the sketch below.
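A minimal sketch of that call sequence through ODBC (hstmt is a hypothetical allocated statement handle; error handling omitted):

#include <sql.h>
#include <sqlext.h>

void prefer_table_locks(SQLHSTMT hstmt)
{
    /* Let the optimizer consider table locks ... */
    SQLExecDirect(hstmt, (SQLCHAR *) "{CALL ttOptSetFlag('TblLock', 1)}",
                  SQL_NTS);
    /* ... and tell it not to consider row locks. */
    SQLExecDirect(hstmt, (SQLCHAR *) "{CALL ttOptSetFlag('RowLock', 0)}",
                  SQL_NTS);
}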
If both table-level and row-level locking are disabled, TimesTen defaults to row-level locking. If both table-level and row-level locking are enabled, TimesTen chooses the locking scheme that is more likely to have better performance. Even though table-level locking provides better performance than row-level locking because of reduced locking overhead, the optimizer often chooses row-level locking for better concurrency. For more information, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference.
Note:
When multiple locks have been obtained within the same transaction, the locks are released sequentially when the transaction ends.
Database-level locking: Database-level locking serializes all transactions, which effectively allows no concurrency on the database. When a transaction is started, it acquires an exclusive lock on the database, which ensures that there is no more than one active transaction in the database at any given time. It releases the lock when the transaction is completed.
Database-level locking often provides better performance than row-level locking, due to reduced locking overhead. In addition, it provides higher throughput than row-level locking when running a single stream of transactions such as a bulk load operation. However, its applicability is limited to applications that never execute multiple concurrent transactions. With database-level locking, every transaction effectively runs in ANSI Serializable isolation, since concurrent transactions are disallowed.
To enable database-level locking, do one of the following:
Set the LockLevel connection attribute to 1.
Call the ttLockLevel built-in procedure with the lockLevel parameter set to DS. This procedure changes the lock level between row-level and database-level locking on the next transaction and for all subsequent transactions for this connection.
Set the LockWait connection attribute to the maximum amount of time that a statement waits to acquire a lock before it times out. The default is 10 seconds. For more information, see "LockWait" in the Oracle TimesTen In-Memory Database Reference.
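The lock wait interval can also be changed at runtime with the ttLockWait built-in procedure; a minimal sketch (the two-second value is illustrative):

#include <sql.h>
#include <sqlext.h>

void shorten_lock_wait(SQLHSTMT hstmt)
{
    /* Wait at most 2 seconds for a lock before timing out. */
    SQLExecDirect(hstmt, (SQLCHAR *) "{CALL ttLockWait(2)}", SQL_NTS);
}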
If a statement within a transaction waits for a lock and the lock wait interval has elapsed, an error is returned. After receiving the error, the application can reissue the statement.
Lock wait intervals are imprecise due to the scheduling of the database's managing subdaemon process to detect lock timeouts. This imprecision does not apply to zero-second timeouts, which are always immediately reported. The lock wait interval does not apply to blocking checkpoints.
The database's managing subdaemon process checks every two seconds to see if there is a deadlock in the database among concurrent transactions. If a deadlock occurs, an error is returned to one of the transactions involved in the deadlock cycle. The transaction that receives the error must roll back in order to allow the other transactions involved in the deadlock to proceed.
A checkpoint operation saves the in-memory image of a database to disk files, known as checkpoint files. By default, TimesTen performs background checkpoints at regular intervals. Checkpointing may generate a large amount of I/O activity and have a long execution time depending on the size of the database and the number of database changes since the most recent checkpoint.
Note:
Applications can programmatically initiate checkpoint operations. See "Setting and managing checkpoints" for more details.
Temporary databases do not initiate checkpointing. See "Database persistence" for more information on temporary databases.
The following sections describe checkpoint operations and how you can manage them:
A checkpoint operation has two primary purposes:
Decreases the amount of time required for database recovery, because it provides a more up-to-date database image on which recovery can begin.
Makes a portion of the transaction log unneeded for any future database recovery operation, typically allowing one or more transaction log files to be deleted.
Both of these functions are very important to TimesTen applications. The reduction in recovery time is important, as the amount of a transaction log needed to recover a database has a direct impact on the amount of downtime seen by an application after a system failure. The removal of unneeded transaction log files is important because it frees disk space that can be used for new transaction log files. In addition, the fewer transaction log files you have, the less time is required to load a database into memory. If these files were never removed, they would eventually consume all available space in the transaction log directory's file system, causing database operations to fail due to log space exhaustion.
Each TimesTen database has two checkpoint files, named dsname.ds0 and dsname.ds1, where dsname is the database path name and file name prefix specified in the database DSN. During a checkpoint operation, TimesTen determines which checkpoint file contains the most recent consistent image and then writes the next in-memory image of the database to the other file. Thus, the two files contain the two most recent database images.
TimesTen uses the most recent consistent checkpoint file and the transaction log to recover the database to its most recent transaction-consistent state after a database shutdown or system failure. If any errors occur during this process, or if the more recent checkpoint image is incomplete, then recovery restarts using the other checkpoint file.
When the database is created, TimesTen creates three transaction log files named dsname.res0, dsname.res1, and dsname.res2. These files contain pre-allocated space that serves as reserved transaction log space. Reserved transaction log space allows for a limited continuation of transaction logging if the file system that holds the transaction log files becomes full. If the file system becomes full, transactions are prevented from writing any new log records. Transactions that attempt to write new log records are forced to roll back.
TimesTen supports two types of database checkpoint operations:
Fuzzy checkpoints, or non-blocking checkpoints, allow transactions to execute against the database while the checkpoint is in progress. Fuzzy checkpoints do not obtain locks of any kind, and therefore have a minimal impact on other database activity. Because transactions may modify the database while a checkpoint operation is in progress, the resulting checkpoint file may contain both committed and uncommitted transactions. Furthermore, different portions of the checkpoint image may reflect different points in time. For example, one portion may have been written before a given transaction committed, while another portion was written afterward. The term "fuzzy checkpoint" derives its name from this fuzzy state of the database image.
To recover the database when the checkpoint files were generated from fuzzy checkpoint operations, TimesTen requires the most recent consistent checkpoint file and the transaction log to bring the database into its most recent transaction-consistent state.
Transaction-consistent checkpoints, also known as blocking checkpoints, obtain an exclusive lock on the database for a portion of the checkpoint operation, blocking all access to the database during that time. The resulting checkpoint image contains all committed transactions prior to the time the checkpoint operations acquired the exclusive lock on the database. Because no transactions can be active while the database lock is held, no modifications made by in-progress transactions are included in the checkpoint image.
TimesTen uses the most recent consistent checkpoint file to recover the database to transaction-consistent state at the time of the last successful checkpoint operation completed. It uses the transaction log files to recover the database to its most recent transaction-consistent state after a database shutdown or system failure.
To request a transaction-consistent checkpoint, an application uses the ttCkptBlocking built-in procedure. The actual checkpoint is delayed until the requesting transaction commits or rolls back. If a transaction-consistent checkpoint is requested for a database for which both checkpoint files are already up to date, then the checkpoint request is ignored.
The default behavior for TimesTen checkpoints is as follows:
TimesTen performs periodic fuzzy checkpoints in the background. You can modify this behavior. See "Configuring or turning off background checkpointing" for more information.
TimesTen performs a transaction-consistent checkpoint operation of a database just before the database is unloaded from memory. See "Transaction-consistent checkpoints".
You can manage and monitor checkpoints with the following connection attributes and built-in procedures:
CkptFrequency attribute
CkptLogVolume attribute
CkptRate attribute
CkptReadThreads attribute
ttCkpt built-in procedure
ttCkptBlocking built-in procedure
ttCkptConfig built-in procedure
ttCkptHistory built-in procedure
The following sections describe how to manage checkpointing:
By default, TimesTen performs periodic fuzzy checkpoints in the background. Therefore, applications rarely need to issue manual checkpoints. However, if an application wishes to issue a manual checkpoint, it can call the ttCkpt built-in procedure to request a fuzzy checkpoint or the ttCkptBlocking built-in procedure to request a transaction-consistent checkpoint, as in the sketch below.
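A minimal sketch of both requests through ODBC, assuming autocommit is off so each request is completed by an explicit commit:

#include <sql.h>
#include <sqlext.h>

void request_checkpoints(SQLHENV henv, SQLHDBC hdbc, SQLHSTMT hstmt)
{
    /* Fuzzy (non-blocking) checkpoint. */
    SQLExecDirect(hstmt, (SQLCHAR *) "{CALL ttCkpt}", SQL_NTS);
    SQLTransact(henv, hdbc, SQL_COMMIT);

    /* Transaction-consistent (blocking) checkpoint; the checkpoint
       itself is deferred until this transaction commits. */
    SQLExecDirect(hstmt, (SQLCHAR *) "{CALL ttCkptBlocking}", SQL_NTS);
    SQLTransact(henv, hdbc, SQL_COMMIT);
}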
Using attributes or built-in procedures, you can configure TimesTen to checkpoint either when the transaction log files contain a certain amount of data or at a specific frequency.
To configure checkpointing in TimesTen, do the following:
Configure the CkptFrequency and CkptLogVolume connection attributes as follows:
The CkptFrequency connection attribute controls how often, in seconds, TimesTen performs a background checkpoint. The default is 600 seconds.
The CkptLogVolume connection attribute controls how much data, in megabytes, collects in the transaction log files between background checkpoints. By increasing this amount, you can decrease the frequency of background checkpoints. The default is 0. Set the CkptFrequency connection attribute to 0 if you want to control background checkpointing with the CkptLogVolume connection attribute.
To turn off background checkpointing, set both the CkptFrequency and CkptLogVolume connection attributes to 0.
Alternatively, you can configure background checkpointing or turn it off by calling the ttCkptConfig built-in procedure, as in the sketch below. The values set by ttCkptConfig take precedence over those set with the connection attributes.
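A hedged sketch of such a call; the parameter order (ckptFrequency, ckptLogVolume, ckptRate) follows the ttCkptConfig reference, but verify it against your release:

#include <sql.h>
#include <sqlext.h>

void configure_checkpoints(SQLHSTMT hstmt)
{
    /* Checkpoint whenever 64 MB of log accumulates (frequency 0 turns
       off the timer; rate 0 leaves the write rate unlimited). */
    SQLExecDirect(hstmt, (SQLCHAR *) "{CALL ttCkptConfig(0, 64, 0)}",
                  SQL_NTS);
}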
Note:
For information on default values and usage, see "CkptFrequency", "CkptLogVolume", and "ttCkptConfig" in the Oracle TimesTen In-Memory Database Reference.
Call the ttCkptHistory built-in procedure to display information on the last eight checkpoints. You can monitor the progress of a running checkpoint with the Percent_Complete column.
By default, there is no limit to the rate at which checkpoint data is written to disk. You can use the CkptRate attribute or the ttCkptConfig built-in procedure to set the maximum rate at which background checkpoint data is written to disk. Checkpoints taken during recovery and final checkpoints do not honor this rate; in those situations, the rate is unlimited.
Note:
See "CkptRate" and "ttCkptConfig" in the Oracle TimesTen In-Memory Database Reference for details on using these features.
Setting a rate too low can cause checkpoints to take an excessive amount of time and cause the following problems:
Delay the purging of unneeded transaction log files
Delay the start of backup operations
Increase recovery time
When choosing a rate, you should take into consideration the amount of data written by a typical checkpoint and the amount of time checkpoints usually take. Both of these pieces of information are available through the ttCkptHistory built-in procedure.
If a running checkpoint appears to be progressing too slowly when you evaluate its progress with the Percent_Complete column of the ttCkptHistory result set, the rate can be increased by calling the ttCkptConfig built-in procedure. If a call to ttCkptConfig changes the rate, the new rate takes effect immediately, affecting even the running checkpoint.
Perform the following to calculate the checkpoint rate:
Call the ttCkptHistory built-in procedure.
For any given checkpoint, subtract the starttime from the endtime.
Divide the number of bytes written by this elapsed time in seconds to get the number of bytes per second.
Divide this number by 1024*1024 to get the number of megabytes per second.
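For example, if ttCkptHistory shows a checkpoint that wrote 629,145,600 bytes with a starttime and endtime 60 seconds apart, the rate is 629,145,600 / 60 = 10,485,760 bytes per second, and 10,485,760 / (1024*1024) = 10 megabytes per second.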
When setting the checkpoint rate, you should consider the following:
The specified checkpoint rate is only approximate. The actual rate of the checkpoint may be below the specified rate, depending on the hardware, system load and other factors.
The above method may underestimate the actual checkpoint rate, because the starttime and endtime interval includes other checkpoint activities in addition to the writing of dirty blocks to the checkpoint file.
The Percent_Complete field of the ttCkptHistory call may show 100 percent before the checkpoint is actually complete. The Percent_Complete field shows only the progress of the writing of dirty blocks and does not include additional bookkeeping at the end of the checkpoint.
When adjusting the checkpoint rate, you may also need to adjust the checkpoint frequency, as a slower rate makes checkpoints take longer, which effectively increases the minimum time between checkpoint beginnings.
By default, TimesTen reads checkpoint files serially with a single thread. Use the CkptReadThreads connection attribute to set the number of threads that TimesTen uses to read the checkpoint files when loading the database into memory.
When using n threads, TimesTen divides the checkpoint file into n portions of equal size. Each thread concurrently reads a portion of the file into memory. Once all threads are done reading their portions of the checkpoint file successfully, TimesTen checks the database for consistency.
Note:
For more information, see "Set CkptReadThreads" in this book, and "CkptReadThreads" in the Oracle TimesTen In-Memory Database Reference.
TimesTen creates one transaction log for each database, which is shared by all concurrent connections. A transaction log record is created for each database update, commit, and rollback. However, transaction log records are not generated for read-only transactions. Log records are first written to the transaction log buffer, which resides in the same shared memory segment as the database. The contents of the log buffer are subsequently flushed to the latest transaction log file on disk.
The transaction log is used to track all updates made within a transaction, so that those updates can be undone if the transaction is rolled back.
After a system failure, transaction logging enables recovery of transactions that committed after the last checkpoint operation, using the checkpoint files and the transaction log. If commits are non-durable, any committed transactions whose log records remain in the log buffer and have not been flushed to disk are lost in the event of a system failure.
The following sections describe how to manage and monitor the transaction log buffers and file:
The following describes how to configure transaction log buffers and files:
Transaction log buffers: There is one transaction log buffer for each database, and the size of the transaction log buffer can be configured using the LogBufMB DSN attribute. Each transaction log buffer can have multiple strands. The number of transaction log buffer strands is configured with the LogBufParallelism attribute.
Transaction log files: The maximum size for the transaction log files is configured with the LogFileSize DSN attribute. The transaction log files are created in the same directory as the checkpoint files unless the LogDir attribute specifies a different location. The transaction log file names have the form ds_name.logn, where ds_name is the database path name that is specified by the DataStore DSN attribute and provided within the database's DSN, and the suffix n is the transaction log file number, starting at zero.
Note:
For best performance, TimesTen recommends that applications use the LogDir attribute to place the transaction log files on a different physical device from the checkpoint files. If separated, I/O operations for checkpoints do not block I/O operations to the transaction log and vice versa.
TimesTen writes a message to the support log if the transaction log files and checkpoint files for your databases are on the same device.
It is important to verify at frequent intervals that there are no transaction log holds that could result in an excessive accumulation of transaction log files. If too many transaction log files accumulate and fill up available disk space, new transactions in the TimesTen database cannot begin until the transaction log hold is advanced and transaction log files are purged by the next checkpoint operation.
The following sections describe transaction log operations, log holds, and accumulation of log files:
Any transaction log file is kept until TimesTen determines it can be purged, which can occur under the following conditions:
Transactions writing log records to the file have been committed or rolled back. These can be either local database transactions or XA transactions.
Changes recorded in the file have been written to both checkpoint files.
Changes recorded in the file have been replicated, if replication is enabled.
Changes recorded in the file have been propagated to the Oracle database, if TimesTen Cache is used and configured for that behavior.
Changes recorded in the file have been reported to XLA, if XLA is used.
Under normal TimesTen operating conditions, unneeded transaction log files are purged each time a checkpoint is initiated. A checkpoint can be initiated either through a configurable time interval with the CkptFrequency connection attribute, a configurable log volume with the CkptLogVolume connection attribute, or by calling the ttCkpt built-in procedure, which can be called either manually or in a background checkpointing application thread.
If you are running out of disk space because of accumulating transaction log files, use the CkptLogVolume connection attribute instead of the CkptFrequency connection attribute. In addition, executing the ttLogHolds built-in procedure frequently can tell you whether log reclamation is blocked.
Note:
To improve performance, locate your log files on a separate disk partition from the one on which the checkpoint files are located. The LogDir connection attribute determines where log files are stored. For more information, see "Managing transaction log buffers and files" in this book or "LogDir" in the Oracle TimesTen In-Memory Database Reference.
See "Checkpointing" in the Oracle TimesTen Application-Tier Database Cache Introduction for general information. See "Configuring or turning off background checkpointing" for more details on CkptFrequency and CkptLogVolume. Also, see the sections for "CkptFrequency", "CkptLogVolume", "ttCkpt", and "ttLogHolds" in the Oracle TimesTen In-Memory Database Reference.
Several TimesTen components or operations can cause transaction log holds. A transaction log hold prevents log files, beyond a certain point, from being purged until they are no longer needed. In normal circumstances, the log hold position is regularly advanced and log files are purged appropriately. However, if operations are not functioning properly and the hold position does not advance, there can be an excessive accumulation of log files beyond the hold position that can no longer be purged, which eventually fills available disk space.
These components and operations include the following:
Replication: There is a transaction log hold until the transmitting replication agent confirms that the log files have been fully processed by the receiving host.
Possible failure modes include the following:
The network is down or there is a standby crash and replication is unable to deliver data to one or more subscribers. If necessary, the application can direct that logs no longer be held, then duplicate the master database to the standby when normal operations resume. Criteria for when to do this includes the amount of time required to duplicate, the amount of available disk space on the master for log files, and the transaction log growth rate.
The overall database transaction rate exceeds the ability of replication to keep the active and standby databases synchronized. An application can reduce the application transaction rate or the number of replicated tables.
For more information, see "Improving Replication Performance" in the Oracle TimesTen In-Memory Database Replication Guide and "Troubleshooting Replication" in the Oracle TimesTen In-Memory Database Troubleshooting Guide.
XLA: There is a transaction log hold until the XLA bookmark advances.
A possible failure mode occurs when the bookmark becomes stuck, which can occur if an XLA application terminates unexpectedly or if it disconnects without first deleting its bookmark or disabling change-tracking. If a bookmark gets too far behind, the application can delete it. If the XLA reader process is still active, it must first be terminated, so that another XLA process can connect and delete the bookmark.
Active standby pairs that replicate AWT cache groups: There is a transaction log hold until the replication agent confirms that the transaction corresponding to the log hold has been committed on the Oracle Database. With an active standby pair, the active database typically receives the confirmation from the standby database. If the standby database is down, the replication agent receives confirmation from Oracle Database directly.
Possible failure modes include the following:
Oracle Database is down or there is a lock or resource contention.
The network is down, slow, or saturated.
With an active standby pair, replication to the standby database falls behind. Check log holds on the standby database.
The transaction rate to TimesTen exceeds the maximum sustainable rate that TimesTen can propagate to Oracle Database.
For more information, see "Monitoring AWT cache groups" in the Oracle TimesTen Application-Tier Database Cache User's Guide and "Troubleshooting AWT Cache Groups" in the Oracle TimesTen In-Memory Database Troubleshooting Guide.
Cache groups configured with AUTOREFRESH: There is a transaction log hold until the replication agent on the active database confirms the log files have been fully processed by the standby database.
Possible failure modes include the following:
Replication from the active database to the standby database is impacted because the standby database falls behind due to large workloads resulting from AUTOREFRESH mode.
The standby database is down or recovering, but has not been marked as FAILED through a call, initiated by either the user application or Oracle Clusterware, to the ttRepStateSave built-in procedure. The active database does not take over propagation to the Oracle Database until the state of the standby database is marked as FAILED. While the standby database is down or recovering, transaction log files are held for the Oracle Database.
For more information, see "Monitoring autorefresh cache groups" in the Oracle TimesTen In-Memory Database Troubleshooting Guide.
Incremental TimesTen backup: There is a transaction log hold until the backup completes.
A possible failure mode can occur if the incremental backup falls too far behind the most recent entries in the transaction log. For example, ensure that an unexpected burst of transaction activity cannot fill up available transaction log disk space due to the backup holding a log file that is too old. An application can perform another incremental backup to work around this situation.
Long-running transaction or XA transaction: There is a transaction log hold until the transaction completes.
A possible failure mode can occur if an application transaction does not commit or roll back for a long time, so that it becomes necessary for the application to terminate the long-running transaction.
If necessary, you can roll back a transaction using the ttXactAdmin utility with the -xactIdRollback option. See "ttXactAdmin" in the Oracle TimesTen In-Memory Database Reference.
Options for periodic monitoring of excessive transaction log accumulation include the following:
Call the ttLogHolds built-in procedure, which returns a result set with details of all log holds (a monitoring sketch follows this item). The information includes the following, as applicable:
Log file number, the offset of the hold position, and the type of hold, which can be checkpoint, replication, backup, XLA, long-running transaction, or long-running XA transaction
Name of the checkpoint file for a checkpoint hold
Name of the subscriber and the parallel track ID it uses for replication
Backup path for a backup hold
Name of the persistent subscription and process ID of the last process to open it for XLA
Transaction ID for a long-running transaction
XA XID for a long-running XA transaction
For more information, see "ttLogHolds" in the Oracle TimesTen In-Memory Database Reference.
Call the ttCkptHistory built-in procedure to check the last several checkpoints to confirm none of the returned rows has a status of FAILED.
For more information, see "ttCkptHistory" in the Oracle TimesTen In-Memory Database Reference.
Check the SYS.SYSTEMSTATS table for operational metrics. Each transaction log file has a unique sequence number, which starts at 0 for the first log file and increments by 1 for each subsequent log file. The number of the current log file is available in SYS.SYSTEMSTATS.log.file.latest. The number of the oldest log file not yet purged is available in SYS.SYSTEMSTATS.log.file.earliest. You should raise an error or warning if the difference between these sequence numbers exceeds an acceptable threshold.
For more information, see "SYS.SYSTEMSTATS" in the Oracle TimesTen In-Memory Database System Tables and Views Reference.
For XLA, check the SYS.TRANSACTION_LOG_API table, which provides bookmark information, such as the process ID of the connected application, which could help diagnose the reason why a bookmark may be stuck or lagging.
For more information, see "SYS.TRANSACTION_LOG_API" in the Oracle TimesTen In-Memory Database System Tables and Views Reference.
The following sections describe durability options for logging transactions:
Durability is implemented with a combination of checkpointing and logging.
Checkpoint files: A checkpoint operation writes the current database image to a checkpoint file on disk, which has the effect of making all transactions that committed before the checkpoint durable.
Transaction log files: For transactions that committed after the last checkpoint, TimesTen uses conventional logging techniques to make them durable. As each transaction progresses, it records its database modifications in an in-memory transaction log. At commit time, the relevant portion of the transaction log is flushed to disk. This log flush operation makes that transaction, and all previously-committed transactions, durable.
Control returns to the application after the transaction log data has been durably written to disk. A durably committed transaction is not lost even in the event of a system failure.
To enable guaranteed durability, set the DurableCommits attribute to 1.
Any recovery uses the last checkpoint image together with the transaction log to reconstruct the latest transaction-consistent state of the database.
Note:
Committing a transaction durably makes that transaction and all previously committed transactions durable. A non-durable transaction, once followed by a durable commit, is no longer subject to loss in the event of a database failure, just as if it had originally been committed durably.
If most of your transactions commit durably, you may want to set the LogFlushMethod first connect attribute to 2. This connection attribute configures how TimesTen writes and synchronizes log data to transaction log files. For more information, see "Use durable commits appropriately".
In delayed durability mode, as in guaranteed durability mode, each transaction enters records into the in-memory transaction log as it makes modifications to the database. However, when a transaction commits in delayed durability mode, it does not wait for the transaction log to be posted to disk before returning control to the application. Thus, a non-durable transaction may be lost in the event of a database failure. However, non-durable transactions execute considerably faster than durable transactions. Eventually, transactions are flushed to disk by the database's subdaemon process or when the in-memory log buffer is full.
Applications request delayed durability mode by setting the DurableCommits attribute to 0. This is the default and the recommended option. Connections that use delayed durability can coexist with connections that use guaranteed durability.
Applications that wish to take advantage of the performance benefits of delayed durability mode, but can only tolerate the loss of a small number of transactions, can perform periodic durable commits in a background process. Only those transactions that committed non-durably after the last durable commit are vulnerable to loss in the event of a system failure.
The performance cost for durable commits can be reduced with a group commit of multiple concurrently executing transactions. Many threads executing at the same time, if they are short transactions, may commit at almost the same time. Then, a single disk write commits a group of concurrent transactions durably. Group commit does not improve the response time of any given commit operation, as each durable commit must wait for a disk write to complete, but it can significantly improve the throughput of a series of concurrent transactions.
When durable commits are used frequently, TimesTen can support more connections than there are CPUs, as long as transactions are short. Each connection spends more time waiting to commit than it spends using the CPU. Alternatively, applications that perform infrequent durable commits cause each connection to be very CPU-intensive for the TimesTen portion of its workload.
Applications that do not require optimal response time and can tolerate some transaction loss may elect to perform periodic durable commits. This maintains a smaller window of vulnerability to transaction loss as opposed to all transactions being committed non-durably. By committing only every nth transaction durably or performing a durable commit every n seconds, an application can achieve a quicker response time while maintaining a small window of vulnerability to transaction loss. A user can elect to perform a durable commit of a critical transaction, such as one that deals with financial exchange, that cannot be vulnerable to loss.
To enable periodic durable commits, an application does the following:
Connects with the attribute setting DurableCommits=0. This causes transactions to commit non-durably.
When a durable commit is needed, the application calls the ttDurableCommit built-in procedure before committing, as in the sketch below. The ttDurableCommit built-in procedure does not actually commit the transaction; it merely causes the commit to be durable when it occurs.
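A minimal sketch of a periodic durable commit under DurableCommits=0; the every-100th-commit interval is illustrative:

#include <sql.h>
#include <sqlext.h>

#define DURABLE_EVERY 100

void commit_txn(SQLHENV henv, SQLHDBC hdbc, SQLHSTMT hstmt)
{
    static int count = 0;

    if (++count % DURABLE_EVERY == 0) {
        /* Mark the next commit durable; this call does not commit. */
        SQLExecDirect(hstmt, (SQLCHAR *) "{CALL ttDurableCommit}", SQL_NTS);
    }
    SQLTransact(henv, hdbc, SQL_COMMIT);
}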
After a transaction is marked by TimesTen as committed, there is a reclaim phase of the commit during which database resources are reclaimed. This section discusses these reclaim operations, covering the following topics:
TimesTen resource cleanup occurs during the reclaim phase of a transaction commit. Consider a transaction with DELETE operations, for example. The SQL operation marks the deleted rows as deleted, but the space and resources occupied by these rows are not actually freed until the reclaim phase of the transaction commit.
During reclaim, TimesTen reexamines all the transaction log records starting from the beginning of the transaction to determine the reclaim operations that must be performed, then performs those operations.
To improve performance, a number of transaction log records can be cached to reduce the need to access the transaction log on disk. This cache is referred to as the commit buffer and its size is configurable, as described in the next section, "Configuring the commit buffer for reclaim operations".
Notes:
Once the reclaim phase has begun, the transaction is considered to be committed and can no longer be rolled back.
If a process is terminated during the reclaim phase, the cleanup operation will complete the reclaim.
The reclaim phase of a large transaction commit results in a large amount of processing and is very resource intensive. (For this reason, smaller transactions are generally recommended.) You can improve performance, however, by increasing the maximum size of the commit buffer, which is the cache of transaction log records used during reclaim operations.
You can use the TimesTen CommitBufferSizeMax connection attribute to specify the maximum size of the commit buffer, in megabytes. This setting has the scope of your current session. For efficiency, the initial memory allocation is significantly less than the maximum, but automatically increases as needed in order to fit all the relevant log records into the commit buffer, until the allocation reaches the maximum. The allocation is then reduced back to the initial allocation after each reclaim phase. By default, the maximum is 128 KB with an initial allocation of 16 KB. (Also see "CommitBufferSizeMax" in the Oracle TimesTen In-Memory Database Reference.)
Be aware that an increase in the maximum size of the commit buffer may result in a corresponding increase in temporary space consumption. There is no particular limit to the maximum size you can specify, aside from the maximum value of an integer, but exceeding the available temporary space will result in an error.
Note the following related features:
During the course of a session, you can use ALTER SESSION to change the maximum size of the commit buffer as follows, where n is the desired maximum, in megabytes. (Also see "ALTER SESSION" in the Oracle TimesTen In-Memory Database SQL Reference.)
ALTER SESSION SET COMMIT_BUFFER_SIZE_MAX = n
You can use the ttCommitBufferStats built-in procedure to gather statistics for your connection to help you tune the commit buffer maximum size. This built-in procedure takes no parameters and returns the total number of commit buffer overflows and the highest amount of memory used by reclaim operations for transaction log records, in bytes. If there are buffer overflows, you may consider increasing the commit buffer maximum size. If there are no overflows and the highest amount of memory usage is well under the commit buffer maximum size, you may consider decreasing the maximum size.
The ttCommitBufferStatsReset built-in procedure resets these statistics to 0 (zero). This is useful, for example, if you have set a new value for the commit buffer maximum size and want to restart the statistics.
(Also see "ttCommitBufferStats" and "ttCommitBufferStatsReset" in Oracle TimesTen In-Memory Database Reference.)
The system-wide number of commit buffer overflows is also recorded in the TimesTen statistic txn.commits.buf.overflowed in the SYS.SYSTEMSTATS table. (Also see "SYS.SYSTEMSTATS" in the Oracle TimesTen In-Memory Database System Tables and Views Reference.)
You can check the current setting of CommitBufferSizeMax by calling the ttConfiguration built-in procedure.
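A hedged tuning sketch that raises the session maximum and then reads the statistics back; the positional binding of the two ttCommitBufferStats columns is an assumption to verify against the reference:

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

void tune_commit_buffer(SQLHSTMT hstmt)
{
    SQLBIGINT overflows = 0, peak = 0;
    SQLLEN    len1, len2;

    /* Allow the commit buffer to grow to 16 MB for this session. */
    SQLExecDirect(hstmt,
        (SQLCHAR *) "ALTER SESSION SET COMMIT_BUFFER_SIZE_MAX = 16",
        SQL_NTS);

    /* Assumed column order: overflow count, then peak usage in bytes. */
    SQLExecDirect(hstmt, (SQLCHAR *) "{CALL ttCommitBufferStats}", SQL_NTS);
    SQLBindCol(hstmt, 1, SQL_C_SBIGINT, &overflows, 0, &len1);
    SQLBindCol(hstmt, 2, SQL_C_SBIGINT, &peak, 0, &len2);
    if (SQLFetch(hstmt) == SQL_SUCCESS)
        printf("overflows=%ld, peak=%ld bytes\n",
               (long) overflows, (long) peak);
    SQLFreeStmt(hstmt, SQL_CLOSE);
}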
If a database becomes invalid or corrupted by a system or process failure, every connection to the database is invalidated. When an application reconnects to a failed database, the subdaemon allocates a new memory segment for the database and recovers its data from the checkpoint and transaction log files.
During recovery, the latest checkpoint file is read into memory. All transactions that have been committed since the last checkpoint and whose log records are on disk are rolled forward from the appropriate transaction log files. Note that such transactions include all transactions that were committed durably as well as all transactions whose log records aged out of the in-memory log buffer. Uncommitted or rolled-back transactions are not recovered. For details on checkpoint and transaction log files, see "Checkpoint operations" and "Transaction logging".