Oracle9i Database Administrator's Guide
Release 1 (9.0.1)

Part Number A90117-01

9
Using LogMiner to Analyze Redo Log Files

The Oracle LogMiner utility enables you to query redo log files through a SQL interface. Redo log files contain information about the history of activity on a database.

This chapter discusses the following topics:

Understanding the Value of Analyzing Redo Log Files

Things to Know Before You Begin

Using LogMiner

Example Uses of LogMiner

This chapter describes LogMiner functionality as it is used from the command line. You also have the option of accessing LogMiner functionality through the Oracle LogMiner Viewer graphical user interface (GUI). The LogMiner Viewer is a part of Oracle Enterprise Manager.

Understanding the Value of Analyzing Redo Log Files

Oracle redo log files contain every change made to user data and to the data dictionary in a database. Therefore, redo log files are the only source that contains all the necessary information to perform recovery operations. Because redo log data is often kept in archived files, the data is already available. There should be no additional operations needed to obtain the data that LogMiner uses.

The following are some of the potential uses for data contained in redo log files:

Tracking changes made to the database by a specific user

Performing object-level recovery by applying the SQL_UNDO statements that LogMiner reconstructs

Calculating statistics, such as how often a table is accessed

Analyzing system behavior and performing auditing tasks

Things to Know Before You Begin

Before you begin using LogMiner, it is important to understand how LogMiner works with redo log files and dictionary files. This will help you get accurate results and plan the use of your system resources. The following concepts are discussed in this section:

Redo Log Files

Dictionary Options

Tracking of DDL Statements

Storage Management

Extracting Data Values from Redo Log Files

LogMiner Restrictions

LogMiner Views

After you read this section, see "Using LogMiner" for the specific steps involved in using LogMiner.

Redo Log Files

When you run LogMiner, you specify the names of redo log files that you want to analyze. LogMiner retrieves information from those redo log files and returns it through the V$LOGMNR_CONTENTS view.

You can then use SQL to query the V$LOGMNR_CONTENTS view, as you would any other view. Each select operation that you perform against the V$LOGMNR_CONTENTS view causes the redo log files to be read sequentially.

Keep the following things in mind about redo log files:

It is important to specify the correct redo log files when running LogMiner. If you omit redo log files that contain some of the data you need, you will get inaccurate results when you query V$LOGMNR_CONTENTS.

To determine which redo log files are being analyzed, you can look at the V$LOGMNR_LOGS view, which contains one row for each log file.
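For example, the following query lists the redo log files in the current LogMiner list (a minimal sketch; the exact columns available vary by release):

SELECT log_id, filename FROM V$LOGMNR_LOGS;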

See "Specifying Redo Log Files for Analysis" for more information.

Dictionary Options

To fully translate the contents of redo log files, LogMiner requires access to a database dictionary.

LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as hex bytes.

For example, instead of the SQL statement:

INSERT INTO emp(name, salary) VALUES ('John Doe', 50000);

LogMiner will display:

insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'), 
hextoraw('c306'));

LogMiner gives you three choices for your source dictionary: extracting dictionary data to a flat file, extracting dictionary data to redo log files, or using the online catalog (the dictionary currently in use for the database).

Extracting the Dictionary to a Flat File or to Redo Log Files

A LogMiner dictionary file contains information that identifies the database it was created from and the time it was created. This information is used to validate the dictionary against the selected redo log files.

The dictionary file must have the same database character set and be created from the same database as the log files being analyzed. In general, the analysis of redo log files requires a dictionary that was generated from the same database that generated the redo log files.

The DBMS_LOGMNR_D.BUILD procedure allows you to extract the dictionary to a flat file or to the redo log files.

Extracting the Dictionary to a Flat File

While the data dictionary is being extracted to a flat file, DDL statements can be issued by other users. Therefore, there is a possibility that the extracted file may not contain a consistent snapshot of the data dictionary.

When the dictionary is in a flat file, fewer system resources are used than when it is contained in the redo log files.

It is recommended that you regularly back up the dictionary extracts to ensure correct analysis of older redo log files.

Extracting the Dictionary to the Redo Log Files

While the dictionary is being extracted to the redo log stream, no DDL statements can be executed. Therefore, the dictionary snapshot is guaranteed to be consistent.

The process of extracting the dictionary to the redo log files does consume database resources, but if you limit the extraction to off-peak hours, this should not be a problem. Extracting the dictionary to the redo log files is also faster than extracting it to a flat file. Depending on the size of the dictionary, it may be contained in multiple redo log files.

It is recommended that you periodically back up the redo log files so that the information is saved and available at a later date. Ideally, this will not involve any extra steps because if your database is being properly managed, there should already be a process in place for backing up and restoring archived redo log files. Again, because of the time required, it is good practice to do this during off-peak hours.

See "Extracting a Dictionary" for more information about extracting a dictionary using one of these options.

Using the Online Catalog

To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start LogMiner.

The online catalog contains the latest information about the database. However, the online catalog may not be correct in relation to the redo log files you are analyzing if it has changed significantly since the redo log files were generated.

See "Starting LogMiner" for information about specifying the online catalog by using the DICT_FROM_ONLINE_CATALOG option.

Tracking of DDL Statements

LogMiner automatically builds its own internal dictionary from the source dictionary that you specify at startup (a flat file dictionary, a dictionary in the redo log files, or the online catalog).

If your source dictionary is a flat file dictionary or a dictionary in the redo log files, you can use the DDL_DICT_TRACKING option to direct LogMiner to track data definition language (DDL) statements. With this option set, LogMiner applies any DDL statements seen in the redo logs to its internal dictionary. The updated information is then returned in the SQL_REDO column of the V$LOGMNR_CONTENTS view.

The ability to track DDL statements helps you monitor schema evolution because changes in the logical structure of a table (because of DDL operations such as adding or dropping of columns) can be handled. In addition, data manipulation language (DML) operations performed on new tables created after the dictionary was extracted are also shown.


Note:

It is important to understand that the LogMiner internal dictionary is not the same as the LogMiner dictionary contained in a flat file or in redo log files. LogMiner does update its internal dictionary, but it does not update the dictionary that is contained in a flat file or in redo log files. 


The DDL_DICT_TRACKING option is best used when a single pass is to be made through the V$LOGMNR_CONTENTS view. If multiple passes are to be made, also consider the NO_DICT_RESET_ONSELECT option, which controls whether LogMiner reloads its internal dictionary at the beginning of each select operation.

For more information about these options, see "Starting LogMiner".

Storage Management

In Oracle9i, LogMiner may use the database to store backup data when the data dictionary snapshot is read from the redo log files or when the data dictionary is read from a flat file and DDL_DICT_TRACKING is specified.

The data dictionary snapshot is read from the redo log files into tables in the SYSTEM schema. Therefore, Oracle recommends that you extend the SYSTEM tablespace by adding new data files to it (using the ALTER TABLESPACE ADD DATAFILE statement).
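For example, a statement such as the following adds a data file to the SYSTEM tablespace (the file name and size shown here are hypothetical):

ALTER TABLESPACE SYSTEM ADD DATAFILE '/oracle/dbs/system02.dbf' SIZE 100M;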

Extracting Data Values from Redo Log Files

LogMiner enables you to make queries based on actual data values. For instance, you could issue a query to select all updates to the table scott.emp or all deletions performed by user scott. You could also perform a query to show all updates to scott.emp that increased sal more than a certain amount. Data such as this can be used to analyze system behavior and to perform auditing tasks.

LogMiner data extraction from redo log files is performed using the mine functions, DBMS_LOGMNR.MINE_VALUE and COLUMN_PRESENT. These functions are part of the DBMS_LOGMNR package. See Oracle9i Supplied PL/SQL Packages and Types Reference for details.
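For example, the following query is a sketch that uses DBMS_LOGMNR.MINE_VALUE against the REDO_VALUE and UNDO_VALUE columns of V$LOGMNR_CONTENTS to find updates that more than doubled the sal column of scott.emp:

SELECT sql_redo FROM V$LOGMNR_CONTENTS
WHERE seg_owner = 'SCOTT' AND seg_name = 'EMP' AND operation = 'UPDATE'
AND DBMS_LOGMNR.MINE_VALUE(redo_value, 'SCOTT.EMP.SAL') >
    2 * DBMS_LOGMNR.MINE_VALUE(undo_value, 'SCOTT.EMP.SAL');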

LogMiner Restrictions

LogMiner does not support all datatypes and table storage attributes. In addition, the operations and datatypes that LogMiner can translate depend on the release of the database that generated the redo log files; log files created with older releases of Oracle may have limitations on the operations and datatypes supported by LogMiner.

LogMiner Views

LogMiner provides the following views. You can use SQL to query them as you would any other view.

V$LOGMNR_CONTENTS, which contains the translated contents of the redo log files being analyzed

V$LOGMNR_LOGS, which contains one row for each redo log file in the current LogMiner list

V$LOGMNR_DICTIONARY, which contains information about the dictionary in use

V$LOGMNR_PARAMETERS, which contains information about the parameters in effect for the current LogMiner session

Using LogMiner

To run LogMiner, you use two PL/SQL packages:

DBMS_LOGMNR_D, which contains the DBMS_LOGMNR_D.BUILD procedure used to extract a dictionary

DBMS_LOGMNR, which contains the procedures used to add and remove redo log files (ADD_LOGFILE), start and end a LogMiner session (START_LOGMNR and END_LOGMNR), and the mine functions (MINE_VALUE and COLUMN_PRESENT)

The LogMiner packages are owned by the SYS schema. Therefore, if you are not connected as user SYS, you must include SYS in your call. For example:

EXECUTE SYS.DBMS_LOGMNR.END_LOGMNR


The rest of this section describes the steps in a typical LogMiner session. Each step is described in its own subsection.

  1. Extracting a Dictionary

  2. Specifying Redo Log Files for Analysis

  3. Starting LogMiner

  4. Analyzing Output from V$LOGMNR_CONTENTS

  5. Using LogMiner to Perform Object-Level Recovery

  6. Ending a LogMiner Session

Extracting a Dictionary

To use LogMiner you must supply it with a dictionary by doing one of the following:

Extracting the dictionary to a flat file

Extracting the dictionary to the redo log files

Specifying the online catalog as the dictionary source when you start LogMiner (see "Using the Online Catalog")

Extracting the Dictionary to a Flat File

To extract database dictionary information to a flat file, use the DBMS_LOGMNR_D.BUILD procedure.

DBMS_LOGMNR_D.BUILD requires access to a directory where it can place the dictionary file. Because PL/SQL procedures do not normally access user directories, you must specify a directory for use by the DBMS_LOGMNR_D.BUILD procedure or the procedure will fail. To specify a directory, set the initialization parameter, UTL_FILE_DIR, in the init.ora file.

For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, enter the following in the init.ora file:

UTL_FILE_DIR = /oracle/database


For the changes to the init.ora file to take effect, you must stop and restart the database.

See Also:

Oracle9i Database Reference for more information about the init.ora file 

To ensure that the DBMS_LOGMNR_D.BUILD procedure completes successfully, be sure to specify the necessary parameters, as shown in the following examples. Also be sure that no DDL operations occur while the dictionary is being built.

The steps for extracting a dictionary to a flat file depend on whether you are creating it for an Oracle9i database or an Oracle8 database.

Create a Dictionary Flat File for an Oracle9i Database

  1. To specify where the dictionary file should be placed, set the UTL_FILE_DIR parameter in the init.ora file.

    For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, enter the following in the init.ora file:

    UTL_FILE_DIR = /oracle/database
    
    

    Remember that for the changes to the init.ora file to take effect, you must stop and restart the database.

  2. If the database is closed, use SQL*Plus to mount and then open the database whose redo log files you want to analyze. For example, entering the STARTUP command mounts and opens the database:

    SQLPLUS> STARTUP
    
    
  3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify both a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary files. For example, enter the following to create the file dictionary.ora in /oracle/database:

    SQLPLUS>EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora',  
    SQLPLUS>'/oracle/database/', 
    SQLPLUS>options => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
    
    

    You could also specify a filename and location without specifying the STORE_IN_FLAT_FILE option. The result would be the same.
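
    For example, the following sketch produces the same dictionary file without the OPTIONS parameter:

    SQLPLUS>EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/oracle/database/');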

Create a Dictionary Flat File for an Oracle8 Database

Although LogMiner only runs on databases of release 8.1 or higher, you can use it to analyze redo log files from release 8.0 databases. However, the LogMiner functionality available when analyzing a log file depends on the log file version. That is, log files for Oracle9i have been augmented to take advantage of LogMiner functionality, so log files created with older releases of Oracle may have limitations on the operations and datatypes supported by LogMiner.

  1. Use your operating system's copy command to copy the dbmslmd.sql script, which is contained in the $ORACLE_HOME/rdbms/admin directory on the Oracle8i database, to the same directory in the Oracle8 database. For example, enter:

    % cp /8.1/oracle/rdbms/admin/dbmslmd.sql /8.0/oracle/rdbms/admin/dbmslmd.sql
    
    
    
  2. If the database is closed, use SQL*Plus to mount and then open the database whose files you want to analyze. For example, enter:

    SQLPLUS> STARTUP
    
    
    
  3. Execute the copied dbmslmd.sql script on the 8.0 database to create the DBMS_LOGMNR_D package. For example, enter:

    @dbmslmd.sql
    
    

    You may need to enter the complete path to the script.

  4. To specify where the dictionary file should be placed, set the UTL_FILE_DIR parameter in the init.ora file.

    For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, enter the following in the init.ora file:

    UTL_FILE_DIR = /oracle/database
    
    

    Remember that for the changes to the init.ora file to take effect, you must stop and restart the database.

  5. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. This procedure creates the dictionary files. Specify both a filename and a directory path name for the dictionary file. For example, enter the following to create file dictionary.ora in /oracle/database:

    SQLPLUS>EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora',  
    SQLPLUS>'/oracle/database/', 
    SQLPLUS>options => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
    
    
    

    You could also specify a filename and location without specifying the STORE_IN_FLAT_FILE option. The result would be the same.

Extracting a Dictionary to the Redo Log Files

To extract a dictionary to the redo log files, the database must be in ARCHIVELOG mode.
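You can verify that the database is in ARCHIVELOG mode by querying the V$DATABASE view:

SELECT log_mode FROM V$DATABASE;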

Specify the STORE_IN_REDO_LOGS option, as shown in the following example. If you try to specify a filename and location when using the STORE_IN_REDO_LOGS option, an error is returned.

SQLPLUS>EXECUTE DBMS_LOGMNR_D.BUILD (
SQLPLUS>options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

See Also:

Oracle9i Recovery Manager User's Guide for more information on ARCHIVELOG mode 

Specifying Redo Log Files for Analysis

To specify the redo log files that you want to analyze, execute the DBMS_LOGMNR.ADD_LOGFILE procedure, as demonstrated in the following steps. You can add and remove log files in any order.

  1. Use SQL*Plus to start an Oracle instance, with the database either mounted or unmounted. For example, enter:

    STARTUP
    
    
    
  2. Create a list of redo log files by specifying the NEW option of the DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to specify /oracle/logs/log1.f:

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE( 
    LOGFILENAME => '/oracle/logs/log1.f',
    OPTIONS => DBMS_LOGMNR.NEW);
    
    
    
  3. If desired, add more redo log files by specifying the ADDFILE option of the DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to add /oracle/logs/log2.f:

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => '/oracle/logs/log2.f',
    OPTIONS => DBMS_LOGMNR.ADDFILE);
    
    
    
  4. If desired, remove redo log files by specifying the REMOVEFILE option of the DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to remove /oracle/logs/log2.f:

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => '/oracle/logs/log2.f',
    OPTIONS => DBMS_LOGMNR.REMOVEFILE);
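
    To verify the resulting list of redo log files, query the V$LOGMNR_LOGS view:

    SELECT filename FROM V$LOGMNR_LOGS;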
    

Starting LogMiner

After you have created a dictionary file and specified which redo log files to analyze, you can start LogMiner and begin your analysis. Take the following steps:

  1. Execute the DBMS_LOGMNR.START_LOGMNR procedure to start LogMiner.

    It is recommended that you specify a dictionary option. If you do not, LogMiner cannot translate internal object identifiers and datatypes to object names and external data formats. Therefore, it would return internal object IDs and present data as hex bytes.

    Note that if you are specifying the name of a flat file dictionary, you must supply a fully qualified filename for the dictionary file. For example, to start LogMiner using /oracle/database/dictionary.ora, issue the following command:

    EXECUTE DBMS_LOGMNR.START_LOGMNR(
    DICTFILENAME =>'/oracle/database/dictionary.ora');
    
    
    

    If you are not specifying a flat file dictionary name, then specify either the DICT_FROM_REDO_LOGS or DICT_FROM_ONLINE_CATALOG option.
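
    For example, if you extracted the dictionary to the redo log files that are in the LogMiner list, a minimal start command looks like this:

    EXECUTE DBMS_LOGMNR.START_LOGMNR(
    OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);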

  2. Optionally, set the startTime and endTime parameters to filter data by time. The procedure expects date values. Use the TO_DATE function to specify date and time, as in this example:

    EXECUTE DBMS_LOGMNR.START_LOGMNR(
    DICTFILENAME => '/oracle/dictionary.ora',
    STARTTIME => TO_DATE('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'),
    ENDTIME => TO_DATE('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS')); 
    

    The timestamps should not be used to infer ordering of redo records. You can infer the order of redo records by using the SCN.

  3. Instead of specifying a start time and end time, you can use the startScn and endScn parameters to filter data by SCN, as in this example:

    EXECUTE DBMS_LOGMNR.START_LOGMNR(
    DICTFILENAME => '/oracle/dictionary.ora',
    STARTSCN => 100,
    ENDSCN => 150);
    
    
    

    The startScn and endScn parameters override the startTime and endTime parameters in situations where all are specified.

    If no start or end parameters are specified, the entire log file is read from start to end, for each SELECT statement issued.

  4. If desired, you can also use the OPTIONS parameter to set the following session characteristics:

    COMMITTED_DATA_ONLY

    Only rows belonging to committed transactions are shown in the V$LOGMNR_CONTENTS view. This enables you to filter out rolled back transactions and transactions that are in progress.

    If long-running transactions are present in the redo log files being analyzed, use of this option may cause an "Out of Memory" error.

    The default is for LogMiner to show rows corresponding to all of the transactions.
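
    For example, the following sketch starts LogMiner with this option (the dictionary file name shown is hypothetical):

    EXECUTE DBMS_LOGMNR.START_LOGMNR(
    DICTFILENAME => '/oracle/database/dictionary.ora',
    OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY);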

    SKIP_CORRUPTION

    Any corruptions in the redo log files are skipped during select operations from the V$LOGMNR_CONTENTS view. Rows that are retrieved after the corruption are flagged with a "Log File Corruption Encountered" message. Additionally, for every corrupt redo record encountered, an informational row is returned that indicates how many blocks were skipped.

    The default is for the select operation to terminate at the first corruption it encounters in the log file.
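
    Options can be combined by adding the option constants together. For example, the following sketch specifies both SKIP_CORRUPTION and COMMITTED_DATA_ONLY (the dictionary file name shown is hypothetical):

    EXECUTE DBMS_LOGMNR.START_LOGMNR(
    DICTFILENAME => '/oracle/database/dictionary.ora',
    OPTIONS => DBMS_LOGMNR.SKIP_CORRUPTION + DBMS_LOGMNR.COMMITTED_DATA_ONLY);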

    DDL_DICT_TRACKING

    If the dictionary in use is either a flat file or in the redo log files, LogMiner ensures that its internal dictionary is updated if a DDL event is found in the redo log files. This ensures that SQL_REDO and SQL_UNDO information is correct for objects that are modified in the redo log files after the LogMiner internal dictionary was built.

    The default is for this option to be disabled.

    This option is not valid with the DICT_FROM_ONLINE_CATALOG option.

    NO_DICT_RESET_ONSELECT

    This option is only valid if the DDL_DICT_TRACKING option is also specified. It prevents LogMiner from reloading its dictionary at the beginning of each select operation on the V$LOGMNR_CONTENTS view. This can be an advantage because it can be time consuming to refresh the dictionary if a DDL operation has updated the internal LogMiner dictionary. However, you should be aware that if you use this option, you may get incorrect SQL_REDO and SQL_UNDO information for objects that are modified in the redo log files because the dictionary has not been refreshed.

    The NO_DICT_RESET_ONSELECT option should not be specified if you want complete reconstructed SQL statements returned from subsequent selects.

    The following example shows how LogMiner behaves when the NO_DICT_RESET_ONSELECT and DDL_DICT_TRACKING options are specified.

    1. Start LogMiner with the NO_DICT_RESET_ONSELECT and DDL_DICT_TRACKING options specified, as follows:

      execute DBMS_LOGMNR.START_LOGMNR(OPTIONS =>
      DBMS_LOGMNR.DDL_DICT_TRACKING +
      DBMS_LOGMNR.NO_DICT_RESET_ONSELECT +
      DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
      
      
    2. Issue the following SQL query:

      SELECT sql_redo FROM SYS.V$LOGMNR_CONTENTS;
      
      
    3. The SQL_REDO that is returned looks as follows:

      SQL_REDO
      ----------------------------------------------------------------------------
      create table scott.customer(name varchar2(32), phone_day varchar2(20),
      phone_evening varchar2(20))
      
      insert into "SCOTT"."CUSTOMER"("NAME","PHONE_DAY","PHONE_EVENING")
      values ('Nadine Gordimer','847-123-1234','415-123-1234')
      
      insert into "SCOTT"."CUSTOMER"("NAME","PHONE_DAY","PHONE_EVENING")
      values ('Saul Bellow','847-123-1234','415-123-1234');
      
      commit;
      
      alter table scott.customer drop (phone_evening)
      
      insert into "SCOTT"."CUSTOMER"("NAME","PHONE_DAY") values ('Gabriel
      Garcia Marquez','044-1270-123-1234');
      commit;
      
      

      The SELECT statement correctly applied the CREATE TABLE and ALTER TABLE statements to LogMiner's internal dictionary and reconstructed valid SQL_REDO statements.

      At the end of this select operation, the definition of the table scott.customer contained in the internal dictionary has only two columns because the ALTER TABLE statement dropped the phone_evening column.

    4. Issue the same SQL query again that you issued earlier:

      SELECT sql_redo FROM SYS.V$LOGMNR_CONTENTS;
      
      
    5. The SQL_REDO that is returned looks as follows:

      SQL_REDO
      ----------------------------------------------------------------------------
      create table scott.customer(name varchar2(32), phone_day varchar2(20),
      phone_evening varchar2(20))
      insert into "SCOTT"."CUSTOMER"("COL 1","COL 2","COL 3") values
      (HEXTORAW('4e6164696e6520476f7264696d6572'),HEXTORAW('3834372d3132332d313233
      34'),
      
      HEXTORAW('3431352d3132332d31323334'));
      
      insert into "SCOTT"."CUSTOMER"("COL 1","COL 2","COL 3") values
      (HEXTORAW('5361756c2042656c6c6f77'),HEXTORAW('3834372d3132332d31323334'),
      
      HEXTORAW('3431352d3132332d31323334'));
      
      commit;
      
      alter table scott.customer drop (phone_evening)
      
      insert into "SCOTT"."CUSTOMER"("NAME","PHONE_DAY") values ('Gabriel
      Garcia Marquez','044-1270-123-1234');
      
      commit;
      
      

      Because NO_DICT_RESET_ONSELECT was specified when LogMiner was started, LogMiner does not reload its dictionary when this second SELECT statement is executed. Therefore, the updated dictionary is not used to translate the redo stream, and this SELECT operation cannot fully translate the first two INSERT statements. Instead, it shows the SQL_REDO for them as hex bytes. However, the third INSERT statement can be fully translated because it corresponds to the definition of the scott.customer table in the LogMiner dictionary.

      Thus, the NO_DICT_RESET_ONSELECT option should not be specified if you want the complete reconstructed SQL statements returned from subsequent selects.

    DICT_FROM_ONLINE_CATALOG

    If set, LogMiner uses the dictionary currently in use for the database. This option is not valid with the DDL_DICT_TRACKING option.

    DICT_FROM_REDO_LOGS

    If set, LogMiner expects to find a dictionary in the redo log files that you specified with the DBMS_LOGMNR.ADD_LOGFILE procedure.

    To see which redo log files contain a dictionary, look at the V$LOGMNR_LOGS view.

    If you want to analyze redo log files for a specific period, you must use a dictionary that is consistent back to the beginning of that period. It is important to realize that if you have performed any DDL operations such as dropping columns or tables, the dictionary may not be synchronized with data in redo log files that were created before those DDL operations.

Analyzing Output from V$LOGMNR_CONTENTS

LogMiner output is contained in the V$LOGMNR_CONTENTS view. After LogMiner is started, you can issue SQL statements at the command line to query the data contained in V$LOGMNR_CONTENTS.

When a SQL select operation is executed against the V$LOGMNR_CONTENTS view, the redo log files are read sequentially. Translated records from the redo log files are returned as rows in the V$LOGMNR_CONTENTS view. This continues until either the filter criteria specified at startup (endTime or endScn) are met or the end of the log file is reached.

LogMiner returns all of the rows in SCN order unless you have used the COMMITTED_DATA_ONLY option to specify that only committed transactions should be retrieved.

SCN order is the order normally applied in media recovery.

The following sample query returns information about operations:

SELECT operation, sql_redo FROM V$LOGMNR_CONTENTS;
OPERATION SQL_REDO               
--------- ----------------------------------------------------------
INTERNAL      
INTERNAL      
START     set transaction read write;  
UPDATE    update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
COMMIT    commit;                                                                         
START     set transaction read write;                    
UPDATE    update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
COMMIT    commit;                           
START     set transaction read write;     
UPDATE    update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
COMMIT    commit;                                                                         
11 rows selected.

Using LogMiner to Perform Object-Level Recovery

LogMiner processes redo log files, translating their contents into SQL statements that represent the logical operations performed on the database. The V$LOGMNR_CONTENTS view then lists the reconstructed SQL statements that represent the original operations (SQL_REDO column) and the corresponding SQL statement to undo the operations (SQL_UNDO column).

Provided you fully understand and take into account existing dependencies, you may be able to apply the SQL_UNDO statements to roll back the original changes to the database.
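For example, the following sketch retrieves the SQL_UNDO statements for deletions from a hypothetical scott.emp table:

SELECT sql_undo FROM V$LOGMNR_CONTENTS
WHERE seg_owner = 'SCOTT' AND seg_name = 'EMP' AND operation = 'DELETE';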

Ending a LogMiner Session

To properly end a LogMiner session, use the DBMS_LOGMNR.END_LOGMNR procedure, as follows:

EXECUTE DBMS_LOGMNR.END_LOGMNR;

This procedure closes all of the log files and allows all of the database and system resources allocated by LogMiner to be released.

If this procedure is not executed, LogMiner retains all of its allocated resources until the end of the Oracle session in which it was invoked. It is particularly important to use this procedure to end LogMiner if either the DDL_DICT_TRACKING option or the DICT_FROM_REDO_LOGS option was used.

Example Uses of LogMiner

This section provides the following example uses of LogMiner.

Example: Tracking Changes Made By a Specific User

In this example, you are interested in seeing all of the changes to the database in a specific time range by one of your users: joedevo. You perform this operation in the following steps:

Step 1: Creating the Dictionary File

To use LogMiner to analyze joedevo's data, you must create a dictionary file before starting LogMiner. Take the following steps:

  1. In the init.ora file, set the initialization parameter UTL_FILE_DIR to /usr/local/dbs:

    UTL_FILE_DIR = /usr/local/dbs
    
    
    
  2. Start SQL*Plus and then connect to the database:

    CONNECT SYSTEM/password
    
    
  3. Open the database to create the dictionary file:

    STARTUP
    
    
    
  4. Name the dictionary orcldict.ora and place it in the directory /usr/local/dbs:

    EXECUTE DBMS_LOGMNR_D.BUILD(
    DICTIONARY_FILENAME => 'orcldict.ora',
    DICTIONARY_LOCATION => '/usr/local/dbs');
    
    
    
  5. The dictionary has been created and can be used later. You can shut down the database:

    SHUTDOWN;
    
Step 2: Adding Redo Log Files and Limiting the Search Range

Now that the dictionary is created, you decide to view the changes that happened at a specific time. Take the following steps:

  1. Start SQL*Plus, connect as SYSTEM, then start the instance:

    CONNECT SYSTEM/password
    STARTUP NOMOUNT
    
    
    
  2. Supply the list of redo log files to be analyzed. The OPTIONS flag is set to indicate that this is a new list:

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => 'log1orc1.ora',
    OPTIONS => DBMS_LOGMNR.NEW);
    
    
  3. Add a file to the existing list. The OPTIONS flag is set to indicate that you are adding a file to the existing list:

    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => 'log2orc1.ora',
    OPTIONS => DBMS_LOGMNR.ADDFILE);
    
Step 3: Starting LogMiner and Analyzing the Data

After you start LogMiner, the V$LOGMNR_CONTENTS view is available for queries. You decide to find all of the changes made by user joedevo to the salary table. You discover that joedevo requested two operations: he deleted his old salary and then inserted a new, higher salary. You now have the data necessary to undo this operation. Take the following steps:

  1. Start LogMiner and limit the search to the specified time range:

    EXECUTE DBMS_LOGMNR.START_LOGMNR(
    DICTFILENAME => 'orcldict.ora',
    STARTTIME => TO_DATE('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'),
    ENDTIME => TO_DATE('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS')); 
    
    
  2. Query the V$LOGMNR_CONTENTS view to see the results of your search:

    SELECT sql_redo, sql_undo FROM V$LOGMNR_CONTENTS 
    WHERE USERNAME = 'joedevo' AND SEG_NAME = 'salary';
    
    
    
  3. For both the SQL_REDO and SQL_UNDO columns, two rows are returned (the format of the data display will be different on your screen):

    
    SQL_REDO                                 SQL_UNDO
    --------                                 --------
    delete from SALARY                       insert into SALARY(NAME, EMPNO, SAL)
    where EMPNO = 12345                      values ('JOEDEVO', 12345, 500)
    and ROWID = 'AAABOOAABAAEPCABA';
    
    insert into SALARY(NAME, EMPNO, SAL)     delete from SALARY
    values ('JOEDEVO', 12345, 2500)          where EMPNO = 12345
                                             and ROWID = 'AAABOOAABAAEPCABA';
    2 rows selected.
    

Example: Calculating Table Access Statistics

In this example, you manage a direct marketing database and want to determine how productive the customer contacts have been in generating revenue for a two week period in August. Assume that you already created the dictionary and added the redo log files you want to search. Take the following steps:

  1. Start LogMiner and specify a range of times:

    EXECUTE DBMS_LOGMNR.START_LOGMNR(
    STARTTIME => TO_DATE('07-Aug-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'),
    ENDTIME => TO_DATE('15-Aug-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'),
    DICTFILENAME => '/usr/local/dict.ora');
    
    
    
  2. Query the V$LOGMNR_CONTENTS view to determine which tables were modified in the time range you specified, as shown in the following example. (This query filters out system tables that traditionally have a $ in their name.)

    SELECT seg_owner, seg_name, COUNT(*) AS Hits
    FROM V$LOGMNR_CONTENTS
    WHERE seg_name NOT LIKE '%$'
    GROUP BY seg_owner, seg_name;
    
    
    
  3. The following data is displayed (properly formatted):

    SEG_OWNER          SEG_NAME          Hits
    ---------          --------          ----
    CUST               ACCOUNT            384
    SCOTT              EMP                 12
    SYS                DONOR               12
    UNIV               DONOR              234
    UNIV               EXECDONOR          325
    UNIV               MEGADONOR           32
    
    


Copyright © 1996-2001, Oracle Corporation. All Rights Reserved.