Oracle9i Database Administrator's Guide
Release 2 (9.2)

Part Number A96521-01
15
Managing Tables

This chapter describes the various aspects of managing tables.

Guidelines for Managing Tables

This section describes guidelines to follow when managing tables. Following these guidelines can make the management of your tables easier and can improve performance, both when creating the table and later when querying or updating it.


Design Tables Before Creating Them

Usually, the application developer is responsible for designing the elements of an application, including the tables. Database administrators are responsible for setting storage parameters and defining clusters for tables, based on information from the application developer about how the application works and the types of data expected.

Working with your application developer, carefully plan each table before creating it.

Specify How Data Block Space Is to Be Used

By specifying the PCTFREE and PCTUSED parameters during the creation of each table, you can affect the efficiency of space utilization and the amount of space reserved for updates to the current data in the data blocks of a table's data segment. The PCTFREE and PCTUSED parameters are discussed in "Managing Space in Data Blocks".


Note:

When you create a table in a locally managed tablespace for which automatic segment-space management is enabled, the need to specify the PCTFREE (or FREELISTS) parameter is eliminated. Automatic segment-space management is specified at the tablespace level. The Oracle database server automatically and efficiently manages free and used space within objects created in such tablespaces.

Locally managed tablespaces and automatic segment space management are discussed in "Locally Managed Tablespaces".


Specify the Location of Each Table

If you have the proper privileges and tablespace quota, you can create a new table in any tablespace that is currently online. It is advisable to specify the TABLESPACE clause in a CREATE TABLE statement to identify the tablespace that is to store the new table. If you do not specify a tablespace in a CREATE TABLE statement, the table is created in your default tablespace.

When specifying the tablespace to contain a new table, make sure that you understand the implications of your selection.

Specifying incorrect storage locations for schema objects can adversely affect both the performance and the manageability of a database.

Chapter 24, "Managing Users and Resources" contains information about assigning default tablespaces and tablespace quotas to users.

Consider Parallelizing Table Creation

You can utilize parallel execution when creating tables using a subquery (AS SELECT) in the CREATE TABLE statement. Because multiple processes work together to create the table, performance of the table creation operation is improved.

Parallelizing table creation is discussed in the section "Parallelizing Table Creation".

Consider Using NOLOGGING When Creating Tables

To create a table most efficiently, use the NOLOGGING clause in the CREATE TABLE ... AS SELECT statement. The NOLOGGING clause causes minimal redo information to be generated during the table creation, which saves space in the redo log files and decreases the time it takes to create the table.

The NOLOGGING clause also specifies that subsequent direct loads using SQL*Loader and direct load INSERT operations are not logged. Subsequent DML statements (UPDATE, DELETE, and conventional path insert) are unaffected by the NOLOGGING attribute of the table and generate redo.

If you cannot afford to lose the table after you have created it (for example, you will no longer have access to the data used to create the table), you should take a backup immediately after the table is created. In some situations, such as for tables that are created for temporary use, this precaution may not be necessary.

In general, the relative performance improvement of specifying NOLOGGING is greater for larger tables than for smaller tables. For small tables, NOLOGGING has little effect on the time it takes to create a table. However, for larger tables the performance improvement can be significant, especially when you are also parallelizing the table creation.
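
For example, a minimal sketch of such a statement (the table name hr.admin_emp_copy is hypothetical):

CREATE TABLE hr.admin_emp_copy
    NOLOGGING
    TABLESPACE admin_tbs
    AS SELECT * FROM hr.employees;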

Estimate Table Size and Set Storage Parameters

Estimating the sizes of tables before creating them is useful because it helps you determine the amount of disk space required for the database and choose appropriate storage parameters.

Whether or not you estimate table size before creation, you can explicitly set storage parameters when creating each table. (Clustered tables, discussed in Chapter 18, "Managing Clusters", automatically use the storage parameters of the cluster.) Any storage parameter that you do not explicitly set when creating or subsequently altering a table automatically uses the corresponding default storage parameter set for the tablespace in which the table resides. Storage parameters are discussed in "Setting Storage Parameters".

If you explicitly set the storage parameters for the extents of a table's data segment, try to store the table's data in a small number of large extents rather than a large number of small extents.

Plan for Large Tables

There are no limits on the physical size of tables and extents. You can specify the keyword UNLIMITED for MAXEXTENTS, thereby simplifying your planning for large objects, reducing wasted space and fragmentation, and improving space reuse. However, when the number of extents in a table grows very large, you can see an impact on performance when performing any operation that requires the table.
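
For example, the following sketch (using a hypothetical table) specifies an unlimited number of extents:

CREATE TABLE hr.admin_audit_trail (
        audit_id    NUMBER,
        audit_text  VARCHAR2(200))
    STORAGE (MAXEXTENTS UNLIMITED);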


Note:

You cannot alter data dictionary tables to have MAXEXTENTS greater than the allowed block maximum.


If you have large tables in your database, consider the following recommendations:

Table Restrictions

Here are some restrictions to be aware of before you create tables:

Creating Tables

To create a new table in your schema, you must have the CREATE TABLE system privilege. To create a table in another user's schema, you must have the CREATE ANY TABLE system privilege. Additionally, the owner of the table must have a quota for the tablespace that contains the table, or the UNLIMITED TABLESPACE system privilege.

Create tables using the SQL statement CREATE TABLE.


Creating a Table

When you issue the following statement, you create a table named admin_emp in the hr schema and store it in the admin_tbs tablespace:

CREATE TABLE        hr.admin_emp (
         empno      NUMBER(5) PRIMARY KEY,
         ename      VARCHAR2(15) NOT NULL,
         job        VARCHAR2(10),
         mgr        NUMBER(5),
         hiredate   DATE DEFAULT (sysdate),
         sal        NUMBER(7,2),
         comm       NUMBER(7,2),
         deptno     NUMBER(3) NOT NULL
                     CONSTRAINT admin_dept_fkey REFERENCES hr.departments
                     (department_id))
   TABLESPACE admin_tbs
   STORAGE ( INITIAL 50K
             NEXT 50K
             MAXEXTENTS 10
             PCTINCREASE 25 );

In this example, integrity constraints are defined on several columns of the table. Integrity constraints are discussed in "Managing Integrity Constraints". Several segment attributes are also explicitly specified for the table. These are explained in Chapter 14, "Managing Space for Schema Objects".

Creating a Temporary Table

It is also possible to create a temporary table. The definition of a temporary table is visible to all sessions, but the data in a temporary table is visible only to the session that inserts the data into the table. You use the CREATE GLOBAL TEMPORARY TABLE statement to create a temporary table. The ON COMMIT keywords indicate if the data in the table is transaction-specific (the default) or session-specific. Specifying ON COMMIT DELETE ROWS creates a transaction-specific temporary table: Oracle truncates the table (deletes all rows) after each commit. Specifying ON COMMIT PRESERVE ROWS creates a session-specific temporary table: Oracle truncates the table when you end the session.

This example creates a temporary table that is transaction specific:

CREATE GLOBAL TEMPORARY TABLE admin_work_area
        (startdate DATE,
         enddate DATE,
         class CHAR(20))
      ON COMMIT DELETE ROWS;

Indexes can be created on temporary tables. They are also temporary and the data in the index has the same session or transaction scope as the data in the underlying table.
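
For comparison, the following sketch (with a hypothetical table name) creates a session-specific temporary table whose rows are preserved until the session ends:

CREATE GLOBAL TEMPORARY TABLE admin_session_area
        (startdate DATE,
         enddate DATE,
         class CHAR(20))
      ON COMMIT PRESERVE ROWS;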


Parallelizing Table Creation

When you specify the AS SELECT clause when creating a table, you can utilize parallel execution. The CREATE TABLE ... AS SELECT statement contains two parts: a CREATE part (DDL) and a SELECT part (query). Oracle can parallelize both parts of the statement. The CREATE part is parallelized if one of the following is true: a PARALLEL clause is included in the CREATE TABLE ... AS SELECT statement, or an ALTER SESSION FORCE PARALLEL DDL statement is specified for the session.

The query part is parallelized if all of the following are true: the query includes a parallel hint specification (PARALLEL or PARALLEL_INDEX), the CREATE part includes the PARALLEL clause, or the schema objects referred to in the query have a PARALLEL declaration associated with them; and at least one table specified in the query requires either a full table scan or an index range scan spanning multiple partitions.

If you parallelize the creation of a table, that table then has a parallel declaration (the PARALLEL clause) associated with it. Any subsequent DML or queries on the table, for which parallelization is possible, will attempt to use parallel execution.

The following simple example parallelizes the creation of a table:

CREATE TABLE hr.admin_emp_dept
     PARALLEL 
     AS SELECT * FROM hr.employees
     WHERE department_id = 10;

In this example the PARALLEL clause tells Oracle to select an optimum number of parallel execution servers when creating the table.


Automatically Collecting Statistics on Tables

The PL/SQL package DBMS_STATS lets you generate and manage statistics for cost-based optimization. You can use this package to gather, modify, view, export, import, and delete statistics. You can also use this package to identify or name statistics that have been gathered.

You enable DBMS_STATS to automatically gather statistics for a table by specifying the MONITORING clause in the CREATE (or ALTER) TABLE statement. Then, you can effect automated statistics gathering by, for example, setting up a recurring job (perhaps by using job queues) that invokes DBMS_STATS.GATHER_SCHEMA_STATS with the GATHER STALE option at an appropriate interval for your application.
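
For example, assuming the hr.admin_emp table created earlier, a sketch of this approach is:

ALTER TABLE hr.admin_emp MONITORING;

BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
      ownname => 'HR',
      options => 'GATHER STALE');
END;
/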

Monitoring tracks the approximate number of INSERT, UPDATE, and DELETE operations for the table since the last time statistics were gathered. Information about how many rows are affected is maintained in the SGA until, periodically (about every three hours), SMON incorporates the data into the data dictionary. This data dictionary information is made visible through the DBA_TAB_MODIFICATIONS, ALL_TAB_MODIFICATIONS, and USER_TAB_MODIFICATIONS views. Oracle uses these views to identify tables with stale statistics.

Using the MONITORING clause and the DBMS_STATS package enables the optimizer to generate accurate execution plans.

To disable monitoring of a table, specify the NOMONITORING clause.

See Also:

Oracle9i Database Performance Tuning Guide and Reference for a discussion of the exact mechanism for using the MONITORING clause and the DBMS_STATS package for gathering statistics

Altering Tables

You alter a table using the ALTER TABLE statement. To alter a table, the table must be contained in your schema, or you must have either the ALTER object privilege for the table or the ALTER ANY TABLE system privilege.

Reasons for altering a table include changing its physical or storage attributes, moving it to a new segment or tablespace, explicitly allocating storage for it, and modifying, adding, renaming, or dropping columns.

Some of the usages of the ALTER TABLE statement are presented in the following sections.

Altering Physical Attributes of a Table

When altering the data block space usage parameters (PCTFREE and PCTUSED) of a table, note that new settings apply to all data blocks used by the table, including blocks already allocated and blocks subsequently allocated for the table. However, blocks already allocated for the table are not reorganized immediately when the space usage parameters are altered, but rather as necessary after the change. The data block storage parameters are described in "Managing Space in Data Blocks".

When altering the transaction entry settings (INITRANS, MAXTRANS) of a table, note that a new setting for INITRANS applies only to data blocks subsequently allocated for the table, while a new setting for MAXTRANS applies to all blocks (already and subsequently allocated blocks) of a table. To better understand these transaction entry setting parameters, see "Specifying the Transaction Entry Parameters: INITRANS and MAXTRANS".

The storage parameters INITIAL and MINEXTENTS cannot be altered. All new settings for the other storage parameters (for example, NEXT, PCTINCREASE) affect only extents subsequently allocated for the table. The size of the next extent allocated is determined by the current values of NEXT and PCTINCREASE, and is not based on previous values of these parameters. Storage parameters are discussed in "Setting Storage Parameters".
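
For example, the following sketch alters space usage and storage parameters of the hr.admin_emp table created earlier:

ALTER TABLE hr.admin_emp
      PCTFREE 30
      PCTUSED 60
      STORAGE ( NEXT 100K
                PCTINCREASE 0 );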

Moving a Table to a New Segment or Tablespace

The ALTER TABLE ... MOVE statement enables you to relocate data of a nonpartitioned table into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the table's storage attributes, including those (such as INITIAL) that cannot otherwise be modified using ALTER TABLE.

The following statement moves the hr.admin_emp table to a new segment, specifying new storage parameters:

ALTER TABLE hr.admin_emp MOVE
      STORAGE ( INITIAL 20K
                NEXT 40K
                MINEXTENTS 2
                MAXEXTENTS 20
                PCTINCREASE 0 );

If the table includes LOB columns, this statement can also move the LOB data and LOB index segments associated with the table, provided you explicitly specify them. If you do not specify them, the default is to not move the LOB data and LOB index segments.

Manually Allocating Storage for a Table

Oracle dynamically allocates additional extents for the data segment of a table, as required. However, perhaps you want to allocate an additional extent for a table explicitly. For example, in an Oracle Real Application Clusters environment, an extent of a table can be allocated explicitly for a specific instance.

A new extent can be allocated for a table using the ALTER TABLE ... ALLOCATE EXTENT clause.
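
For example, the following sketch explicitly allocates a new 100K extent for the hr.admin_emp table:

ALTER TABLE hr.admin_emp
      ALLOCATE EXTENT (SIZE 100K);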

You can also explicitly deallocate unused space using the DEALLOCATE UNUSED clause of ALTER TABLE. This is described in "Deallocating Space".

See Also:

Oracle9i Real Application Clusters Administration for information about using the ALLOCATE EXTENT clause in an Oracle Real Application Clusters environment

Modifying an Existing Column's Definition

Use the ALTER TABLE ... MODIFY statement to modify an existing column's definition. You can modify a column's datatype, default value, or column constraint.

You can increase the length of an existing column, or decrease it, if all existing data satisfies the new length. You can change a column from byte semantics to CHAR semantics or vice versa. You must set the initialization parameter BLANK_TRIMMING=TRUE to decrease the length of a nonempty CHAR column.

If you are modifying a table to increase the length of a column of datatype CHAR, realize that this can be a time consuming operation and can require substantial additional storage, especially if the table contains many rows. This is because the CHAR value in each row must be blank-padded to satisfy the new column length.
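
For example, the following sketch increases the length of the job column of the hr.admin_emp table created earlier:

ALTER TABLE hr.admin_emp
      MODIFY (job VARCHAR2(20));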

See Also:

Oracle9i SQL Reference for additional information about modifying table columns and additional restrictions

Adding Table Columns

To add a column to an existing table, use the ALTER TABLE ... ADD statement.

The following statement alters the hr.admin_emp table to add a new column named bonus:

ALTER TABLE hr.admin_emp
      ADD (bonus NUMBER (7,2));

If a new column is added to a table, the column is initially NULL unless you specify the DEFAULT clause. When you specify a default value, Oracle updates the new column in each existing row with that value.

You can add a column with a NOT NULL constraint to a table only if the table does not contain any rows, or you specify a default value.
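
For example, the following sketch adds a hypothetical status column with a NOT NULL constraint; it succeeds even if the table contains rows, because a default value is supplied:

ALTER TABLE hr.admin_emp
      ADD (status VARCHAR2(10) DEFAULT 'ACTIVE' NOT NULL);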

See Also:

Oracle9i SQL Reference for additional information about adding table columns and additional restrictions

Renaming Table Columns

Oracle allows you to rename existing columns in a table. Use the RENAME COLUMN clause of the ALTER TABLE statement to rename a column. The new name must not conflict with the name of any existing column in the table. No other clauses are allowed in conjunction with the RENAME COLUMN clause.

The following statement renames the comm column of the hr.admin_emp table.

ALTER TABLE hr.admin_emp
      RENAME COLUMN comm TO commission;

As noted earlier, altering a table's column can invalidate dependent objects. However, when you rename a column, Oracle updates associated data dictionary tables to ensure that function-based indexes and check constraints remain valid.

Oracle also allows you to rename column constraints. This is discussed in "Renaming Constraints".


Note:

The RENAME TO clause of ALTER TABLE appears similar in syntax to the RENAME COLUMN clause, but is used for renaming the table itself.


Dropping Table Columns

You can drop columns that are no longer needed from a table, including an index-organized table. This provides a convenient means to free space in a database, and avoids having to export and import data and then re-create indexes and constraints.

You cannot drop all columns from a table, nor can you drop columns from a table owned by SYS. Any attempt to do so results in an error.

See Also:

Oracle9i SQL Reference for information about additional restrictions and options for dropping columns from a table

Removing Columns from Tables

When you issue an ALTER TABLE ... DROP COLUMN statement, the column descriptor and the data associated with the target column are removed from each row in the table. You can drop multiple columns with one statement. The following statements are examples of dropping columns from the hr.admin_emp table.

This statement drops only the sal column:

ALTER TABLE hr.admin_emp DROP COLUMN sal;

The following statement drops both the bonus and commission columns:

ALTER TABLE hr.admin_emp DROP (bonus, commission);

Marking Columns Unused

If you are concerned about the length of time it could take to drop column data from all of the rows in a large table, you can use the ALTER TABLE ... SET UNUSED statement. This statement marks one or more columns as unused, but does not actually remove the target column data or restore the disk space occupied by these columns. However, a column that is marked as unused is not displayed in queries or data dictionary views, and its name is removed so that a new column can reuse that name. All constraints, indexes, and statistics defined on the column are also removed.

To mark the hiredate and mgr columns as unused, execute the following statement:

ALTER TABLE hr.admin_emp SET UNUSED (hiredate, mgr);

You can later remove columns that are marked as unused by issuing an ALTER TABLE ... DROP UNUSED COLUMNS statement. Unused columns are also removed from the target table whenever an explicit drop of any particular column or columns of the table is issued.

The data dictionary views USER_UNUSED_COL_TABS, ALL_UNUSED_COL_TABS, or DBA_UNUSED_COL_TABS can be used to list all tables containing unused columns. The COUNT field shows the number of unused columns in the table.

SELECT * FROM DBA_UNUSED_COL_TABS;

OWNER                       TABLE_NAME                  COUNT
--------------------------- --------------------------- -----
HR                          ADMIN_EMP                       2

Removing Unused Columns

The ALTER TABLE ... DROP UNUSED COLUMNS statement is the only action allowed on unused columns. It physically removes unused columns from the table and reclaims disk space.

In the example that follows, the optional keyword CHECKPOINT is specified. This option causes a checkpoint to be applied after processing the specified number of rows, in this case 250. Checkpointing reduces the amount of undo accumulated during the drop column operation and avoids a potential exhaustion of undo space.

ALTER TABLE hr.admin_emp DROP UNUSED COLUMNS CHECKPOINT 250;

Redefining Tables Online

In highly available systems, it is occasionally necessary to redefine large "hot" tables to improve the performance of queries or DML performed against these tables. Oracle provides a mechanism to redefine tables online. This mechanism provides a significant increase in availability compared to traditional methods of redefining tables that require tables to be taken offline.

When a table is redefined online, it is accessible to DML during much of the redefinition process. The table is locked in exclusive mode only during a very small window that is independent of the size of the table and the complexity of the redefinition.


Features of Online Table Redefinition

Online table redefinition enables you to modify the storage parameters of a table, move it to a different tablespace in the same schema, add support for parallel queries, add or drop partitioning support, re-create the table to reduce fragmentation, change a heap-organized table to an index-organized table (or vice versa), and add or drop columns.

The DBMS_REDEFINITION Package

The mechanism for performing online redefinition is the PL/SQL package DBMS_REDEFINITION. Execute privilege on this package is granted to EXECUTE_CATALOG_ROLE. In addition to having execute privilege on this package, you must be granted the following privileges: CREATE ANY TABLE, ALTER ANY TABLE, DROP ANY TABLE, LOCK ANY TABLE, and SELECT ANY TABLE.

Steps for Online Redefinition of Tables

To redefine a table online, perform the following steps.

  1. Choose one of the following two methods of redefinition:
    • The first method of redefinition, and the preferred method, is to use the primary keys to perform the redefinition. For this method, the pre-redefinition and the post-redefinition versions of the tables should have the same primary key columns. This is the default method of redefinition.
    • The second method of redefinition is to use ROWIDs. For this method, the table to be redefined should not be an index organized table. Also, in this method of redefinition, a hidden column (M_ROW$$) is added to the post-redefined version of the table and it is recommended that this column be marked as unused or dropped after the redefinition is completed.
  2. Verify that the table can be online redefined by invoking the DBMS_REDEFINITION.CAN_REDEF_TABLE() procedure and specifying the method of redefinition to be used. If the table is not a candidate for online redefinition, then this procedure raises an error indicating why the table cannot be online redefined.
  3. Create an empty interim table (in the same schema as the table to be redefined) with all of the desired attributes. If columns are to be dropped, do not include them in the definition of the interim table. If a column is to be added, then add the column definition to the interim table.

    It is possible to perform table redefinition in parallel. If you specify a degree of parallelism on both of the tables and you ensure that parallel execution is enabled for the session, Oracle will use parallel execution whenever possible to perform the redefinition.

  4. Start the redefinition process by calling DBMS_REDEFINITION.START_REDEF_TABLE(), providing the following:
    • The table to be redefined
    • The interim table name
    • The column mapping
    • The method of redefinition

    If the column mapping information is not supplied, then it is assumed that all the columns (with their names unchanged) are to be included in the interim table. If the column mapping is supplied, then only those columns specified explicitly in the column mapping are considered. If the method of redefinition is not specified, then the default method of redefinition using primary keys is assumed.

  5. Create any triggers, indexes, grants and constraints on the interim table. Any referential constraints involving the interim table (that is, the interim table is either a parent or a child table of the referential constraint) must be created disabled. Until the redefinition process is either completed or aborted, any trigger defined on the interim table will not execute.

    When the redefinition is completed, the triggers, constraints, indexes and grants associated with the interim table replace those on the table being redefined. The referential constraints involving the interim table (created disabled) transfer to the table being redefined and become enabled after the redefinition is complete.

  6. Execute the DBMS_REDEFINITION.FINISH_REDEF_TABLE() procedure to complete the redefinition of the table. During this procedure, the original table is locked in the exclusive mode for a very small window. This window is independent of the amount of data in the original table. Also, as part of this procedure, the following occurs:
    1. The original table is redefined such that it has all the attributes, indexes, constraints, grants and triggers of the interim table
    2. The referential constraints involving the interim table now involve the post redefined table and are enabled.
  7. Optionally, rename any indexes, triggers, and constraints that were created on the interim table and that are now defined on the redefined table. If the redefinition was done using ROWIDs, the post-redefined table will have a hidden column (M_ROW$$) and it is recommended that the user set this hidden column to unused as follows:
    ALTER TABLE table_name SET UNUSED (M_ROW$$)
    
    
  8. The following is the end result of the redefinition process:
    • The original table is redefined with the attributes and features of the interim table.
    • The triggers, grants, indexes and constraints defined on the interim table after START_REDEF_TABLE() and before FINISH_REDEF_TABLE() are now defined on the post-redefined table. Any referential constraints involving the interim table before the redefinition process was finished now involve the post-redefinition table and are enabled.
    • Any indexes, triggers, grants and constraints defined on the original table (prior to redefinition) are transferred to the interim table and are dropped when the user drops the interim table. Any referential constraints involving the original table before the redefinition now involve the interim table and are disabled.
    • Any PL/SQL procedures and cursors defined on the original table (prior to redefinition) are invalidated. They are automatically revalidated (this revalidation can fail if the shape of the table was changed as a result of the redefinition process) whenever they are used next.

Intermediate Synchronization

After the redefinition process has been started by calling START_REDEF_TABLE() and before FINISH_REDEF_TABLE() has been called, it is possible that a large number of DML statements have been executed on the original table. If you know this is the case, it is recommended that you periodically synchronize the interim table with the original table. This is done by calling the DBMS_REDEFINITION.SYNC_INTERIM_TABLE() procedure. Calling this procedure reduces the time taken by FINISH_REDEF_TABLE() to complete the redefinition process.

The small amount of time that the original table is locked during FINISH_REDEF_TABLE() is independent of whether SYNC_INTERIM_TABLE() has been called.

Abort and Cleanup After Errors

In the event that an error is raised during the redefinition process, or if you choose to abort the redefinition process, call DBMS_REDEFINITION.ABORT_REDEF_TABLE(). This procedure drops temporary logs and tables associated with the redefinition process. After this procedure is called, the user can drop the interim table and its associated objects.
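
For example, using the table names from the example that follows, an abort call looks like this:

BEGIN
DBMS_REDEFINITION.ABORT_REDEF_TABLE('hr', 'admin_emp', 'int_admin_emp');
END;
/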

Example of Online Table Redefinition

This example illustrates online redefinition of the previously created table hr.admin_emp, which at this point contains only the columns empno, ename, job, and deptno. The table is redefined as follows: new columns mgr, hiredate, sal, and bonus are added; the deptno column has its value increased by 10; and the redefined table is partitioned by range on empno.

The steps in this redefinition are illustrated below.

  1. Verify that the table is a candidate for online redefinition.
    BEGIN
    DBMS_REDEFINITION.CAN_REDEF_TABLE('hr','admin_emp',
            dbms_redefinition.cons_use_pk);
    END;
    /
    
    
  2. Create an interim table hr.int_admin_emp.
    CREATE TABLE hr.int_admin_emp
           (empno      NUMBER(5) PRIMARY KEY,
             ename      VARCHAR2(15) NOT NULL,
             job        VARCHAR2(10),
             mgr        NUMBER(5),
             hiredate   DATE DEFAULT (sysdate),
             sal        NUMBER(7,2),
             deptno     NUMBER(3) NOT NULL,
             bonus      NUMBER (7,2) DEFAULT(1000))
         PARTITION BY RANGE(empno)
           (PARTITION emp1000 VALUES LESS THAN (1000) TABLESPACE admin_tbs,
            PARTITION emp2000 VALUES LESS THAN (2000) TABLESPACE admin_tbs2);
    
    
  3. Start the redefinition process.
    BEGIN
    DBMS_REDEFINITION.START_REDEF_TABLE('hr', 'admin_emp','int_admin_emp',
           'empno empno, ename ename, job job, deptno+10 deptno, 0 bonus',
            dbms_redefinition.cons_use_pk);
    END;
    /
    
    
  4. Create any triggers, indexes and constraints on hr.int_admin_emp. During the final step of redefinition, these are transferred back to the original table. Any referential constraints involving hr.int_admin_emp should be created disabled. You can define any grants associated with the interim table. These replace the grants on the original table after the redefinition.
    ALTER TABLE hr.int_admin_emp ADD CONSTRAINT admin_dept_fkey2
         FOREIGN KEY (deptno) REFERENCES hr.departments (department_id);
    ALTER TABLE hr.int_admin_emp MODIFY CONSTRAINT admin_dept_fkey2
         DISABLE KEEP INDEX;
    
    

    The disabled constraint, admin_dept_fkey2, will be enabled automatically as part of the finish redefinition process and will then involve the newly redefined admin_emp table.

  5. Optionally, synchronize the interim table hr.int_admin_emp.
    BEGIN 
    DBMS_REDEFINITION.SYNC_INTERIM_TABLE('hr', 'admin_emp', 'int_admin_emp');
    END;
    /
    
    
  6. Complete the redefinition.
    BEGIN
    DBMS_REDEFINITION.FINISH_REDEF_TABLE('hr', 'admin_emp', 'int_admin_emp');
    END;
    /
    
    

    The table hr.admin_emp is locked in the exclusive mode only for a small window toward the end of this step. After this call the table hr.admin_emp is redefined such that it has all the attributes of the hr.int_admin_emp table.

  7. Drop the interim table.

Restrictions

The following restrictions apply to the online redefinition of tables:

Dropping Tables

To drop a table, the table must be contained in your schema or you must have the DROP ANY TABLE system privilege.

To drop a table that is no longer needed, use the DROP TABLE statement. The following statement drops the hr.int_admin_emp table:

DROP TABLE hr.int_admin_emp;

If the table to be dropped contains any primary or unique keys referenced by foreign keys of other tables and you intend to drop the FOREIGN KEY constraints of the child tables, include the CASCADE clause in the DROP TABLE statement, as shown below:

DROP TABLE hr.admin_emp CASCADE CONSTRAINTS;

Caution:

Before dropping a table, familiarize yourself with the consequences of doing so:

  • Dropping a table removes the table definition from the data dictionary. All rows of the table are no longer accessible.
  • All indexes and triggers associated with a table are dropped.
  • All views and PL/SQL program units dependent on a dropped table remain, yet become invalid (not usable). See "Managing Object Dependencies" for information about how Oracle manages dependencies.
  • All synonyms for a dropped table remain, but return an error when used.
  • All extents allocated for a table that is dropped are returned to the free space of the tablespace and can be used by any other object requiring new extents or new objects. All rows corresponding to a clustered table are deleted from the blocks of the cluster. Clustered tables are the subject of Chapter 18, "Managing Clusters".

Perhaps instead of dropping a table, you want to truncate it. The TRUNCATE statement provides a fast, efficient method for deleting all rows from a table, but it does not affect any structures associated with the table being truncated (column definitions, constraints, triggers, and so forth) or authorizations. The TRUNCATE statement is discussed in "Truncating Tables and Clusters".
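
For example:

TRUNCATE TABLE hr.admin_emp;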

Managing Index-Organized Tables

This section describes aspects of managing index-organized tables.

What Are Index-Organized Tables?

An index-organized table has a storage organization that is a variant of a primary B-tree. Unlike an ordinary (heap-organized) table whose data is stored as an unordered collection (heap), data for an index-organized table is stored in a B-tree index structure in a primary key sorted manner. Besides storing the primary key column values of an index-organized table row, each index entry in the B-tree stores the nonkey column values as well.

Why Use Index-Organized Tables?

Index-organized tables provide fast key-based access to table data for queries involving exact match and range searches. Changes to the table data (such as adding new rows, updating rows, or deleting rows) result only in updating the index structure (because there is no separate table storage area).

Also, storage requirements are reduced because key columns are not duplicated in the table and index. The remaining nonkey columns are stored in the index structure.

Index-organized tables are particularly useful when you are using applications that must retrieve data based on a primary key. Index-organized tables are also suitable for modeling application-specific index structures. For example, content-based information retrieval applications containing text, image and audio data require inverted indexes that can be effectively modeled using index-organized tables.

Differences Between Index-Organized and Regular Tables

As shown in Figure 15-1, the index-organized table is somewhat similar to a configuration consisting of an ordinary table and an index on one or more of the table columns, but instead of maintaining two separate storage structures, one for the table and one for the B-tree index, the database system maintains only a single B-tree index. Also, rather than having a row's rowid stored in the index entry, the nonkey column values are stored. Thus, each B-tree index entry contains:

primary_key_value, non_primary_key_column_values

Figure 15-1 Structure of Regular Table versus an Index-Organized Table



Applications manipulate the index-organized table just like an ordinary table, using SQL statements. However, the database system performs all operations by manipulating the corresponding B-tree index.


Creating Index-Organized Tables

You use the CREATE TABLE statement to create index-organized tables, but you must provide the following additional information: the ORGANIZATION INDEX qualifier, which indicates that this is an index-organized table, and a primary key, specified through a column constraint clause (for a single-column primary key) or a table constraint clause (for a multiple-column primary key). Optionally, you can specify a row overflow clause (OVERFLOW) and an associated threshold value (PCTTHRESHOLD), which are described later in this section.

The following example creates an index-organized table:

CREATE TABLE admin_docindex(
        token CHAR(20), 
        doc_id NUMBER,
        token_frequency NUMBER,
        token_offsets VARCHAR2(512),
        CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
    ORGANIZATION INDEX 
    TABLESPACE admin_tbs
    PCTTHRESHOLD 20
    OVERFLOW TABLESPACE admin_tbs2;

The above example shows that the ORGANIZATION INDEX qualifier specifies an index-organized table, where the key columns and nonkey columns reside in an index defined on columns that designate the primary key (token, doc_id) for the table.

Index-organized tables can store object types. The following example creates object type admin_typ, then creates an index-organized table containing a column of object type admin_typ:

CREATE OR REPLACE TYPE admin_typ AS OBJECT
    (col1 NUMBER, col2 VARCHAR2(6));
CREATE TABLE admin_iot (c1 NUMBER primary key, c2 admin_typ)
    ORGANIZATION INDEX;

You can also create an index-organized table of object types. For example:

CREATE TABLE admin_iot2 OF admin_typ (col1 PRIMARY KEY)
    ORGANIZATION INDEX;
See Also:

"Creating Partitioned Index-Organized Tables" for information about creating partitioned index-organized tables

Using the AS Subquery

You can create an index-organized table using the AS subquery. Creating an index-organized table in this manner enables you to load the table in parallel by using the PARALLEL option.

The following statement creates an index-organized table (in parallel) by selecting rows from the conventional table hr.jobs:

CREATE TABLE admin_iot3(i PRIMARY KEY, j, k, l) 
     ORGANIZATION INDEX PARALLEL (DEGREE 2)
     AS SELECT * FROM hr.jobs;

Using the Overflow Clause

The overflow clause specified in the earlier example indicates that any nonkey columns of rows exceeding 20% of the block size are placed in a data segment stored in the admin_tbs2 tablespace. The key columns should fit the specified threshold.

If an update of a nonkey column causes the row to decrease in size, Oracle identifies the row piece (head or tail) to which the update is applicable and rewrites that piece.

If an update of a nonkey column causes the row to increase in size, Oracle identifies the piece (head or tail) to which the update is applicable and rewrites that row piece. If the update's target turns out to be the head piece, note that this piece can again be broken in two to keep the row size below the specified threshold.

The nonkey columns that fit in the index leaf block are stored as a row head-piece that contains a ROWID field linking it to the next row piece stored in the overflow data segment. The only columns that are stored in the overflow area are those that do not fit.

Choosing and Monitoring a Threshold Value

You should choose a threshold value that can accommodate your key columns, as well as the first few nonkey columns (if they are frequently accessed).

After choosing a threshold value, you can monitor tables to verify that the value you specified is appropriate. You can use the ANALYZE TABLE ... LIST CHAINED ROWS statement to determine the number and identity of rows exceeding the threshold value.
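
For example, assuming a CHAINED_ROWS table has been created (for example, with the utlchain.sql script), a sketch of such a check is:

ANALYZE TABLE admin_docindex LIST CHAINED ROWS;

SELECT count(*) FROM chained_rows
    WHERE table_name = 'ADMIN_DOCINDEX';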

See Also:

Oracle9i SQL Reference for details about this use of the ANALYZE statement

Using the INCLUDING Clause

In addition to specifying PCTTHRESHOLD, you can use the INCLUDING clause to control which nonkey columns are stored with the key columns. Oracle accommodates all nonkey columns up to the column specified in the INCLUDING clause in the index leaf block, provided it does not exceed the specified threshold. All nonkey columns beyond the column specified in the INCLUDING clause are stored in the overflow area.


Note:

Oracle moves all primary key columns of an index-organized table to the beginning of the table (in their key order), in order to provide efficient primary key based access. As an example:

CREATE TABLE admin_iot4(a INT, b INT, c INT, d INT, 
                primary key(c,b))
    ORGANIZATION INDEX;

The stored column order is: c b a d (instead of: a b c d). The last primary key column is b, based on the stored column order. The INCLUDING column can be the last primary key column (b in this example), or any nonkey column (that is, any column after b in the stored column order).


The example presented earlier can be modified to create an index-organized table where the token_offsets column value is always stored in the overflow area:

CREATE TABLE admin_docindex2(
        token CHAR(20), 
        doc_id NUMBER,
        token_frequency NUMBER,
        token_offsets VARCHAR2(512),
        CONSTRAINT pk_admin_docindex2 PRIMARY KEY (token, doc_id))
    ORGANIZATION INDEX 
    TABLESPACE admin_tbs
    PCTTHRESHOLD 20
    INCLUDING token_frequency
    OVERFLOW TABLESPACE admin_tbs2;

Here, only nonkey columns prior to token_offsets (in this case a single column only) are stored with the key column values in the index leaf block.

Using Key Compression

Creating an index-organized table using key compression enables you to eliminate repeated occurrences of key column prefix values.

Key compression breaks an index key into a prefix and a suffix entry. Compression is achieved by sharing the prefix entries among all the suffix entries in an index block. This sharing can lead to huge savings in space, allowing you to store more keys in each index block while improving performance.

You can enable key compression using the COMPRESS clause while creating an index-organized table or while moving (rebuilding) an index-organized table.

You can also specify the prefix length (as the number of key columns), which identifies how the key columns are broken into a prefix and suffix entry.

CREATE TABLE admin_iot5(i INT, j INT, k INT, l INT, PRIMARY KEY (i, j, k)) 
    ORGANIZATION INDEX COMPRESS;

The preceding statement is equivalent to the following statement:

CREATE TABLE admin_iot6(i INT, j INT, k INT, l INT, PRIMARY KEY(i, j, k)) 
    ORGANIZATION INDEX COMPRESS 2;

For the list of values (1,2,3), (1,2,4), (1,2,7), (1,3,5), (1,3,4), (1,4,4) the repeated occurrences of (1,2), (1,3) are compressed away.

You can also override the default prefix length used for compression as follows:

CREATE TABLE admin_iot7(i INT, j INT, k INT, l INT, PRIMARY KEY (i, j, k)) 
    ORGANIZATION INDEX COMPRESS 1;

For the list of values (1,2,3), (1,2,4), (1,2,7), (1,3,5), (1,3,4), (1,4,4), the repeated occurrences of 1 are compressed away.

You can disable compression as follows:

ALTER TABLE admin_iot5 MOVE NOCOMPRESS;
See Also:

Oracle9i Database Concepts for more details about key compression

Maintaining Index-Organized Tables

Index-organized tables differ from regular tables only in physical organization; logically, they are manipulated in the same manner. You can use an index-organized table in place of a regular table in INSERT, SELECT, DELETE, and UPDATE statements.

Altering Index-Organized Tables

You can use the ALTER TABLE statement to modify physical and storage attributes for both the primary key index and overflow data segments. All attributes specified prior to the OVERFLOW keyword are applicable to the primary key index segment. All attributes specified after the OVERFLOW keyword are applicable to the overflow data segment. For example, you can set the INITRANS of the primary key index segment to 4 and the INITRANS of the overflow data segment to 6 as follows:

ALTER TABLE admin_docindex INITRANS 4 OVERFLOW INITRANS 6;

You can also alter PCTTHRESHOLD and INCLUDING column values. A new setting is used to break the row into head and overflow tail pieces during subsequent operations. For example, the PCTTHRESHOLD and INCLUDING column values can be altered for the admin_docindex table as follows:

ALTER TABLE admin_docindex PCTTHRESHOLD 15 INCLUDING doc_id;

By setting the INCLUDING column to doc_id, all the columns that follow doc_id, that is, token_frequency and token_offsets, are stored in the overflow data segment.

For index-organized tables created without an overflow data segment, you can add an overflow data segment by using the ADD OVERFLOW clause. For example, you can add an overflow segment to table admin_iot3 as follows:

ALTER TABLE admin_iot3 ADD OVERFLOW TABLESPACE admin_tbs2;

Moving (Rebuilding) Index-Organized Tables

Because index-organized tables are primarily stored in a B-tree index, you can encounter fragmentation as a consequence of incremental updates. However, you can use the ALTER TABLE ... MOVE statement to rebuild the index and reduce this fragmentation.

The following statement rebuilds the index-organized table admin_docindex:

ALTER TABLE admin_docindex MOVE;

You can rebuild index-organized tables online using the ONLINE keyword. The overflow data segment, if present, is rebuilt when the OVERFLOW keyword is specified. For example, to rebuild the admin_docindex table but not the overflow data segment, perform a move online as follows:

ALTER TABLE admin_docindex MOVE ONLINE;

To rebuild the admin_docindex table along with its overflow data segment, perform the move as shown in the following statement. This statement also illustrates moving both the table and overflow data segment to new tablespaces.

ALTER TABLE admin_docindex MOVE TABLESPACE admin_tbs2 
    OVERFLOW TABLESPACE admin_tbs3;

In this last example, an index-organized table with a LOB column (CLOB) is built. Then the table is moved while the LOB index and data segment are rebuilt and moved to a new tablespace.

CREATE TABLE admin_iot_lob
   (c1 number (6) primary key,
    admin_lob CLOB)
   ORGANIZATION INDEX
   LOB (admin_lob) STORE AS (TABLESPACE admin_tbs2);
ALTER TABLE admin_iot_lob MOVE LOB (admin_lob) STORE AS (TABLESPACE admin_tbs3); 

Updating the Key Column

A key column update is logically equivalent to deleting the row with the old key value and inserting the row with the new key value at the appropriate place to maintain the primary key order.

Logically, in the following example, the admin_docindex table row for token='coins' and doc_id=10 is deleted and a new row for token='medals' and doc_id=10 is inserted:

UPDATE admin_docindex
    SET token='medals'
    WHERE token='coins' and doc_id=10;

Analyzing Index-Organized Tables

Just like conventional tables, index-organized tables are analyzed using the ANALYZE statement. For example, the following statement gathers statistics for the admin_docindex table:

ANALYZE TABLE admin_docindex COMPUTE STATISTICS; 

Note:

Oracle recommends that the DBMS_STATS package be used for collecting optimizer statistics. See "Analyzing Tables, Indexes, and Clusters" for information about collecting optimizer statistics, and for information about using the ANALYZE statement to collect non-optimizer statistics, validate object structure, and list chained rows.


The ANALYZE statement analyzes both the primary key index segment and the overflow data segment, and computes logical as well as physical statistics for the table.

Using the ORDER BY Clause with Index-Organized Tables

If an ORDER BY clause only references the primary key column or a prefix of it, then the optimizer avoids the sorting overhead as the rows are returned sorted on the primary key columns.

The following queries avoid sorting overhead because the data is already sorted on the primary key:

SELECT * FROM admin_docindex2 ORDER BY token, doc_id;
SELECT * FROM admin_docindex2 ORDER BY token;

If, however, you have an ORDER BY clause on a suffix of the primary key column or non-primary key columns, additional sorting is required (assuming no other secondary indexes are defined).

SELECT * FROM admin_docindex2 ORDER BY doc_id;
SELECT * FROM admin_docindex2 ORDER BY token_frequency;

Converting Index-Organized Tables to Regular Tables

You can convert index-organized tables to regular tables using the Oracle IMPORT or EXPORT utilities, or the CREATE TABLE ... AS SELECT statement.

To convert an index-organized table to a regular (heap-organized) table, you can, for example, use the CREATE TABLE ... AS SELECT statement, as sketched below.
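
The following minimal sketch creates a heap-organized copy of the admin_docindex table (the name admin_docindex_heap is hypothetical):

CREATE TABLE admin_docindex_heap
    AS SELECT * FROM admin_docindex;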

Managing External Tables

Oracle allows you read-only access to data in external tables. External tables are defined as tables that do not reside in the database, and can be in any format for which an access driver is provided. By providing Oracle with metadata describing an external table, Oracle is able to expose the data in the external table as if it were data residing in a regular database table. The external data can be queried directly and in parallel using SQL.

You can, for example, select, join, or sort external table data. You can also create views and synonyms for external tables. However, no DML operations (UPDATE, INSERT, or DELETE) are possible, and no indexes can be created, on external tables.


Note:

The DBMS_STATS package can be used for gathering statistics for external tables. The ANALYZE statement is not supported for gathering statistics for external tables.

For information about using the DBMS_STATS package, see Oracle9i Database Performance Tuning Guide and Reference


The means of defining the metadata for external tables is through the CREATE TABLE ... ORGANIZATION EXTERNAL statement. This external table definition can be thought of as a view that allows running any SQL query against external data without requiring that the external data first be loaded into the database. An access driver is the actual mechanism used to read the external data in the table.

Oracle provides an access driver for external tables. It allows the reading of data from external files using the Oracle loader technology. The ORACLE_LOADER access driver provides data mapping capabilities which are a subset of the control file syntax of the SQL*Loader utility.

Oracle's external tables feature provides a valuable means for performing basic extraction, transformation, and transportation (ETT) tasks that are common for data warehousing.

The following sections discuss the DDL statements that are supported for external tables. Only the DDL statements discussed here are supported, and not all clauses of those statements are supported.

Creating External Tables

You create external tables using the ORGANIZATION EXTERNAL clause of the CREATE TABLE statement. You are not in fact creating a table; that is, an external table does not have any extents associated with it. Rather, you are creating metadata in the data dictionary that enables you to access external data.

The following example creates an external table, then uploads the data to a database table.

EXAMPLE: Creating an External Table and Loading Data

The file empxt1.dat contains the following sample data:

360,Jane,Janus,ST_CLERK,121,17-MAY-2001,3000,0,50,jjanus
361,Mark,Jasper,SA_REP,145,17-MAY-2001,8000,.1,80,mjasper
362,Brenda,Starr,AD_ASST,200,17-MAY-2001,5500,0,10,bstarr
363,Alex,Alda,AC_MGR,145,17-MAY-2001,9000,.15,80,aalda

The file empxt2.dat contains the following sample data:

401,Jesse,Cromwell,HR_REP,203,17-MAY-2001,7000,0,40,jcromwel
402,Abby,Applegate,IT_PROG,103,17-MAY-2001,9000,.2,60,aapplega
403,Carol,Cousins,AD_VP,100,17-MAY-2001,27000,.3,90,ccousins
404,John,Richardson,AC_ACCOUNT,205,17-MAY-2001,5000,0,110,jrichard

The following SQL statements create an external table in the hr schema named admin_ext_employees and load its data into the hr.employees table.

CONNECT  /  AS SYSDBA;
-- Set up directories and grant access to hr 
CREATE OR REPLACE DIRECTORY admin_dat_dir
    AS '/net/dlsun301/private6/examples/submitted/ADMIN/flatfiles/data'; 
CREATE OR REPLACE DIRECTORY admin_log_dir 
    AS '/net/dlsun301/private6/examples/submitted/ADMIN/flatfiles/log'; 
CREATE OR REPLACE DIRECTORY admin_bad_dir 
    AS '/net/dlsun301/private6/examples/submitted/ADMIN/flatfiles/bad'; 
GRANT READ ON DIRECTORY admin_dat_dir TO hr; 
GRANT WRITE ON DIRECTORY admin_log_dir TO hr; 
GRANT WRITE ON DIRECTORY admin_bad_dir TO hr;
-- hr connects 
CONNECT hr/hr
-- create the external table
CREATE TABLE admin_ext_employees
                   (employee_id       NUMBER(4), 
                    first_name        VARCHAR2(20),
                    last_name         VARCHAR2(25), 
                    job_id            VARCHAR2(10),
                    manager_id        NUMBER(4),
                    hire_date         DATE,
                    salary            NUMBER(8,2),
                    commission_pct    NUMBER(2,2),
                    department_id     NUMBER(4),
                    email             VARCHAR2(25) 
                   ) 
     ORGANIZATION EXTERNAL 
     ( 
       TYPE ORACLE_LOADER 
       DEFAULT DIRECTORY admin_dat_dir 
       ACCESS PARAMETERS 
       ( 
         records delimited by newline 
         badfile admin_bad_dir:'empxt%a_%p.bad' 
         logfile admin_log_dir:'empxt%a_%p.log' 
         fields terminated by ',' 
         missing field values are null 
         ( employee_id, first_name, last_name, job_id, manager_id, 
           hire_date char date_format date mask "dd-mon-yyyy", 
           salary, commission_pct, department_id, email 
         ) 
       ) 
       LOCATION ('empxt1.dat', 'empxt2.dat') 
     ) 
     PARALLEL 
     REJECT LIMIT UNLIMITED; 
-- enable parallel for loading (good if lots of data to load)
ALTER SESSION ENABLE PARALLEL DML;
-- load the data in hr employees table
INSERT INTO employees (employee_id, first_name, last_name, job_id, manager_id,
                       hire_date, salary, commission_pct, department_id, email) 
            SELECT * FROM admin_ext_employees;

The following paragraphs contain descriptive information about this example.

The first few statements in this example create the directory objects for the operating system directories that contain the data sources, and for the bad record and log files specified in the access parameters. You must also grant READ or WRITE directory object privileges, as appropriate.


Note:

When creating a directory object or BFILEs, ensure that the following conditions are met:

  • The operating system file must not be a symbolic or hard link.
  • The operating system directory path named in the Oracle DIRECTORY object must be an existing OS directory path.
  • The operating system directory path named in the Oracle DIRECTORY object should not contain any symbolic links in its components.

The TYPE specification is given only to illustrate its use. If not specified, ORACLE_LOADER is the default access driver. The access parameters, specified in the ACCESS PARAMETERS clause, are opaque to Oracle. These access parameters are defined by the access driver, and are provided to the access driver by Oracle when the external table is accessed. See Oracle9i Database Utilities for a description of the ORACLE_LOADER access parameters.

The PARALLEL clause enables parallel query on the data sources. The granule of parallelism is by default a data source, but parallel access within a data source is implemented whenever possible. For example, if PARALLEL=3 were specified, then more than one parallel execution server could be working on a data source. But, parallel access within a data source is provided by the access driver only if all of the following conditions are met:

The REJECT LIMIT clause specifies that there is no limit on the number of errors that can occur during a query of the external data. For parallel access, this limit applies to each parallel execution server independently. For example, if REJECT LIMIT 10 is specified, each parallel query process is allowed 10 rejections. Hence, the only precisely enforced values for REJECT LIMIT on parallel query are 0 and UNLIMITED.

In this example, the INSERT INTO TABLE statement generates a dataflow from the external data source to the Oracle SQL engine where data is processed. As data is parsed by the access driver from the external table sources and provided to the external table interface, the external data is converted from its external representation to its Oracle internal data type.

See Also:

Oracle9i SQL Reference provides details of the syntax of the CREATE TABLE statement for creating external tables and specifies restrictions on the use of clauses

Altering External Tables

You can use any of the following ALTER TABLE clauses to change the characteristics of an external table. No other clauses are permitted.

REJECT LIMIT

Changes the reject limit. For example:

ALTER TABLE admin_ext_employees
    REJECT LIMIT 100;

DEFAULT DIRECTORY

Changes the default directory specification. For example:

ALTER TABLE admin_ext_employees 
    DEFAULT DIRECTORY admin_dat2_dir;

ACCESS PARAMETERS

Allows access parameters to be changed without dropping and re-creating the external table metadata. For example:

ALTER TABLE admin_ext_employees
    ACCESS PARAMETERS
       (FIELDS TERMINATED BY ';');

LOCATION

Allows data sources to be changed without dropping and re-creating the external table metadata. For example:

ALTER TABLE admin_ext_employees
    LOCATION ('empxt3.txt',
              'empxt4.txt');

PARALLEL

No difference from regular tables. Allows the degree of parallelism to be changed. No new syntax.

ADD COLUMN

No difference from regular tables. Allows a column to be added to an external table. No new syntax.

MODIFY COLUMN

No difference from regular tables. Allows an external table column to be modified. No new syntax.

DROP COLUMN

No difference from regular tables. Allows an external table column to be dropped. No new syntax.

RENAME TO

No difference from regular tables. Allows an external table to be renamed. No new syntax.

Dropping External Tables

For an external table, the DROP TABLE statement removes only the table metadata in the database. It has no effect on the actual data, which resides outside of the database.
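
For example, the following statement removes the metadata for the external table created earlier; the empxt*.dat files remain untouched:

DROP TABLE admin_ext_employees;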

System and Object Privileges for External Tables

System and object privileges for external tables are a subset of those for regular tables. Only the following system privileges are applicable to external tables: CREATE ANY TABLE, ALTER ANY TABLE, DROP ANY TABLE, and SELECT ANY TABLE.

Only the following object privileges are applicable to external tables: ALTER and SELECT.

However, the object privileges associated with a directory are READ and WRITE.

For external tables, READ privileges are required on directory objects that contain data sources, while WRITE privileges are required for directory objects containing bad, log, or discard files.

Viewing Information About Tables

The following views allow you to access information about tables.


DBA_TABLES

ALL_TABLES

USER_TABLES

The DBA view describes all relational tables in the database. The ALL view describes all tables accessible to the user. The USER view is restricted to tables owned by the user. Some columns in these views contain statistics that are generated by the DBMS_STATS package or the ANALYZE statement.

DBA_TAB_COLUMNS

ALL_TAB_COLUMNS

USER_TAB_COLUMNS

These views describe the columns of tables, views, and clusters in the database. Some columns in these views contain statistics that are generated by the DBMS_STATS package or ANALYZE statement.

DBA_ALL_TABLES

ALL_ALL_TABLES

USER_ALL_TABLES

These views describe all relational and object tables in the database. Object tables are not specifically discussed in this book.

DBA_TAB_COMMENTS

ALL_TAB_COMMENTS

USER_TAB_COMMENTS

These views display comments for tables and views. Comments are entered using the COMMENT statement.

DBA_COL_COMMENTS

ALL_COL_COMMENTS

USER_COL_COMMENTS

These views display comments for table and view columns. Comments are entered using the COMMENT statement.

DBA_EXTERNAL_TABLES

ALL_EXTERNAL_TABLES

USER_EXTERNAL_TABLES

These views list the specific attributes of external tables in the database.

DBA_EXTERNAL_LOCATIONS

ALL_EXTERNAL_LOCATIONS

USER_EXTERNAL_LOCATIONS

These views list the data sources for external tables.

DBA_TAB_HISTOGRAMS

ALL_TAB_HISTOGRAMS

USER_TAB_HISTOGRAMS

These views describe histograms on tables and views.

DBA_TAB_COL_STATISTICS

ALL_TAB_COL_STATISTICS

USER_TAB_COL_STATISTICS

These views provide column statistics and histogram information extracted from the related TAB_COLUMNS views.

DBA_TAB_MODIFICATIONS

ALL_TAB_MODIFICATIONS

USER_TAB_MODIFICATIONS

These views describe tables that have been modified since the last time table statistics were gathered on them. The views are populated only for tables with the MONITORING attribute. They are not populated immediately, but after a time lapse (usually 3 hours).

DBA_UNUSED_COL_TABS

ALL_UNUSED_COL_TABS

USER_UNUSED_COL_TABS

These views list tables with unused columns, as marked by the ALTER TABLE ... SET UNUSED statement.

DBA_PARTIAL_DROP_TABS

ALL_PARTIAL_DROP_TABS

USER_PARTIAL_DROP_TABS

These views list tables that have partially completed DROP COLUMN operations. These operations could be incomplete because the operation was interrupted by the user or a system crash.
