8 Adding and Deleting Oracle RAC from Nodes on Linux and UNIX Systems

This chapter describes how to use the addNode.sh and rootdelete.sh scripts to extend an existing Oracle Real Application Clusters (Oracle RAC) home to other nodes and instances in the cluster, and how to delete Oracle RAC from nodes and instances in the cluster. This chapter provides instructions for Linux and UNIX systems.

If your goal is to clone an existing Oracle RAC home to create multiple new Oracle RAC installations across the cluster, then use the cloning procedures that are described in Chapter 5, "Using Cloning to Add ASM and Oracle RAC to Nodes in a New Cluster".

The topics in this chapter include the following:

Note:

The phrase "target node" as used in this chapter refers to a node to which you plan to extend the Oracle RAC environment.

See Also:

Oracle Database 2 Day + Real Application Clusters Guide for additional information about configuring a new Oracle RAC cluster or scaling up an existing Oracle RAC cluster

Adding Oracle RAC to Nodes Running Clusterware and Oracle Database

Before beginning this procedure, ensure that your existing nodes have the correct path to the CRS_home and that the $ORACLE_HOME environment variable is set correctly.

To add Oracle RAC to nodes that already have Oracle Clusterware and Oracle Database software installed, you must configure the target nodes with the Oracle Database software that is on the existing nodes of the cluster. To do this, perform the following steps, which run Oracle Universal Installer twice: once for the Oracle Clusterware layer and once for the database layer:

  1. Add Oracle RAC to target nodes at the Oracle Clusterware layer by running Oracle Universal Installer from the Oracle Clusterware home on an existing node, as follows:

    CRS_home/oui/bin/addNode.sh -noCopy 
    
  2. Add Oracle RAC to target nodes at the Oracle software layer by running Oracle Universal Installer from the Oracle home, as follows:

    Oracle_home/oui/bin/addNode.sh -noCopy 
    

In the -noCopy mode, Oracle Universal Installer performs all add node operations except for the copying of software to the target nodes.

Note:

Oracle recommends that you back up your voting disk and Oracle Cluster Registry (OCR) files after you complete the node addition process.
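
For example, as the root user you might export the OCR and copy the voting disk with commands similar to the following. The backup file names and the voting disk device shown here are hypothetical; substitute the paths used in your environment (you can list your voting disks with the crsctl query css votedisk command):

# ocrconfig -export /u01/app/oracle/backup/ocr_after_addnode.dmp
# dd if=/dev/raw/raw2 of=/u01/app/oracle/backup/votedisk_after_addnode.bak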

See Also:

"Extending ASM to Nodes Running Single-Instance or Oracle RAC Databases" for complete information about adding ASM instances to new nodes

Adding Oracle RAC to Nodes That Do Not Have Clusterware and Oracle Database

This section explains how to add nodes to clusters using detailed manual procedures. If the nodes to which you want to add Oracle RAC do not have clusterware or Oracle software installed on them, then complete the following steps to add Oracle RAC to the target nodes. The procedures in these steps assume that you already have an operational Linux or UNIX environment.

Otherwise, to add Oracle RAC to a node that is already configured with clusterware and Oracle software, follow the procedure described in "Adding Oracle RAC to Nodes Running Clusterware and Oracle Database".

This section contains the following topics:

Prerequisite Steps for Extending Oracle RAC to Target Nodes

The following steps describe how to set up target nodes to be part of your cluster:

Step 1   Make physical connections

Connect the target nodes' hardware to the network infrastructure of your cluster. This includes establishing electrical connections, configuring network interconnects, configuring shared disk subsystem connections, and so on. See your hardware vendor documentation for details about this step.

Step 2   Install the operating system

Install a cloned image of the operating system that matches the operating system on the other nodes in your cluster. This includes installing required service patches and drivers. See your hardware vendor documentation for details about this process.

See Also:

Your platform-specific Oracle Real Application Clusters installation guide for procedures about using the Database Configuration Assistant (DBCA) to create and delete Oracle RAC databases
Step 3   Create Oracle users

As the root user, create the Oracle users and groups on each target node, using the same user ID and group ID as on the existing nodes.
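
A minimal sketch of this step on Linux follows, assuming the oracle user and the oinstall and dba groups on the existing nodes use the IDs shown; the IDs and group names are examples only, so first run the id oracle command on an existing node to determine the actual values:

# groupadd -g 501 oinstall
# groupadd -g 502 dba
# useradd -u 500 -g oinstall -G dba oracle
# passwd oracle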

Step 4   Verify the installation

Verify the installation with the Cluster Verification Utility (CVU) by performing the following steps:

  1. From the /bin directory in the CRS_home on the existing nodes, run the CVU command to verify your installation at the post hardware installation stage as shown in the following example, where node_list is a comma-delimited list of nodes you want in your cluster:

    cluvfy stage -post hwos -n node_list|all [-verbose]
    

    Note:

    You can only use the all option with the -n argument if you have set the CV_NODE_ALL variable to represent the list of nodes on which you want to perform the CVU operation.

    You can also use this CVU command to:

    • Verify node reachability, for example, from the local node to all of the nodes.

    • Verify user equivalence from the local node to all of the given nodes, node connectivity among all of the given nodes, accessibility to shared storage from all of the given nodes, and so on.

    See Also:

    "Using the Cluster Verification Utility" section in the Oracle Clusterware Administration and Deployment Guide
  2. From the /bin directory in the CRS_home on the existing nodes, run the CVU command to obtain a detailed comparison of the properties of the reference node with those of all of the other nodes that are part of your current cluster environment. In this command, ref_node is a node in your existing cluster against which you want CVU to compare the target nodes that you specify with the comma-delimited list in node_list for the -n option, orainventory_group is the name of the Oracle inventory group, and osdba_group is the name of the OSDBA group:

    cluvfy comp peer [ -refnode ref_node ] -n node_list 
    [ -orainv orainventory_group ] [ -osdba osdba_group ] [-verbose]
    

    For the reference node, select a node from your existing cluster against which you want CVU to compare the target nodes that you specify with the -n option.

    Note:

    For all of the add node and delete node procedures for Linux and UNIX systems, temporary directories such as /tmp, $TEMP, or $TMP, should not be shared directories. If your temporary directories are shared, then set your temporary environment variable, such as $TEMP, to a nonshared location on a local node. In addition, use a directory that exists on all of the nodes.
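
    For example, you might point $TEMP at a local directory that exists on every node before you run any of the add node or delete node procedures (the path shown is an example only):

    TEMP=/u01/app/oracle/tmp; export TEMP
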
Step 5   Check the installation

To verify that your installation is configured correctly, perform the following steps:

  1. Ensure that the target nodes can access the private interconnect. This interconnect must be properly configured before you can complete the procedures described in this chapter.

  2. If you are not using a cluster file system, then determine the location where your cluster software is installed on the existing nodes. Make sure that you have at least 250 MB of free space in the same location on each of the target nodes to install Oracle Clusterware. In addition, ensure that you have enough free space on each target node to install the Oracle binaries.

  3. Ensure that the OCR and the voting disk are accessible by the target nodes using the same path. In addition, the OCR and voting disk devices must have the same permissions as on the existing nodes.

  4. Verify user equivalence to and from an existing node to the target nodes using rsh or SSH on Linux and UNIX systems, as shown in the example following this list. On Windows systems, make sure that you can run the following command from all of the existing nodes of your cluster, where hostname is the public network name of the target node:

    NET USE \\hostname\C$
    

    You have the required administrative privileges on each node if the operating system responds with:

    Command completed successfully.
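
    On Linux and UNIX systems, a simple way to confirm user equivalence is to run a remote command as the oracle user from an existing node to each target node and verify that it completes without prompting for a password, for example (target_node is a placeholder for the public name of each target node; use rsh instead of ssh if that is your configured method):

    ssh target_node date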
    

After completing the procedures in this section, your target nodes are connected to the cluster and configured with the required software to make them visible to Oracle Clusterware.

Note:

Do not change a hostname after the Oracle Clusterware installation. This includes adding or deleting a domain qualification.

Extend Oracle Clusterware to Target Nodes

Extend an existing Oracle Clusterware home to the target nodes following the instructions in Oracle Clusterware Administration and Deployment Guide.

If you are using Oracle Clusterware without vendor clusterware, then you can add and delete Oracle Clusterware on nodes without stopping the existing nodes. If you are using Oracle Clusterware with vendor clusterware, then you can add nodes on some Linux and UNIX systems without stopping the existing nodes if your clusterware supports this. See your vendor-specific clusterware documentation for more information.
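
For example, on many Oracle Clusterware releases you can extend the clusterware in silent mode by running addNode.sh from the Oracle Clusterware home on an existing node. The variables and node names below are illustrative only and vary by release, so confirm the exact syntax in the Oracle Clusterware Administration and Deployment Guide for your version:

CRS_home/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={node3}" 
"CLUSTER_NEW_PRIVATE_NODE_NAMES={node3-priv}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"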

Note:

For systems using shared storage for the clusterware home, ensure that the existing clusterware is accessible by the target nodes. Also ensure that the target nodes can be brought online as part of the existing cluster.

Configure Shared Storage on Target Nodes

Use the information in this section to configure shared storage so that the target nodes can access the Oracle software, and so that the existing nodes can access the target nodes and instances. Then, use the procedure described in "Add the Oracle Real Application Clusters Database Homes to Target Nodes".

Note:

In some cases, your current configuration may not be compatible with an ASM activity that you are trying to perform, either explicitly or with an automatic ASM extension to other nodes. If you are using DBCA to build a database using a new Oracle home, and if the ASM version is from an earlier release of the Oracle software but does not exist on all of the nodes you selected for the database, then ASM cannot be extended. Instead, the DBCA session displays an error, prompting you either to run the add node script or to upgrade ASM using the DBUA.

To extend an existing Oracle RAC database to the target nodes, configure the same type of storage on the target nodes as you are using on the existing nodes in the Oracle RAC environment:

  • Automatic Storage Management (ASM)

    If the Oracle RAC database and ASM reside in the same Oracle home, you do not need to install ASM on the added node because the ASM instance is created implicitly upon node addition. If, however, ASM resides in its own home (as Oracle recommends), then to add the new node to the cluster you must extend the Oracle Clusterware home (CRS_home), then the ASM home, and then the Oracle home, in that order.

    If you are using ASM for storage, make sure that the target nodes can access the ASM disks with the same permissions as the existing nodes.

    See Also:

    "Extending ASM to Nodes Running Single-Instance or Oracle RAC Databases" for instructions about adding a new ASM instance to a node running either a single-instance database or an Oracle RAC database instance in a cluster
  • OCFS2

    If you are using OCFS2, then make sure that the target nodes can access the cluster file systems in the same way that the other nodes access them.

    Run the following command to verify your cluster file system and obtain detailed output, where nodelist includes both the preexisting nodes and the target nodes, and file_system is the name of the file system that you used for the Oracle Cluster File System:

    cluvfy comp cfs -n nodelist -f file_system [-verbose]
    

    See Also:

    Oracle Clusterware Administration and Deployment Guide for more information about enabling and using the CVU, and your platform-specific Oracle Clusterware installation guide for more information about the Oracle Cluster File System
  • Vendor cluster file systems

    If your cluster database uses vendor cluster file systems, then configure the target nodes to use the vendor cluster file systems. See the vendor clusterware documentation for the preinstallation steps for your Linux or UNIX platform.

  • Raw device storage

    If your cluster database uses raw devices, then prepare the raw devices on the target nodes, as follows:

    To prepare raw device storage on the target nodes, you need at least two new disk partitions to accommodate the redo logs for each new instance. Make these disk partitions the same size as the redo log partitions that you configured for the existing nodes' instances. Also create an additional logical partition for the undo tablespace for automatic undo management.

    On applicable operating systems, you can create symbolic links to your raw devices. Optionally, you can create a raw device mapping file and set the DBCA_RAW_CONFIG environment variable so that it points to the raw device mapping file, as shown in the example following this list. Use your vendor-supplied tools to configure the required raw storage.

    See Also:

    Your platform-specific Oracle Real Application Clusters installation guide for procedures about using DBCA to create and delete Oracle RAC databases

    Run the following command to verify that the prepared raw device storage is accessible from all of the configured cluster nodes where node_list includes both the pre-existing nodes and the newly added nodes and storageID_list is a comma-delimited list of storage identifiers:

    cluvfy comp ssa [ -n node_list ] [ -s storageID_list ] [-verbose]
    

    See Also:

    Oracle Clusterware Administration and Deployment Guide for more information about enabling and using the CVU
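
    For illustration, the following commands create symbolic links for a new instance's redo log raw devices and point DBCA at a raw device mapping file. The device names, link paths, and mapping file location are hypothetical examples only; use the devices and paths that apply to your environment:

    ln -s /dev/raw/raw21 /u01/oradata/db/redo3_1
    ln -s /dev/raw/raw22 /u01/oradata/db/redo3_2
    DBCA_RAW_CONFIG=/u01/app/oracle/dbca_raw_config; export DBCA_RAW_CONFIG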

Add the Oracle Real Application Clusters Database Homes to Target Nodes

You can add the Oracle RAC database home to target nodes using either of the following methods:

See Also:

Oracle Universal Installer and OPatch User's Guide for more information about how to configure command-line response files and Oracle Database Net Services Administrator's Guide for more information about NETCA

Extending the Database Home to Target Nodes Using Oracle Universal Installer in Interactive Mode

To extend Oracle RAC to the target nodes, run Oracle Universal Installer in add node mode to configure the Oracle home on the target nodes. If you have multiple Oracle homes, then perform the following steps for each Oracle home that you want to include on the target nodes:

  1. Ensure that you have successfully installed Oracle Database with the Oracle RAC software on at least one node in your cluster environment.

  2. Ensure that the $ORACLE_HOME environment variable identifies the successfully installed Oracle home.

  3. Run the addNode.sh script

    On an existing node from the Oracle_home/oui/bin directory, run the addNode.sh script. This script starts Oracle Universal Installer in add node mode and displays the Oracle Universal Installer Welcome page. Click Next on the Welcome page and Oracle Universal Installer displays the Specify Cluster Nodes for Node Addition page.

  4. Verify the entries that Oracle Universal Installer displays.

    The Specify Cluster Nodes for Node Addition page has a table showing the existing nodes associated with the Oracle home from which you launched Oracle Universal Installer. A node selection table appears on the bottom of this page showing the nodes that are available for addition. Select the nodes that you want to add and click Next.

  5. Oracle Universal Installer verifies connectivity and performs availability checks on the existing nodes and on the nodes that you want to add. Some of the checks performed determine whether:

    • The nodes are up

    • The nodes are accessible by way of the network

  6. If any of the checks fail, then fix the problem and proceed or deselect the node that has the error and proceed. You cannot deselect existing nodes; you must correct problems on the existing nodes before proceeding with node addition. If all of the checks succeed, then Oracle Universal Installer displays the Node Addition Summary page.

    Note:

    If any of the existing nodes are down, then perform the updateNodeList procedure on each of the nodes to fix the node list after the nodes are up. Run the following command where node_list is a comma-delimited list of all of the nodes on which Oracle RAC is deployed:
    oui/bin/runInstaller -updateNodeList 
    "CLUSTER_NODES={node_list}" -local
    
  7. The Node Addition Summary page has the following information about the products that are installed in the Oracle home that you are going to extend to the target nodes:

    • The source for the add node process, which in this case is the Oracle home

    • The existing nodes and target nodes

    • The target nodes that you selected

    • The required and available space on the target nodes

    • The installed products listing all of the products that are already installed in the existing Oracle home

    Click Finish and Oracle Universal Installer displays the Cluster Node Addition Progress page.

  8. The Cluster Node Addition Progress page shows the status of the cluster node addition process. The table on this page has two columns showing the phases of the node addition process and each phase's status, as follows:

    • Copy the Oracle home to the New Nodes—Copies the entire Oracle home from the local node to the target nodes unless the Oracle home is on a cluster file system

    • Save Cluster Inventory—Updates the node list associated with the Oracle home and its inventory

    • Run root.sh—Displays the dialog prompting you to run root.sh on the target nodes

    The Cluster Node Addition Progress page's Status column displays Succeeded if the phase completes, In Progress if the phase is in progress, and Suspended when the phase is pending execution. After Oracle Universal Installer displays the End of Node Addition page, click Exit to end the Oracle Universal Installer session.

  9. Run the root.sh script on all of the target nodes.

  10. On the target node, run the Net Configuration Assistant (NETCA) to add a listener.

    Add a listener to the target node by running NETCA from the target node and selecting only the target node on the Node Selection page.

You can now add database instances to the target nodes as described in "Add ASM and Oracle RAC Database Instances to Target Nodes".

Extending the Database Home to Target Nodes Using Oracle Universal Installer in Silent Mode

You can optionally run addNode.sh in silent mode, replacing steps 1 through 6 of the interactive procedure, as follows.

  1. Ensure that you have successfully installed the Oracle Database with the Oracle RAC software on at least one node in your cluster environment.

  2. Ensure that the $ORACLE_HOME environment variable identifies the successfully installed Oracle home.

  3. Go to Oracle_home/oui/bin and run the addNode.sh script. In the following example, nodeI, nodeI+1 (and so on) are the nodes that you are adding:

    addNode.sh -silent "CLUSTER_NEW_NODES={nodeI, nodeI+1, … nodeI+n}" 
    

    You can also specify the variable=value entries in a response file (filename in the following example), and run the addNode.sh script as follows:

    addNode.sh -silent -responseFile filename
    

    Command-line values always override response file values.
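
    As a minimal sketch, such a response file might contain only the node-list entry, using the same variable syntax shown on the command line above (the node names are hypothetical):

    CLUSTER_NEW_NODES={node3,node4}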

Add ASM and Oracle RAC Database Instances to Target Nodes

You can use either Oracle Enterprise Manager or DBCA to add Oracle RAC database instances to the target nodes. To add a database instance to a target node with Oracle Enterprise Manager, see the Oracle Database 2 Day + Real Application Clusters Guide for complete information.

This section describes using DBCA to add Oracle RAC database instances under the following topics:

These tools guide you through the following tasks:

  • Creating and starting an ASM instance (if the existing instances were using ASM) on each target node

  • Creating a new database instance on each target node

  • Creating and configuring high availability components

  • Creating the Oracle Net configuration

  • Starting the new instance

  • Creating and starting services if you entered services information on the Services Configuration page

After adding the instances to the target nodes, you should perform any necessary service configuration procedures, as described in Chapter 4.

Using DBCA in Interactive Mode to Add ASM and Database Instances to Target Nodes

To add a database instance to a target node with DBCA in interactive mode, perform the following steps:

  1. Ensure that your existing nodes have the $ORACLE_HOME environment variable set correctly.

  2. Start the DBCA by entering dbca at the system prompt from the bin directory in the Oracle_home directory.

    The DBCA displays the Welcome page for Oracle RAC. Click Help on any DBCA page for additional information.

  3. Select Oracle Real Application Clusters database, click Next, and DBCA displays the Operations page.

  4. Select Instance Management, click Next, and DBCA displays the Instance Management page.

  5. Select Add Instance and click Next. The DBCA displays the List of Cluster Databases page that shows the databases and their current status, such as ACTIVE or INACTIVE.

  6. From the List of Cluster Databases page, select the active Oracle RAC database to which you want to add an instance. Enter a user name and password for the database user that has SYSDBA privileges. Click Next and DBCA displays the List of Cluster Database Instances page showing the names of the existing instances for the Oracle RAC database that you selected.

  7. Click Next to add a new instance and DBCA displays the Adding an Instance page.

  8. On the Adding an Instance page, enter the instance name in the field at the top of this page if the instance name that DBCA provides does not match your existing instance naming scheme. Then select the target node name from the list, click Next, and DBCA displays the Services Page.

  9. Enter the services information for the target node's instance, click Next, and DBCA displays the Instance Storage page.

  10. If you are using raw devices or raw partitions, then on the Instance Storage page select the Tablespaces folder and expand it. Select the undo tablespace storage object and a dialog appears on the right-hand side. Change the default datafile name to the raw device name for the tablespace.

  11. If you are using raw devices or raw partitions or if you want to change the default redo log group file name, then on the Instance Storage page select and expand the Redo Log Groups folder. For each redo log group number that you select, DBCA displays another dialog box. Enter the raw device name that you created in the section "Configure Shared Storage on Target Nodes" in the File Name field.

  12. If you are using a cluster file system, then click Finish on the Instance Storage page. If you are using raw devices, then repeat step 11 for all of the other redo log groups, click Finish, and DBCA displays a Summary dialog.

  13. Review the information on the Summary dialog and click OK or click Cancel to end the instance addition operation. The DBCA displays a progress dialog showing DBCA performing the instance addition operation. When DBCA completes the instance addition operation, DBCA displays a dialog asking whether you want to perform another operation.

  14. After you terminate your DBCA session, run the following command to verify the administrative privileges on the target node and obtain detailed information about these privileges where nodelist consists of the target nodes:

    cluvfy comp admprv -o db_config -d oracle_home -n nodelist [-verbose]
    
  15. Perform any needed service configuration procedures, as described in Chapter 4, "Introduction to Automatic Workload Management".

Using DBCA in Silent Mode to Add ASM and Database Instances to Target Nodes

You can use the DBCA in silent mode to add instances to nodes on which you have extended an Oracle Clusterware home and an Oracle Database home. Use the following syntax, where the variables are described in Table 8-1:

dbca -silent -addInstance -nodeList node -gdbName gdbname [-instanceName instname]
 -sysDBAUserName sysdba -sysDBAPassword password

Table 8-1 Variables in the DBCA Silent Mode Syntax

Variable    Description
node        The node on which you want to add (or delete) the instance.
gdbname     Global database name.
instname    Name of the instance. Provide an instance name only if you want to override the Oracle naming convention for Oracle RAC instance names.
sysdba      Name of the Oracle user with SYSDBA privileges.
password    Password for the SYSDBA user.


Before you run the dbca command, ensure that you have set the $ORACLE_HOME environment variable correctly on the existing nodes.
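
For example, the following command adds an instance named orcl3 on the node node3 to the database orcl; the database, instance, and node names and the SYSDBA credentials shown here are placeholders for your own values:

dbca -silent -addInstance -nodeList node3 -gdbName orcl -instanceName orcl3
 -sysDBAUserName sys -sysDBAPassword password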

Deleting Cluster Nodes from Oracle Real Application Clusters Environments

This section provides the following topics that explain the steps you perform to delete nodes from clusters in an Oracle RAC environment:

Step 1: Delete Instances from Oracle Real Application Clusters Databases

Note:

Before deleting an instance from an Oracle RAC database, use either SRVCTL or Oracle Enterprise Manager to do the following:
  • If you have services configured, relocate the services

  • Modify the services so that each service can run on one of the remaining instances

  • Set "not used" for each service running on the instance that is to be deleted
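
For example, before deleting the instance orcl2 you might use SRVCTL commands similar to the following: the first command relocates a running service from the instance that you plan to delete to a remaining instance, and the second redefines the preferred instances for the service so that it no longer references orcl2. The database, service, and instance names are hypothetical:

srvctl relocate service -d orcl -s sales -i orcl2 -t orcl1
srvctl modify service -d orcl -s sales -n -i orcl1,orcl3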

The procedures in this section explain how to use DBCA in interactive or silent mode to delete an instance from an Oracle RAC database. To delete a database instance from a target node with Oracle Enterprise Manager, see the Oracle Database 2 Day + Real Application Clusters Guide.

This section includes the following topics:

Note:

If the instance uses ASM, and ASM is installed in a separate Oracle home, then these procedures do not remove the ASM instance from the specified node.

Using DBCA in Interactive Mode to Delete Instances from Nodes

To delete an instance using DBCA in interactive mode, perform the following steps:

  1. Verify there is a current backup of the OCR.

    Run the ocrconfig -showbackup command to ensure there is a valid backup.

  2. Start DBCA.

    Start DBCA on a node other than the node that hosts the instance that you want to delete. The database and the instance that you plan to delete should continue to be started and running during this step.

  3. On the DBCA Welcome page, select Oracle Real Application Clusters database and click Next. DBCA displays the Operations page.

  4. On the DBCA Operations page, select Instance Management and click Next. DBCA displays the Instance Management page.

  5. On the DBCA Instance Management page, select Delete Instance and click Next.

  6. On the List of Cluster Databases page, select the Oracle RAC database from which to delete the instance, as follows:

    1. On the List of Cluster Database Instances page, DBCA displays the instances that are associated with the Oracle RAC database that you selected and the status of each instance. Select the instance that you want to delete.

    2. Enter a user name and password for the database user that has SYSDBA privileges. Click Next.

    3. Click OK on the Confirmation dialog to proceed to delete the instance.

      DBCA displays a progress dialog showing that DBCA is deleting the instance. During this operation, DBCA removes the instance and the instance's Oracle Net configuration. When DBCA completes this operation, DBCA displays a dialog asking whether you want to perform another operation.

      Click No and exit DBCA or click Yes to perform another operation. If you click Yes, then DBCA displays the Operations page.

  7. Verify that the dropped instance's redo thread has been removed by querying the V$LOG view. If the redo thread is not disabled, then disable the thread. For example:

    SQL> ALTER DATABASE DISABLE THREAD 2;
    
  8. Verify that the instance has been removed from the OCR by issuing the following commands:

    srvctl config database -d database_name
    cd CRS_HOME/bin
    ./crs_stat
    
  9. If this node had an ASM instance and the node will no longer be a part of the cluster, you must remove the ASM instance by issuing the following commands:

    srvctl stop asm -n node_name
    srvctl remove asm -n node_name
    

    Verify that ASM has been removed by issuing the following command:

    srvctl config asm -n node_name
    
  10. If you are deleting more than one node, then repeat these steps to delete the instances from all the nodes that you are going to delete.

Using DBCA in Silent Mode to Delete Instances from Nodes

You can use DBCA in silent mode to delete a database instance from a node.

Run the following command, where the variables are the same as those shown in Table 8-1 for the DBCA command to add an instance. Provide a node name only if you are deleting an instance from a node other than the node where DBCA is running, as shown in the following example:

dbca -silent -deleteInstance [-nodeList node] -gdbName gdbname -instanceName
 instname -sysDBAUserName sysdba -sysDBAPassword password
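
For example, the following command deletes the instance orcl2 of the database orcl from the node node2; the names and credentials shown here are placeholders for your own values:

dbca -silent -deleteInstance -nodeList node2 -gdbName orcl -instanceName orcl2
 -sysDBAUserName sys -sysDBAPassword password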

At this point, you have accomplished the following:

  • Deregistered the selected instance from its associated Oracle Net Services listeners

  • Deleted the selected database instance from the instance's configured node

  • Removed the Oracle Net configuration

  • Deleted the Optimal Flexible Architecture (OFA) directory structure from the instance's configured node.

Step 2: Delete Nodes from the Cluster

Once you have deleted the instance, you can begin the process of deleting the node from the cluster. You accomplish this by running scripts on the node that you want to delete to remove the Oracle Clusterware installation, and by running scripts on the remaining nodes to update the node list.

The following steps assume that the node to be removed (node2 in this discussion) is still functioning. Before beginning these procedures, ensure that the $ORACLE_HOME environment variable is set correctly on the existing nodes.

Use the following procedures to delete nodes from Oracle clusters on Linux or UNIX systems:

  1. Stop the node applications on the node you are deleting.

    As the root user, run the srvctl stop nodeapps command to stop the node applications (nodeapps) on the node you are deleting. For example:

    # srvctl stop nodeapps -n nodename
    
  2. Remove the listener from the node.

    If this is the Oracle home from which the node-specific listener named LISTENER_nodename runs, then use NETCA to remove this listener. If necessary, re-create this listener in another home. Invoke NETCA and proceed as follows:

    1. Choose Cluster Configuration.

    2. Select only the node you are removing and click Next.

    3. Choose Listener Configuration and click Next.

    4. Choose Delete and delete any listeners configured on the node you are removing.

    See Also:

    Oracle Database Net Services Administrator's Guide for more information about NETCA
  3. To delete a node that has Oracle Configuration Manager (OCM) configured, run the following commands:

    $ORACLE_HOME/ccr/bin/deployPackages -d $ORACLE_HOME/ccr/inventory/core.jar
    rm -rf $ORACLE_HOME/ccr
    

    Depending on whether you have a shared or nonshared Oracle home, complete one of the following two procedures:

    • For a shared home:

      To delete a node from a cluster:

      ./runInstaller -detachHome -local ORACLE_HOME=Oracle_home
      
    • For a nonshared home:

      To delete a node from a cluster:

      ./runInstaller -updateNodeList ORACLE_HOME=Oracle_home CLUSTER_NODES="" -local
      

    The runInstaller command is located in the Oracle_home/oui/bin directory. Using this command does not launch an installer GUI.

  4. To deinstall the Oracle home from the node you are deleting, run the following command from the Oracle_home/oui/bin directory:

    ./runInstaller -deinstall -silent "REMOVE_HOMES={Oracle_home}" -local
    
  5. Verify that all database resources are running on nodes that you are not deleting.

    1. Run the crs_stat command from the CRS_HOME/bin directory to check where the database resources are running. The following example shows the status of the database db_name, which is running on the node node2:

      NAME=ora.db_name.db
      TYPE=application
      TARGET=ONLINE
      STATE=ONLINE on node2
      
    2. Ensure that the database resource is not running on a node you are deleting. Run the crs_relocate command from the CRS_HOME/bin directory to relocate the resource, as shown in the following example:

      crs_relocate ora.db_name.db
      
  6. Remove the node applications on the node you are deleting.

    As the root user, run the following command:

    # srvctl remove nodeapps -n nodename
    
  7. Update the node list on the remaining nodes in the cluster.

    1. Run the following command to set the display environment:

      DISPLAY=ipaddress:0.0; export DISPLAY
      
    2. Define the database Oracle homes ($ORACLE_HOME) in the Oracle inventory for the nodes that are still in the cluster. If there is no $ORACLE_HOME, you can skip this step.

      As the ORACLE user, run the installer with the updateNodeList option on any remaining nodes in the cluster, and include a comma-delimited list of nodes that remain in the cluster:

      $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=Oracle_Home CLUSTER_NODES=node1,node3,node4
      
  8. Stop Oracle Clusterware on the node you are deleting.

    As the root user on the node that you are deleting, run the rootdelete.sh script in the CRS_home/install/ directory to disable the Oracle Clusterware applications that are on the node. For example:

    # cd CRS_home/install
    # ./rootdelete.sh local nosharedvar nosharedhome
    

    Run this command only once, and use the nosharedhome argument if you are using a local file system. The nosharedvar option assumes the ocr.loc file is not on a shared file system. The default for this command is sharedhome, which prevents you from updating the permissions of local files so that they can be removed by the oracle user. If the ocr.loc file is on a shared file system, then run the CRS_home/install/rootdelete.sh remote sharedvar command.

    If you are deleting more than one node from your cluster, then repeat this step on each node that you are deleting.

  9. Delete the node and remove it from the OCR.

    As the root user on any node that you are not deleting:

    1. To determine the node number of any node, run the olsnodes -n command from the CRS_home/bin directory. For example:

      # olsnodes -n
      node1 1
      node2 2
      
    2. Run the rootdeletenode.sh script from the CRS_home/install/ directory. The rootdeletenode.sh script calls clscfg -delete, which deletes the node from the Oracle cluster and updates the OCR.

      The following example removes only one node, node2:

      # cd CRS_home/install
      # ./rootdeletenode.sh node2,2
      

      To delete only one node, enter the node name and number of the node that you want to delete with the command CRS_home/install/rootdeletenode.sh node1,node1-number. To delete multiple nodes, run the command CRS_home/install/rootdeletenode.sh node1,node1-number,node2,node2-number,...,nodeN,nodeN-number, where node1 through nodeN is a list of the nodes that you want to delete, and node1-number through nodeN-number represents the corresponding node numbers.

      If you do not perform this step, the olsnodes command will continue to display the deleted node as a part of the cluster.

    3. Confirm that the node has been deleted by issuing the olsnodes command:

      ./olsnodes -n
      node1 1
      
  10. Define the CRS home in the Oracle inventory for the nodes that are still in the cluster.

    As the ORACLE user, perform the following steps:

    1. Run the following command to set up the display environment:

      DISPLAY=ipaddress:0.0; export DISPLAY
      
    2. From the CRS home on any remaining node, run the installer with the updateNodeList option, and include a comma-delimited list of the nodes that remain in the cluster:

      CRS_home/oui/bin/runInstaller -updateNodeList ORACLE_HOME=CRS_home 
      CLUSTER_NODES=node1,node3,node4 CRS=TRUE
      
  11. Delete the Oracle home and the CRS home from the deleted node.

    Once the node updates are complete, you must manually delete the Oracle home and CRS home from the node that you have deleted. Note: If either of these home directories is located on a shared file system, then skip this step.

    1. In the Oracle home directory, run the following command:

      $ORACLE_HOME: rm -rf *
      
    2. In the CRS home directory, run the following command:

      $CRS_HOME: rm -rf *
      
  12. Ensure that all initialization scripts and soft links are removed from the deleted node.

    For example, as the root user on a Linux system, run the following commands:

    rm -f /etc/init.d/init.cssd
    rm -f /etc/init.d/init.crs 
    rm -f /etc/init.d/init.crsd 
    rm -f /etc/init.d/init.evmd 
    rm -f /etc/rc2.d/K96init.crs
    rm -f /etc/rc2.d/S96init.crs
    rm -f /etc/rc3.d/K96init.crs
    rm -f /etc/rc3.d/S96init.crs
    rm -f /etc/rc5.d/K96init.crs
    rm -f /etc/rc5.d/S96init.crs
    rm -Rf /etc/oracle/scls_scr
    

    Optionally, you can also remove the /etc/oracle directory, the /etc/oratab file, and the Oracle inventory from the deleted node.

  13. Optionally, remove the deleted node from the node lists of any additional Oracle homes, ASM homes, or Oracle Enterprise Manager homes (if used) in the Oracle inventory on all of the remaining nodes.

    On all remaining nodes, run the installer to update the node list. The following example assumes that you have deleted node2 from the cluster:

    ./runInstaller -updateNodeList -local ORACLE_HOME=$ORACLE_HOME 
    CLUSTER_NODES=node1,node3,node4
    
  14. Verify that you have deleted the node from the cluster.

    Run the following command to verify that the node is no longer a member of the cluster and to verify that the Oracle Clusterware components have been removed from this node:

    cluvfy comp crs -n all [-verbose]
    

    The response from this command should not contain any information about the node that you deleted; the deleted node should no longer have the Oracle Clusterware components on it.

    See Also:

    Oracle Clusterware Administration and Deployment Guide for more information about enabling and using the CVU