4 Adding and Deleting Oracle Clusterware Homes

This chapter describes how to use the addNode.sh script to extend an existing Oracle Clusterware home to other nodes, and how to use the rootdeletenode.sh script to remove Oracle Clusterware from nodes. This chapter provides instructions for Linux and UNIX systems, and for Windows systems.

You should use the procedures described in this chapter to add Oracle Clusterware to, or delete Oracle Clusterware from, nodes in the cluster. If your goal is to create new clusters or to extend Oracle Clusterware to a large number of nodes, then use the cloning procedures that are described in Chapter 3, "Cloning Oracle Clusterware".

The topics in this chapter include the following:

Prerequisite Steps for Adding Oracle Clusterware

The following steps assume that you already have a working Linux, UNIX, or Windows environment.

Complete the following steps to prepare the new nodes in the cluster:

  1. Make physical connections

    Connect the new nodes' hardware to the network infrastructure of your cluster. This includes establishing electrical connections, configuring network interconnects, configuring shared disk subsystem connections, and so on. See your hardware vendor documentation for details about this step.

  2. Install the operating system

    Install a cloned image of the operating system that matches the operating system on the other nodes in your cluster. This includes installing required service patches and drivers. See your hardware vendor documentation for details about this process.

  3. Create Oracle users

    Note:

    Perform this step only for Linux and UNIX systems. For Windows systems, skip to step 4.

    As root user, create the Oracle users and groups using the same user ID and group ID as on the existing nodes.
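
    For example, the following is a minimal sketch, assuming that the existing nodes use an oinstall group with group ID 1000, a dba group with group ID 1001, and an oracle user with user ID 1100; substitute the IDs that the id command reports on an existing node:

      # groupadd -g 1000 oinstall
      # groupadd -g 1001 dba
      # useradd -u 1100 -g oinstall -G dba oracle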

  4. Verify the installation with the Cluster Verification Utility (CVU) using the following steps:

    1. From the /bin directory in the CRS_home on the existing nodes, run the CVU command to verify your installation at the post-hardware installation stage, as shown in the following example, where node_list is a comma-delimited list of the nodes you want in your cluster:

      cluvfy stage -post hwos -n node_list|all [-verbose]
      

      This command causes CVU to verify your hardware and operating system environment at the post-hardware setup stage. After you have configured the hardware and operating systems on the new nodes, you can use this command to verify that each node is reachable, for example, from the local node. You can also use this command to verify user equivalence from the local node to all of the given nodes, node connectivity among all of the given nodes, accessibility to shared storage from all of the given nodes, and so on.

      Note:

      You can use the all option with the -n argument only if you have set the CV_NODE_ALL variable to represent the list of nodes on which you want to perform the CVU operation.
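
      For example, to verify the post-hardware stage on nodes node1 and node2 with detailed output (the node names are illustrative):

      cluvfy stage -post hwos -n node1,node2 -verbose
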
    2. From the /bin directory in the CRS_home on the existing nodes, run the CVU command to obtain a detailed comparison of the properties of the reference node with those of all of the other nodes that are part of your current cluster environment. In the following syntax, ref_node is a node in your existing cluster against which you want CVU to compare the newly added nodes, node_list is a comma-delimited list of the newly added nodes that you specify with the -n option, orainventory_group is the name of the Oracle inventory group, and osdba_group is the name of the OSDBA group:

      cluvfy comp peer [ -refnode ref_node ] -n node_list
      [ -orainv orainventory_group ] [ -osdba osdba_group ] [-verbose]
      

    Note:

    For the reference node, select a node from your existing cluster nodes against which you want CVU to compare, for example, the newly added nodes that you specify with the -n option.
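
    For example, to compare a new node node2 against the existing node node1, using oinstall as the Oracle inventory group and dba as the OSDBA group (all names are illustrative):

      cluvfy comp peer -refnode node1 -n node2 -orainv oinstall -osdba dba -verbose
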
  5. Check the installation

    To verify that your installation is configured correctly, perform the following steps:

    1. Ensure that the new nodes can access the private interconnect. This interconnect must be properly configured before you can complete the procedures described in this chapter.

    2. If you are not using a cluster file system, then determine the location in which your cluster software was installed on the existing nodes. Make sure that you have at least 250 MB of free space in the same location on each of the new nodes to install Oracle Clusterware. In addition, ensure that you have enough free space on each new node to install the Oracle binaries.

    3. Ensure that the Oracle Cluster Registry (OCR) and the voting disk are accessible by the new nodes using the same path as the other nodes use. In addition, the OCR and voting disk devices must have the same permissions as on the existing nodes.

    4. Verify user equivalence to and from an existing node to the new nodes using rsh or ssh on Linux and UNIX systems. On Windows systems, make sure that you can run the following command from all of the existing nodes of your cluster, where hostname is the public network name of the new node:

      NET USE \\hostname\C$
      

      You have the required administrative privileges on each node if the operating system responds with:

      Command completed successfully.
      

    After completing the procedures in this section, your new nodes are connected to the cluster and configured with the required software to make them visible to Oracle Clusterware.

    Note:

    Avoid changing host names after you complete the Oracle Clusterware installation, including adding or deleting domain qualifications. Nodes with changed host names must be deleted from the cluster and added back with the new name.

Adding and Deleting Oracle Clusterware Homes on Linux and UNIX Systems

This section explains Oracle Clusterware home addition and deletion on Linux and UNIX systems and it assumes that you have already performed the steps in the "Prerequisite Steps for Adding Oracle Clusterware" section.

For node addition, ensure that you install the required operating system patches and updates on the new nodes. Then configure the new nodes to be part of your cluster at the network level. Use the instructions in this section to extend the Oracle Clusterware home from an existing Oracle Clusterware home to the new nodes.

Finally, you can optionally extend the Oracle database software with Oracle Real Application Clusters (Oracle RAC) components to the new nodes and make the new nodes members of the existing Oracle RAC database. See the node addition procedures described in Oracle Real Application Clusters Administration and Deployment Guide.

This section includes the following topics:

Adding an Oracle Clusterware Home to a New Node On Linux or UNIX Systems

This section describes how to use Oracle Universal Installer to add an Oracle Clusterware home to a node in your cluster. This documentation assumes:

  • There is an existing cluster that has a node named node1

  • You are adding Oracle Clusterware to a new node named node2

  • You have already successfully installed Oracle Clusterware on node1 in a nonshared home, where CRS_home represents the successfully installed home

You can use either of the following procedures to add an Oracle Clusterware home to a node:

Adding an Oracle Clusterware Home to a New Node Using Oracle Universal Installer in Interactive Mode

This procedure assumes that you have performed the tasks outlined in "Prerequisite Steps for Adding Oracle Clusterware". Oracle Universal Installer requires access to the private interconnect that you verified as part of the installation validation in the prerequisite steps. If Oracle Universal Installer cannot make the required connections, you will not be able to complete the following steps to add Oracle Clusterware to other nodes.

Note:

Instead of performing the first six steps of this procedure, you can alternatively run the addNode.sh script in silent mode, as described in "Adding an Oracle Clusterware Home to a New Node Using Oracle Universal Installer in Silent Mode".
  1. Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. Also, for these procedures to complete successfully, you must ensure that CRS_home identifies your successfully installed Oracle Clusterware home.

  2. Start Oracle Universal Installer:

    Go to CRS_home/oui/bin and run the addNode.sh script on one of the existing nodes. Oracle Universal Installer runs in add node mode and the Welcome page displays. Click Next and the Specify Cluster Nodes for Node Addition page displays.

  3. On the Specify Cluster Nodes for Node Addition page, enter the node or nodes that you want to add and click Next.

    The upper table on the Specify Cluster Nodes for Node Addition page shows the existing nodes, the private node names, and the virtual IP (VIP) addresses that are associated with Oracle Clusterware. Use the lower table to enter the public node names, private node names, and virtual hostnames of the new nodes.

  4. Verify the entries that Oracle Universal Installer displays on the Summary Page and click Next.

    If any verifications fail, Oracle Universal Installer redisplays the Specify Cluster Nodes for Node Addition page with a Status column in both tables indicating errors. Correct the errors or deselect the nodes that have errors and proceed. However, you cannot deselect existing nodes; you must correct problems on nodes that are already part of your cluster before you can proceed with node addition. If all the checks succeed, Oracle Universal Installer displays the Node Addition Summary page.

  5. The Node Addition Summary page displays the following information showing the products that are installed in the Oracle Clusterware home that you are extending to the new nodes:

    • The source for the add node process, which in this case is the Oracle Clusterware home

    • The private node names that you entered for the new nodes

    • The new nodes that you entered

    • The required and available space on the new nodes

    • The installed products listing the products that are already installed on the existing Oracle Clusterware home

    Click Next and Oracle Universal Installer displays the Cluster Node Addition Progress page.

  6. The Cluster Node Addition Progress page shows the status of the cluster node addition process. The table on this page has two columns showing the four phases of the node addition process and the phases' statuses as follows:

    • Instantiate Root Scripts—Instantiates rootaddNode.sh with the public node names, private node names, and virtual hostnames that you entered on the Specify Cluster Nodes for Node Addition page.

    • Copy the Oracle Clusterware home to the New Nodes—Copies the Oracle Clusterware home to the new nodes unless the Oracle Clusterware home is on a cluster file system.

    • Save Cluster Inventory—Updates the node list associated with the Oracle Clusterware home and its inventory.

    • Run rootaddNode.sh and root.sh—Displays a dialog prompting you to run the rootaddNode.sh script (see Footnote 1) from the local node (the node on which you are running Oracle Universal Installer) and to run the root.sh script (see Footnote 2) on the new nodes. If Oracle Universal Installer detects that the new nodes do not have an inventory location, Oracle Universal Installer instructs you to run the orainstRoot.sh script (see Footnote 3) on those nodes. The central inventory location is the same as that of the local node. The addNodeActionstimestamp.log file, where timestamp shows the session start date and time, contains information about which scripts you need to run and on which nodes you need to run them (see the example following this step).

    The Cluster Node Addition Progress page's Status column displays In Progress while the phase is in progress, Suspended when the phase is pending execution, and Succeeded after the phase completes. On completion, click Exit. After Oracle Universal Installer displays the End of Node Addition page, click Exit.
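
    For example, the sequence of script runs typically resembles the following sketch, in which /u01/app/oraInventory is a hypothetical central inventory location and node2 is the new node; always follow the exact instructions in the addNodeActions log:

      [node1] # CRS_home/install/rootaddNode.sh
      [node2] # /u01/app/oraInventory/orainstRoot.sh
      [node2] # CRS_home/root.sh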

  7. From the CRS_home/install directory on an existing node, run the Oracle Notification Service configuration utility (ONSCONFIG) as in the following example, where remote_port is the ONS remote port number (6251 by default, or another free port if port 6251 is unavailable) and node2 is the name of the node that you are adding:

    ./onsconfig add_config node2:remote_port
    
  8. Check that your cluster is integrated and that the cluster is not divided into partitions by completing the following operations:

    • Run the following CVU command to obtain detailed output for verifying cluster manager integrity on all of the nodes that are part of your Oracle RAC environment:

      cluvfy comp clumgr -n all [-verbose]
      
    • Use the following CVU command to obtain detailed output for verifying cluster integrity on all of the nodes that are part of your Oracle RAC environment:

      cluvfy comp clu [-verbose]
      
    • Use the following command to perform an integrated validation of the Oracle Clusterware setup on all of the configured nodes, both the preexisting nodes and the nodes that you have added:

      cluvfy stage -post crsinst -n all [-verbose]
      

    See Also:

    Oracle Clusterware Administration and Deployment Guide for more information about enabling and using the CVU

Adding an Oracle Clusterware Home to a New Node Using Oracle Universal Installer in Silent Mode

  1. Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To perform the following procedure, the CRS_home must identify your successfully installed Oracle Clusterware home.

  2. Go to CRS_home/oui/bin and run the addNode.sh script using the following syntax where node2 is the name of the new node that you are adding, node2-priv is the private node name for the new node, and node2-vip is the VIP name for the new node:

    ./addNode.sh -silent "CLUSTER_NEW_NODES={node2}" 
    "CLUSTER_NEW_PRIVATE_NODE_NAMES={node2-priv}" 
    "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}" 
    

    Alternatively, you can specify the entries shown in Example 4-1 in a response file and run the addNode.sh script as follows:

    addNode.sh -silent -responseFile filename
    

    Example 4-1 Response File Entries for Adding Oracle Clusterware Home

    CLUSTER_NEW_NODES = {"newnode1","newnode2"}
    CLUSTER_NEW_PRIVATE_NODE_NAMES = {"newnode1-priv","newnode2-priv"}
    CLUSTER_NEW_VIRTUAL_HOSTNAMES = {"newnode1-vip","newnode2-vip"}
    

    See Also:

    Oracle Universal Installer and OPatch User's Guide for details about how to configure command-line response files

    Note:

    Command-line values always override response file values.
  3. Perform steps 7 and 8 in the "Adding an Oracle Clusterware Home to a New Node Using Oracle Universal Installer in Interactive Mode" section.

Deleting an Oracle Clusterware Home from a Linux or UNIX System

The procedures for deleting an Oracle Clusterware home assume that you have successfully installed Oracle Clusterware on the node from which you want to delete the Oracle Clusterware home.

Note:

Oracle recommends that you back up your voting disk and OCR files after you complete the node deletion process.

This section includes the following topics:

Deleting an Oracle Clusterware Home Using Oracle Universal Installer in Interactive Mode

Use the following steps to remove Oracle Clusterware from a cluster node.

Step 1   Verify the location of the Oracle Clusterware home

Ensure that CRS_home correctly specifies the full directory path for the Oracle Clusterware home on each node, where CRS_home is the location of the installed Oracle Clusterware software.

Step 2   Remove the stored network configuration

Note:

This step is unnecessary if you ran the Oracle Interface Configuration Tool (OIFCFG) with the -global option during the installation, such as with Oracle Universal Installer.

From a node that is going to remain in your cluster, in the CRS_home/bin directory, run the following command where node2 is the name of the node that you are deleting:

./oifcfg delif -node node2
Step 3   Obtain the remote port number

Create a dump of the OCR by running the ocrdump dump_file_name command. Open the dump file and search for ONS_HOSTS.hostName.PORT. The subsequent line contains the remote port number.
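
For example, the following sketch locates the port, assuming that the node is named node2 and that the default port 6251 is in use (both values are illustrative); the last two lines show the grep output:

CRS_home/bin/ocrdump ons.dmp
grep -A 1 "ONS_HOSTS.node2.PORT" ons.dmp
[...ONS_HOSTS.node2.PORT]
ORATEXT : 6251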

Step 4   Remove the ONS daemon configuration

From CRS_home/bin on a node that is going to remain in the cluster, run the Oracle Notification Service Utility (RACGONS). In the following example, the remote_port variable represents the ONS remote port number that you obtained in step 3 and node2 is the name of the node that you are deleting:

./racgons remove_config node2:remote_port
Step 5   Disable the Oracle Clusterware applications

On the node to be deleted, run the rootdelete.sh script as the root user from the CRS_home/install directory to disable the Oracle Clusterware applications and daemons running on the node. If you are deleting Oracle Clusterware from more than one node, then perform this step on each node that you are deleting.

Step 6   Delete the node and update the cluster registry

From any node that you are not deleting, issue the following command from the CRS_home/install directory as the root user to delete the node from the cluster and to update the Oracle Cluster Registry (OCR). In the following command, node2 is the name and node2-number is the number of the node that you want to delete:

./rootdeletenode.sh node2,node2-number

If necessary, identify the node number using the following command on the node that you are deleting:

CRS_home/bin/olsnodes -n
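
For example, if olsnodes -n returns output such as the following (node names and numbers are illustrative), then you would run ./rootdeletenode.sh node2,2:

node1   1
node2   2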
Step 7   Remove the node from the node list

On the node that is to be deleted, run the following command from the CRS_home/oui/bin directory where node_to_be_deleted is the name of the node that you are deleting:

./runInstaller -updateNodeList ORACLE_HOME=CRS_home 
"CLUSTER_NODES={node_to_be_deleted}" 
CRS=TRUE -local
Step 8   Detach or deinstall the Oracle Clusterware software

On the node that you are deleting, run Oracle Universal Installer using the runInstaller command from the CRS_home/oui/bin directory. Depending on whether you have a shared or nonshared Oracle home, complete one of the following procedures:

  • If you have a shared home, then on any node other than the node to be deleted, run the following command from the CRS_home/oui/bin directory:

    ./runInstaller -detachHome ORACLE_HOME=CRS_home
    
  • For a nonshared home, deinstall the Oracle Clusterware home from the node that you are deleting by issuing the following command from the CRS_home/oui/bin directory, where CRS_home is the name defined for the Oracle Clusterware home:

    ./runInstaller -deinstall "REMOVE_HOMES={CRS_home}"
    
Step 9   Update the node list on the remaining nodes

On any node other than the node you are deleting, run the following command from the CRS_home/oui/bin directory where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your cluster:

./runInstaller -updateNodeList ORACLE_HOME=CRS_home 
"CLUSTER_NODES={remaining_nodes_list}" 
CRS=TRUE

Deleting an Oracle Clusterware Home from a Node that Cannot Be Accessed

Sometimes a node cannot be accessed for various reasons, such as the local disks have been corrupted, the server has failed, or the server has been physically removed. The definition of the failed node must still be removed from the repository.

To delete a node definition from the repository, perform the following steps on a healthy node in the cluster:

Step 1   Remove OIFCFG information for the failed node

Run the following command, as root:

# oifcfg getif

If the command returns a global interface definition, such as eth0 10.0.0.0 global public, go on to Step 2. Otherwise, run the following command, as root:

# oifcfg delif -node name_of_node_to_be_deleted
Step 2   Remove ONS information

Run the following command, as root, to identify the local port number to use in this procedure:

# cat CRS_home/opmn/conf/ons.config
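
The output resembles the following sketch (the port values are illustrative); use the localport value in the next command:

localport=6113
remoteport=6251
loglevel=3
useocr=on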

Run the following command, as root, to remove the ONS information from the failed node:

# CRS_home/bin/racgons remove_config name_of_node_to_be_deleted:local_port_number
Step 3   Remove resources

Resources defined on the failed node must be removed. These resources include the database, the instances, and ASM; you can identify them by running the crs_stat -t command. After you have identified the resources, remove them by using the srvctl remove command.
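
For example, assuming that the failed node hosted an instance orcl2 of a database orcl as well as an ASM instance (all names are illustrative):

CRS_home/bin/srvctl remove instance -d orcl -i orcl2
CRS_home/bin/srvctl remove asm -n name_of_node_to_be_deleted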

See Also:

Oracle Real Application Clusters Administration and Deployment Guide for more information about the srvctl remove command
Step 4   Run rootdeletenode.sh

If you do not know the name of the node that you are trying to delete, run the following command, as root:

# CRS_home/bin/olsnodes -p
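
The output lists each node name with its private interconnect name, for example (illustrative names):

node1   node1-priv
node2   node2-priv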

Otherwise, run the following command, as root:

# CRS_home/install/rootdeletenode.sh name_of_node_to_be_deleted
Step 5   Update the inventory

Run the following command, as the owner of the CRS_home, specifying a comma-separated list of the node names that are going to remain in the cluster as the value of the CLUSTER_NODES parameter:

$ CRS_home/oui/bin/runInstaller -updateNodeList ORACLE_HOME=CRS_home "CLUSTER_NODES={remaining_node1,remaining_node2,...}" CRS=TRUE
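
For example, if node1 and node3 remain in the cluster and the Oracle Clusterware home is /u01/app/crs (both values are illustrative):

$ /u01/app/crs/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/crs "CLUSTER_NODES={node1,node3}" CRS=TRUE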

Adding and Deleting Oracle Clusterware Homes on Windows Systems

This section explains Oracle Clusterware home additions and deletions on Windows systems.

Note:

Do not use the procedures described in this section to extend Oracle Clusterware in configurations where the Oracle database has been upgraded from Oracle Database 10g Release 1 (10.1) on Windows systems. In this scenario, you must use the cloning procedure described in Chapter 3, "Cloning Oracle Clusterware" to extend Oracle Clusterware to additional nodes in the cluster.

For node additions, ensure that you install the required operating system patches and updates on the new nodes. Then configure the new nodes to be part of your cluster at the network level. Use the instructions in the following sections to extend the Oracle Clusterware home from an existing Oracle Clusterware home to the new nodes.

Finally, you can optionally extend the Oracle database software with Oracle RAC components to the new nodes and make the new nodes members of the existing Oracle RAC database. See the node addition procedures described in Oracle Real Application Clusters Administration and Deployment Guide.

This section includes the following topics:

Adding an Oracle Clusterware Home to a Windows System

This section describes how to add new nodes to Oracle Clusterware using Oracle Universal Installer. Oracle Universal Installer requires access to the private interconnect that you checked in the "Prerequisite Steps for Adding Oracle Clusterware" section.

Perform the following steps:

  1. On one of the existing nodes go to the CRS_home\oui\bin directory and run the addnode.bat script to start Oracle Universal Installer.

  2. Oracle Universal Installer runs in the add node mode and the Welcome page displays. Click Next and the Specify Cluster Nodes for Node Addition page displays.

  3. The upper table on the Specify Cluster Nodes for Node Addition page shows the existing nodes, the private node names, and the virtual IP (VIP) addresses that are associated with Oracle Clusterware. Use the lower table to enter the public node names, private node names, and virtual hostnames of the new nodes.

  4. Click Next and Oracle Universal Installer verifies connectivity on the existing nodes and on the new nodes. The verifications that Oracle Universal Installer performs include determining whether:

    • The nodes are up

    • The nodes are accessible by way of the network

      Note:

      If any of the existing nodes are down, then you can proceed with the procedure. However, once the nodes are up, you must run the following command on each of those nodes:
      setup.exe -updateNodeList -local 
      "CLUSTER_NODES={available_node_list}"
      ORACLE_HOME=CRS_home
      

      Run this command from the CRS_home\oui\bin directory. The available_node_list value is a comma-delimited list of all of the nodes currently in the cluster, and CRS_home is the Oracle Clusterware home directory.

    • The virtual hostnames are not already in use on the network

    • The user has write permission to create the Oracle Clusterware home on the new nodes

    • The user has write permission to the Oracle Universal Installer inventory in the C:\Program Files\Oracle\Inventory directory

  5. If Oracle Universal Installer detects that the new nodes do not have an inventory location, then Oracle Universal Installer automatically updates the inventory location in the Registry key.

    If any verifications fail, Oracle Universal Installer redisplays the Specify Cluster Nodes for Node Addition page with a Status column in both tables indicating errors. Correct the errors or deselect the nodes that have errors and proceed. However, you cannot deselect existing nodes; you must correct problems on nodes that are already part of your cluster before you can proceed with node addition. If all of the checks succeed, Oracle Universal Installer displays the Node Addition Summary page.

  6. The Node Addition Summary page displays the following information showing the products that are installed in the Oracle Clusterware home that you are extending to the new nodes:

    • The source for the add node process, which in this case is the Oracle Clusterware home

    • The private node names that you entered for the new nodes

    • The new nodes that you entered

    • The required and available space on the new nodes

    • The installed products listing the products that are already installed in the existing Oracle Clusterware home

    Click Next and Oracle Universal Installer displays the Cluster Node Addition Progress page.

  7. The Cluster Node Addition Progress page shows the status of the cluster node addition process. The table on this page has two columns showing the following three phases of the node addition process and each phase's status:

    • Copy the Oracle Clusterware Home to New Nodes—Copies the Oracle Clusterware home to the new nodes unless the Oracle Clusterware home is on the Oracle Cluster File System.

    • Perform Oracle Home Setup—Updates the Registry entries for the new nodes, creates the services, and creates folder entries.

    • Save Cluster Inventory—Updates the node list associated with the Oracle Clusterware home and its inventory.

    The Cluster Node Addition Progress page's Status column displays In Progress while the phase is in progress, Suspended when the phase is pending execution, and Succeeded after the phase completes. On completion, click Exit. After Oracle Universal Installer displays the End of Node Addition page, click Exit.

  8. From CRS_home\install on node1, run the crssetup.add.bat script.

  9. Use the following command to perform an integrated validation of the Oracle Clusterware setup on all of the configured nodes, both the preexisting nodes and the nodes that you have added:

    cluvfy stage -post crsinst -n all [-verbose]
    

The CVU -post crsinst stage check verifies the integrity of the Oracle Clusterware components. After you have completed the procedures in this section for adding nodes at the Oracle Clusterware layer, you have successfully extended the Oracle Clusterware home from your existing Oracle Clusterware home to the new nodes, and you can proceed to prepare the storage for Oracle RAC on the new nodes.

You can optionally run addnode.bat in silent mode, replacing steps 1 through 6 as follows, where nodeI, nodeI+1, and so on are the new nodes that you are adding:

addnode.bat -silent "CLUSTER_NEW_NODES={nodeI,nodeI+1,…nodeI+n}"
"CLUSTER_NEW_PRIVATE_NODE_NAMES={node-privI,node-privI+1,…node-privI+n}" 
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={node-vipI,node-vipI+1,…,node-vipI+n}"

You can alternatively specify the entries shown in Example 4-2 in a response file and run addnode as follows:

addnode.bat -silent -responseFile filename

Example 4-2 Response File Entries for Adding Oracle Clusterware Home

CLUSTER_NEW_NODES = {"newnode1","newnode2"}
CLUSTER_NEW_PRIVATE_NODE_NAMES = {"newnode1-priv","newnode2-priv"}
CLUSTER_NEW_VIRTUAL_HOSTNAMES = {"newnode1-vip","newnode2-vip"}

Command-line values always override response file values.

See Also:

Oracle Universal Installer and OPatch User's Guide for details about how to configure command-line response files

Perform steps 8 and 9 on the local node, that is, on the node on which you performed this procedure.

After you have completed the procedures in this section for adding nodes at the Oracle Clusterware layer, you have successfully extended the Oracle Clusterware home from your existing Oracle Clusterware home to additional nodes.

Deleting an Oracle Clusterware Home from a Windows System

This section describes how to delete Oracle Clusterware from Windows systems. These procedures assume that an Oracle Clusterware home is installed on node1 and node2, and that you want to delete node2 from the cluster.

This section contains the following topics:

Note:

Oracle recommends that you back up your voting disk and Oracle Cluster Registry files after you complete any node addition or deletion procedures.

Deleting an Oracle Clusterware Home Using Oracle Universal Installer in Interactive Mode

Perform the following procedure to use Oracle Universal Installer to delete the Oracle Clusterware home from a node:

  1. Perform the delete node operation for database homes as described in Oracle Real Application Clusters Administration and Deployment Guide.

  2. Note:

    This step is unnecessary if you ran the Oracle Interface Configuration Tool (OIFCFG) with the -global option during the installation, such as with Oracle Universal Installer.

    From node1, in the CRS_home\bin directory, run the following command where node2 is the name of the node that you are deleting:

    oifcfg delif -node node2
    
  3. From node1, in the CRS_home\bin directory, run the following command, where remote_port is the ONS remote port number:

    racgons remove_config node2:remote_port
    

    To determine the remote port number, from %CRS_HOME%\bin, create an OCR dump by running ocrdump dump_file_name. Open the dump file and search for ONS_HOSTS.hostName.PORT. The subsequent line contains the remote port number for ONS. For example:

    ORATEXT : 6251
    

    By default the remote port is 6251.

  4. Run srvctl to stop and remove the nodeapps from node2. From the CRS_home\bin directory, run the following commands:

    srvctl stop nodeapps -n node2
    srvctl remove nodeapps -n node2
    
  5. On node1, or on any node that is not being deleted, run the following command from the CRS_home\bin directory, where node_name is the node to be deleted and node_number is the node's number as obtained from the output of the olsnodes -n command:

    crssetup del -nn node_name,node_number
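
    For example, if olsnodes -n reports that node2 is node number 2 (illustrative values):

    crssetup del -nn node2,2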
    
  6. On each node that you want to delete (node2 in this case), run the following command from CRS_home\oui\bin:

    setup.exe -updateNodeList ORACLE_HOME=CRS_home 
    "CLUSTER_NODES={node2}" CRS=TRUE -local
    
  7. On node2, use Oracle Universal Installer (setup.exe) from CRS_home\oui\bin as follows:

    • If you do not have a shared home, then deinstall the Oracle Clusterware installation by running the setup.exe script from CRS_home\oui\bin.

    • If you have a shared home, then do not perform a deinstallation. Instead, run the following command from CRS_home\oui\bin to detach the Oracle Clusterware home on node2:

      setup.exe -detachHome -silent ORACLE_HOME=CRS_home
      
  8. On node1, or in the case of a multiple node installation, on any node other than the one to be deleted, run the following command from CRS_home\oui\bin where node_list is a comma-delimited list of nodes that are to remain part of the Oracle Clusterware:

    setup.exe -updateNodeList ORACLE_HOME=CRS_home
    "CLUSTER_NODES={node_list}" CRS=TRUE
    
  9. On node2, stop and delete any services that are associated with this Oracle Clusterware home. In addition, delete any Registry entries and path entries that are associated with this Oracle Clusterware home, and delete all of the Start menu items associated with this Oracle Clusterware home. Delete the central inventory and the Oracle Clusterware home files. For a shared Oracle home, do not delete the CRS_home.
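
    For example, the following sketch stops and deletes one service with the sc utility; the service name OracleCRSService is illustrative, so first list the actual Oracle services on the node (for example, with sc query or the Services control panel):

    sc stop OracleCRSService
    sc delete OracleCRSService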

Deleting an Oracle Clusterware Home Using Oracle Universal Installer in Silent Mode

Use the following procedure to delete an Oracle Clusterware home by using Oracle Universal Installer in silent mode:

  1. Perform steps 1 through 6 from the previous procedure, "Deleting an Oracle Clusterware Home Using Oracle Universal Installer in Interactive Mode", to delete nodes at the Oracle Clusterware layer.

  2. On node2, using Oracle Universal Installer setup.exe from CRS_home\oui\bin, deinstall the Oracle Clusterware home as follows:

    setup.exe -silent -deinstall "REMOVE_HOMES={CRS_home}"
    

    If you have a shared Oracle home, run the following command from CRS_home\oui\bin to detach the Oracle Clusterware home on node2:

    setup.exe -detachHome -silent ORACLE_HOME=CRS_home
    
  3. Perform steps 8 and 9 from the previous procedure, "Deleting an Oracle Clusterware Home Using Oracle Universal Installer in Interactive Mode", to update the node list.



Footnote Legend

Footnote 1: Run the rootaddNode.sh script from the CRS_home/install/ directory on the node from which you are running Oracle Universal Installer.
Footnote 2: Run the root.sh script on the new node from the Oracle Clusterware home to start Oracle Clusterware on the new node.
Footnote 3: Run the orainstRoot.sh script on the new node if Oracle Universal Installer prompts you to do so.