3 Cloning Oracle Clusterware

This chapter describes how to clone an existing Oracle Clusterware home and use it to create a new cluster or to extend Oracle Clusterware to new nodes on the same cluster. You implement cloning through the use of scripts in silent mode.

The cloning procedures described in this chapter are applicable to Linux, UNIX, and Windows systems. Although the examples in this chapter use Linux and UNIX commands, the cloning concepts and procedures apply to all platforms. For the Windows platform, you need to adjust the examples or commands to be Windows specific.

This chapter contains the following topics:

  • Introduction to Cloning Oracle Clusterware

  • Preparing the Oracle Clusterware Home for Cloning

  • Cloning Oracle Clusterware to Create a New Cluster

  • Cloning to Extend Oracle Clusterware to More Nodes in the Same Cluster

  • Cloning Script Variables Reference

  • Locating and Viewing Log Files Generated During Cloning

Introduction to Cloning Oracle Clusterware

Cloning is the process of copying an existing Oracle installation to a different location and then updating the copied installation to work in the new environment. The changes made by one-off patches applied on the source Oracle home are also present after the clone operation. During cloning, you run a script that replays the actions that installed the Oracle Clusterware home.

Cloning requires that you start with a successfully installed Oracle Clusterware home that you use as the basis for implementing a script that extends the Oracle Clusterware home to either create a new cluster or to extend the Oracle Clusterware environment to more nodes in the same cluster. Manually creating the cloning script can be prone to errors, because you must prepare the script without the benefit of any interactive checks to validate your input. Despite this, the initial effort is worthwhile for scenarios where you run a single script to install tens or even hundreds of clusters. If you have only one cluster to install, then you should use the traditional automated and interactive installation methods, such as Oracle Universal Installer or the Provisioning Pack feature of Oracle Enterprise Manager.

Note:

Cloning is not a replacement for the Oracle Enterprise Manager cloning that is part of the Provisioning Pack. During Oracle Enterprise Manager cloning, the provisioning process interactively prompts you for details about the Oracle home (such as the location to which you want to deploy the clone, the name of the Oracle database home, a list of the nodes in the cluster, and so on).

The Provisioning Pack feature of Oracle Enterprise Manager Grid Control provides a framework that automates the provisioning of new nodes and clusters. For data centers with many clusters, the investment in creating a cloning procedure to provision new clusters and new nodes to existing clusters is worth the effort.

The following list describes some situations in which cloning is useful:

  • Cloning provides a way to prepare an Oracle Clusterware home once and deploy it to many hosts simultaneously. You can complete the installation in silent mode, as a noninteractive process. You do not need to use a graphical user interface (GUI) console, and you can perform cloning from a Secure Shell (SSH) terminal session, if required.

  • Cloning enables you to create a new installation (copy of a production, test, or development installation) with all patches applied to it in a single step. Once you have performed the base installation and applied all patch sets and patches on the source system, the clone performs all of these individual steps as a single procedure. This is in contrast to going through the installation process to perform the separate steps to install, configure, and patch the installation on each node in the cluster.

  • Installing Oracle Clusterware by cloning is a quick process. For example, cloning an Oracle Clusterware home to a new cluster with more than two nodes requires a few minutes to install the Oracle software, plus a few minutes more for each node (approximately the amount of time it takes to run the root.sh script).

  • Cloning provides a guaranteed method of repeating the same Oracle Clusterware installation on multiple clusters.

The cloned installation acts the same as the source installation. For example, you can remove the cloned Oracle Clusterware home using Oracle Universal Installer or patch it using OPatch. You can also use the cloned Oracle Clusterware home as the source for another cloning operation. You can create a cloned copy of a test, development, or production installation by using the command-line cloning scripts. The default cloning procedure is adequate for most cases. However, you can also customize some aspects of the cloning process, for example, to specify custom port assignments or to preserve custom settings.
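
For example, to confirm that the one-off patches applied on the source are registered in a cloned home, you might list its patch inventory with OPatch; the home path shown is only illustrative:

  # Example only: list interim patches recorded in a cloned Oracle Clusterware home
  /opt/oracle/product/11g/crs/OPatch/opatch lsinventory -oh /opt/oracle/product/11g/crs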

The cloning process works by copying all of the files from the source Oracle Clusterware home to the destination Oracle Clusterware home. Thus, any files used by the source instance that are located outside the source Oracle Clusterware home's directory structure are not copied to the destination location.

The size of the binary files at the source and the destination may differ because they are relinked as part of the cloning operation, and the operating system patch levels may also differ between the two locations. Additionally, the number of files in the cloned home may increase, because several of the files copied from the source, specifically those being instantiated, are backed up as part of the clone operation.

Preparing the Oracle Clusterware Home for Cloning

To prepare the source Oracle Clusterware home to be cloned, you create a copy of an installed Oracle Clusterware home that you then use to perform the cloning procedure on one or more nodes.

Use the following step-by-step procedure to prepare a copy of the Oracle Clusterware home.

Step 1   Install Oracle Clusterware.

Use the detailed instructions in your platform-specific Oracle Clusterware installation guide to perform the following steps on the source node:

  1. Install Oracle Clusterware 11g Release 1 (11.1).

  2. Install any required patch sets (for example, 11.1.0.n).

  3. Apply one-off patches, if necessary.

Step 2   Shut down Oracle Clusterware.

Before copying the source Oracle Clusterware home, shut down Oracle Clusterware using the crsctl stop crs command. The following example shows the command and the messages that display during the shutdown:

[root@node1 root]# crsctl stop crs
Stopping resources.
This could take several minutes.
Successfully stopped Oracle Clusterware resources
Stopping Cluster Synchronization Services.
Shutting down the Cluster Synchronization Services daemon.
Shutdown request successfully issued.

Note that you copy the Oracle Clusterware home from only one of the nodes.

Step 3   Create a copy of the Oracle Clusterware home.

To keep the installed Oracle Clusterware home as a working home, make a full copy of the source Oracle Clusterware home that you will use for cloning. Because the Oracle Clusterware home contains files that are relevant only to the source node, you can optionally remove the unnecessary files from the copy.

Note:

When creating the copy, the best practice is to include the release number in the name of the file.

Use one of the following methods to create a compressed copy of the Oracle Clusterware home.

Method 1: Create a copy of the Oracle Clusterware home and remove the unnecessary files from the copy:

  1. On the source node, create a copy of the Oracle Clusterware home. To keep the installed Oracle Clusterware home as a working home, make a full copy of the source Oracle Clusterware home and remove the unnecessary files from the copy. For example, as the root user on Linux systems, run the cp command:

    cp -prf CRS_HOME location_of_the_copy_of_CRS_HOME
    
  2. Delete unnecessary files from the copy.

    The Oracle Clusterware home contains files that are relevant only to the source node, so you can remove the unnecessary files from the copy in the log, crs/init, racg/dump, srvm/log, and cdata directories. The following example for Linux and UNIX systems shows the commands you can run to remove unnecessary files from the copy of the Oracle Clusterware home:

    [root@node1 root]# cd /opt/oracle/product/11g/crs
    [root@node1 crs]# rm -rf log/hostname
    [root@node1 crs]# find . -name '*.ouibak' -exec rm {} \;
    [root@node1 crs]# find . -name '*.ouibak.1' -exec rm {} \;
    [root@node1 crs]# rm -rf ./cdata/*
    [root@node1 crs]# rm -rf root.sh*
    [root@node1 crs]# cd cfgtoollogs
    [root@node1 cfgtoollogs]# find . -type f -exec rm -f {} \;
    
  3. Create a compressed copy of the copied Oracle Clusterware home, using tar or gzip on Linux and UNIX systems and WinZip on Windows systems. Ensure that the tool you use preserves the permissions and file timestamps. For example:

    • On Linux and UNIX systems:

      [root@node1 root]# cd /opt/oracle/product/11g/crs/
      [root@node1 crs]# tar -zcvf /pathname/crs11101.tgz .
      

      In the example, the cd command changes the location to the Oracle Clusterware home, and the tar command creates the copy named crs11101.tgz. In the tar command, the pathname variable represents the location of the file.

    • On AIX or HPUX systems:

      tar cpf - . | compress -fv > temp_dir/crs11101.tar.Z
      
    • On Windows systems, use WinZip to create a zip file.

Method 2: Create a compressed copy of the Oracle Clusterware home using the -X option:

  1. Create a file that lists the unnecessary files in the Oracle Clusterware home. For example, list the following file names (using the (*) wild card) in a file called excludeFileList:

    ./log/hostname
    ./root.sh*
    
  2. Use the tar command or WinZip to create a compressed copy of the Oracle Clusterware home. For example, on Linux and UNIX systems, run the following command to archive and compress the source Oracle Clusterware home:

    tar cpfX - excludeFileList . | compress -fv > temp_dir/crs11101.tar.Z
    

    Note:

    Do not use the jar utility to copy and compress the Oracle Clusterware home.

Cloning Oracle Clusterware to Create a New Cluster

This section explains how to create a new cluster by cloning a successfully installed Oracle Clusterware environment and copying it to the nodes on the destination cluster. The procedures in this section describe how to use cloning for Linux, UNIX, and Windows systems.

For example, you can use cloning to quickly duplicate a successfully installed Oracle Clusterware environment to create a new cluster. Figure 3-1 shows the end result of a cloning procedure in which the Oracle Clusterware home on Node 1 has been cloned to Node 2 and Node 3 on Cluster 2, making Cluster 2 a new two-node cluster.

Figure 3-1 Cloning to Create a New Oracle Clusterware Environment


At a high level, the steps to create a new cluster through cloning are as follows:

  1. Prepare the new cluster nodes

  2. Deploy Oracle Clusterware on the destination nodes

  3. Run the clone.pl script on each destination node

  4. Run the orainstRoot.sh script on each node

  5. Run the CRS_home/root.sh script

  6. Run the configuration assistants and the Oracle Cluster Verify utility

Step 1   Prepare the new cluster nodes

On each destination node, perform the following preinstallation steps:

  • Specify the kernel parameters.

  • Configure block devices for Oracle Clusterware devices.

  • Ensure you have set the block device permissions correctly.

  • Use short, nondomain-qualified names for all names in the Hosts file.

  • Test whether or not the interconnect interfaces are reachable using the ping command.

  • Verify that the VIP addresses are not active at the start of the cloning process by using the ping command (pinging a VIP address must fail).

  • Run the Cluster Verification Utility (CVU) to verify your hardware and operating system environment.

See your platform-specific Oracle Clusterware installation guide for the complete preinstallation checklist.
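
For example, you might run CVU from the Oracle Clusterware home on the source node (or from the installation media) against the destination nodes before cloning; the node names are placeholders:

  # Example only: verify the destination nodes before cloning
  CRS_HOME/bin/cluvfy stage -pre crsinst -n node2,node3 -verbose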

Note:

Unlike traditional methods of installation, the cloning process does not validate your input during the preparation phase. (By comparison, during the traditional method of installation using Oracle Universal Installer, various checks take place during the interview phase.) Thus, if you make any mistakes during the hardware setup or in the preparation phase, then the cloned installation will fail.
Step 2   Deploy Oracle Clusterware on the destination nodes

Before you begin the cloning procedure described in this section, ensure that you have completed the prerequisite tasks to create a copy of the Oracle Clusterware home, as described in the "Preparing the Oracle Clusterware Home for Cloning" section.

On each destination node, deploy the copy of the Oracle Clusterware home by performing the following steps:

  1. If you do not have a shared Oracle Clusterware home, then restore the copy of the Oracle Clusterware home on each node in the destination cluster, using the same directory structure as the one in which the Oracle Clusterware home resided on the source node. Skip this step if you have a shared Oracle Clusterware home.

    For example:

    • On Linux or UNIX systems, run commands similar to the following:

      [root@node1 root]# mkdir -p /opt/oracle/product/11g/crs
      [root@node1 root]# cd /opt/oracle/product/11g/crs
      [root@node1 crs]# tar -zxvf /pathname/crs11101.tgz
      

      In the example, the pathname variable represents the directory structure in which you want to install the Oracle Clusterware home.

    • On Windows systems, unzip the Oracle Clusterware home on the destination node into the same directory structure as the one in which the Oracle Clusterware home resided on the source node.

  2. Change the ownership of all files to the oracle user and the oinstall group, and create a directory for the Oracle Inventory. For example, the following commands are for a Linux system:

    [root@node1 crs]# chown -R oracle:oinstall /opt/oracle/product/11g/crs
    [root@node1 crs]# mkdir -p /opt/oracle/oraInventory
    [root@node1 crs]# chown oracle:oinstall /opt/oracle/oraInventory
    

    Note:

    You can perform this step at the same time you perform steps 3 and 4 that run the clone.pl and orainstRoot.sh scripts on each cluster node.
  3. Run the preupdate.sh script from the CRS_Home/install directory on each target node as follows:

    preupdate.sh -crshome target_crs_oh -crsuser user_who_runs_cloning -noshutdown
    
Step 3   Run the clone.pl script on each destination node

To set up the new Oracle Clusterware environment, the clone.pl script requires you to provide a number of setup values. You can supply these values either on the command line when you run the clone.pl script, or in a file in which you assign values to the cloning variables. The following discussions describe both options.

  • Supplying Input to the clone.pl Script On the Command Line

    If you do not have a shared Oracle Clusterware home, navigate to the $ORACLE_HOME/clone/bin directory on each destination node and run the clone.pl script, which performs the main Oracle Clusterware cloning tasks. To run the script, you must supply input to a number of variables.

    Note:

    When cloning Oracle Clusterware using the clone.pl script, you must set a value for the ORACLE_BASE variable even though specifying Oracle base is not a requirement of the Oracle Clusterware installation. You can set the ORACLE_BASE variable to any directory location (for example, you could set it to the Oracle Clusterware Home location), because the value is ignored.

    For example:

    On Linux and UNIX systems:

    perl clone.pl ORACLE_BASE=/opt/oracle ORACLE_HOME=CRS_home ORACLE_HOME_NAME=CRS_HOME_NAME
     '-On_storageTypeVDSK=2' '-On_storageTypeOCR=2'
     '-O"sl_tableList={node2:node2-priv:node2-vip, node3:node3-priv:node3-vip}"'
     '-O"ret_PrivIntrList=private_interconnect_list"'
     -O's_votingdisklocation=voting_disk1' -O's_OcrVdskMirror1RetVal=voting_disk2'
     -O's_VdskMirror2RetVal=voting_disk3' -O's_ocrpartitionlocation=OCR_loc'
     -O's_ocrMirrorLocation=OCRMirror_loc' -O'INVENTORY_LOCATION=oraInventory_loc' -O'-noConfig'
    

    On Windows systems:

    perl clone.pl ORACLE_BASE=D:\oracle ORACLE_HOME=CRS_home ORACLE_HOME_NAME=CRS_HOME_NAME
     '-On_storageTypeVDSK=2' '-On_storageTypeOCR=2'
     '-O"sl_tableList={node2:node2-priv:node2-vip, node3:node3-priv:node3-vip}"'
     '-O"ret_PrivIntrList=private_interconnect_list"'
     '-O"sl_OHPartitionsAndSpace_valueFromDlg={partition and space information}"' -O'-noConfig'
    

    Refer to Table 3-1 and Table 3-2 for descriptions of the various variables in the preceding examples.

    If you have a shared Oracle Clusterware home, then append the -cfs option to the command example in this step and provide a complete path location for the cluster file system. Ensure that the variables n_storageTypeOCR and n_storageTypeVDSK are set to 1 for redundant storage or 2 for nonredundant storage. If you use redundant storage, then you must also specify the mirror locations.

    For Windows platforms, on all other nodes, run the same command, adding the additional argument PERFORM_PARTITION_TASKS=FALSE.

    For example:

    perl clone.pl ORACLE_BASE=/opt/oracle ORACLE_HOME=CRS_home ORACLE_HOME_NAME=CRS_home_name
     '-On_storageTypeVDSK=2' '-On_storageTypeOCR=2'
     '-O"sl_tableList={node2:node2-priv:node2-vip, node3:node3-priv:node3-vip}"'
     '-O"ret_PrivIntrList=private interconnect list"'
     '-O"sl_OHPartitionsAndSpace_valueFromDlg={partition and space information}"'
     -O'-noConfig' PERFORM_PARTITION_TASKS=FALSE
    

    See Also:

    The "Cloning Script Variables Reference" section for more information about setting these variables.
  • Supplying Input to the clone.pl Script in a File

    Because the clone.pl script is sensitive to the parameters being passed to it, you must be accurate in your use of brackets, single quotation marks, and double quotation marks. To make running the clone.pl script less prone to errors, you can create a file that is similar to the start.sh script shown in Example 3-1 in which you can specify environment variables and cloning parameters to the clone.pl script.

    Example 3-1 shows an excerpt from an example file, called start.sh, that calls the clone.pl script and has been set up for a cluster named crscluster. Invoke the script as the operating system user that installed Oracle Clusterware.

    Example 3-1 Excerpt From the start.sh Script to Clone Oracle Clusterware

    #!/bin/sh
    ORACLE_BASE=/opt/oracle
    CRS_home=/opt/oracle/product/11g/crs
    E01=CRS_home=/opt/oracle/product/11g/crs
    E02=ORACLE_HOME=${CRS_home}
    E03=ORACLE_HOME_NAME=OraCrs11g
    E04=ORACLE_BASE=/opt/oracle
    #C00="-O'-debug'"
    C01="-O's_clustername=crscluster'"
    C02="-O'INVENTORY_LOCATION=/opt/oracle/oraInventory'"
    C03="-O'sl_tableList={node1:node1int:node1vip:N:Y,node2:node2int:node2vip:N:Y}'"
    C04="-O'ret_PrivIntrList={eth0:144.25.212.0:1,eth1:10.10.10.0:2}'"
    C05="-O'n_storageTypeVDSK=1'"
    C06="-O's_votingdisklocation=/dev/sdc1' -O's_OcrVdskMirror1RetVal=/dev/sdd1' -O's_VdskMirror2RetVal=/dev/sde1'"
    C07="-O'n_storageTypeOCR=1'"
    C08="-O's_ocrpartitionlocation=/dev/sdc2' -O's_ocrMirrorLocation=/dev/sdd2'"
    
    perl ${CRS_home}/clone/bin/clone.pl $E01 $E02 $E03 $E04 $C01 $C02 $C03 $C04 $C05 $C06 $C07 $C08
    

    The start.sh script sets several environment variables and cloning parameters, as described in Table 3-1 and Table 3-2, respectively.

    Table 3-1 describes the environment variables E01, E02, E03, and E04 that are used in Example 3-1.

    Table 3-1 Environment Variables Passed to the clone.pl Script

    E01 (CRS_home): The location of the Oracle Clusterware home. This directory location must exist and must be owned by the Oracle operating system group: oinstall.

    E02 (ORACLE_HOME): The location of the Oracle Clusterware home. This directory location must exist and must be owned by the Oracle operating system group: oinstall.

    E03 (ORACLE_HOME_NAME): The name of the Oracle Clusterware home. This name is stored in the Oracle Inventory.

    E04 (ORACLE_BASE): The location of the Oracle base directory.


    Also, see "Cloning Script Variables Reference" for a description of the cloning parameters C01 through C08 that are used in Example 3-1.

Step 4   Run the orainstRoot.sh script on each node

In the Central Inventory directory on each destination node, run the orainstRoot.sh script as the root user. This script populates the /etc/oraInst.loc file with the location of the central inventory.

Note that you can run the script on each node simultaneously. For example:

[root@node1 root]# /opt/oracle/oraInventory/orainstRoot.sh

Ensure the orainstRoot.sh script has completed on each destination node before proceeding to the next step.

Step 5   Run the CRS_home/root.sh script

On each destination node, run the CRS_home/root.sh script. You can run the script on only one node at a time. The following example is for a Linux or UNIX system:

  1. On the first node, run the following command:

    [root@node1 root]# /opt/oracle/product/11g/crs/root.sh
    

    Ensure the CRS_home/root.sh script has completed on the first node before running it on the second node.

  2. On each subsequent node, run the following command:

    [root@node2 root]# /opt/oracle/product/11g/crs/root.sh
    

The root.sh script automatically sets up the node applications: Global Services Daemon (GSD), Oracle Notification Services (ONS), and Virtual IP (VIP) resources in the OCR.
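
For example, to confirm that the stack is running and that the node applications were created on a destination node, you might run the following checks; the home path shown is only illustrative:

  [root@node1 root]# /opt/oracle/product/11g/crs/bin/crsctl check crs
  [root@node1 root]# /opt/oracle/product/11g/crs/bin/crs_stat -t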

Step 6   Run the configuration assistants and the Oracle Cluster Verify utility

At the end of the Oracle Clusterware installation on each new node, run the configuration assistants and CVU using the commands in the CRS_home/cfgtoollogs/configToolAllCommands file.
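
For example, on a Linux destination node you might review that file and then run it as the user that installed Oracle Clusterware; the home path is only illustrative, and you can instead run the listed commands individually:

  cd /opt/oracle/product/11g/crs/cfgtoollogs
  cat configToolAllCommands      # review the generated commands first
  sh configToolAllCommands       # run the configuration assistants and CVU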

Cloning to Extend Oracle Clusterware to More Nodes in the Same Cluster

You can use cloning to quickly extend a successfully installed Oracle Clusterware environment to more nodes in the same cluster. Figure 3-2 shows the end result of a cloning procedure in which the Oracle Clusterware home on Node 1 has been cloned to Node 2 in the same cluster, making it a two-node cluster.

Figure 3-2 Cloning to Extend the Oracle Clusterware Environment to Another Node


At a high level, the steps to extend Oracle Clusterware to more nodes are nearly identical to the steps described in the "Cloning Oracle Clusterware to Create a New Cluster" section.

The following list describes the steps you perform to extend Oracle Clusterware to additional nodes in the cluster:

  1. Prepare the new cluster nodes.

  2. Deploy Oracle Clusterware on the destination nodes.

  3. Run the clone.pl script on each destination node. The following example is for Linux or UNIX systems:

    perl clone.pl ORACLE_BASE=/opt/oracle ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=Oracle_home_name
     "sl_tableList={node2:node2-priv:node2-vip}" INVENTORY_LOCATION=central_inventory_location -noConfig
    

    The following example is for Windows systems:

    perl clone.pl ORACLE_BASE=D:\oracle ORACLE_HOME=CRS_HOME ORACLE_HOME_NAME=CRS_HOME_name
    '-O"sl_tableList={node2:node2-priv:node2-vip}"' '-O-noConfig'
     '-OPERFORM_PARTITION_TASKS=FALSE'
    
  4. Run the orainstRoot.sh script on each destination node.

  5. Run the addNode script on the source node.

    Run the following command on the source node, where new_node is the name of the new node, new_node-priv is the private node name for the new node, and new_node-vip is the virtual host name (VIP) for the new node:

    $ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={new_node}"
     "CLUSTER_NEW_PRIVATE_NODE_NAMES={new_node-priv}"
     "CLUSTER_NEW_VIRTUAL_HOSTNAMES={new_node-vip}" -noCopy
    

    Note:

    Because the clone.pl script has already been run on the new node, this step only updates the inventories on the nodes and instantiates scripts on the local node.
  6. On the source node, run a script to instantiate the node:

    • On Linux and UNIX systems, run the rootaddnode.sh script from the CRS_HOME/install directory as root user.

    • On Windows systems, run the crssetup.add.bat script from the %CRS_HOME%\install directory.

  7. Run the CRS_home/root.sh script on each destination node in Linux and UNIX environments.

  8. Run the configuration assistants and the CLUVFY utility.

    As the user that owns the clusterware on the source node of the cluster, run the configuration assistants as described in the following steps. You can obtain the remote port from the $CRS_HOME/cfgtoollogs/configToolAllCommands file on the target node.

    1. On Linux or UNIX systems, run the following onsconfig command:

      ./onsconfig add_config node2:remote_port node3:remote_port
      

      On Windows systems, run the onsconfig command:

      onsconfig add_config node2:remote_port node3:remote_port
      
    2. On Linux, UNIX, or Windows systems, run the CLUVFY utility in postinstallation verification mode to confirm that the installation of Oracle Clusterware was successful. For example:

      CRS_HOME/bin/cluvfy stage -post crsinst -n node1,node2
      

Cloning Script Variables Reference

Table 3-2 describes the variables that can be passed to the clone.pl script when you include the -O option on the command.

Table 3-2 Variables for the clone.pl Script with the -O option

s_clustername (String)

Set the value for this variable to the unique name of the cluster that you are creating from the cloning operation. Use a maximum of 15 characters. Valid characters for the cluster name are any combination of lowercase and uppercase alphabetic characters A to Z, numerals 0 through 9, hyphens (-), pound signs (#), and underscores (_).

INVENTORY_LOCATION (String)

The location of the inventory. This directory location must exist and must be owned by the Oracle operating system group: oinstall.

sl_tableList (String List)

A list of the nodes that make up the cluster. The format is a comma-delimited list of public_name:private_name:vip_name:N:Y.

Set the value of this variable to match the information in the cluster configuration information table. The value is a comma-delimited list in which the first field designates the public node name, the second field designates the private node name, and the third field designates the virtual host name. The fourth and fifth fields are used only by Oracle Universal Installer and should default to N:Y. Oracle Universal Installer parses these values and assigns the s_publicname and s_privatename variables accordingly. For example:

{"node1:node1-priv:node1-vip:N:Y","node2:node2-priv:node2-vip:N:Y"}.

ret_PrivIntrList (String List)

This is the return value from the Private Interconnect Enforcement table. This variable has values in the format {Interface Name, Subnet, Interface Type}. The value for Interface Type can be one of the following:

  • 1 to denote public,

  • 2 to denote private

  • 3 to denote Do Not Use

For example:

{"eth0:10.87.24.0:2","eth1:140.87.24.0:1","eth3:140.74.30.0:3"}

You can run the ifconfig command (or ipconfig on Windows systems) to identify the interface values from which you can determine the entries for ret_PrivIntrList.

n_storageTypeVDSK (Integer)

If you are using:

  • A single voting disk, set this parameter to 2 (not redundant).

  • Multiple voting disks, set this parameter to 1 (redundant).

n_storageTypeOCR (Integer)

If you are using:

  • A single OCR disk, set this parameter to 2 (not redundant).

  • Multiple OCR disks, set this parameter to 1 (redundant).

VdskMirrorNotReqd (String)

This variable is not required in the Oracle Cluster Registry (OCR) dialog.

CLUSTER_CONFIGURATION_FILE (String)

This variable passes the cluster configuration file, which is the same file that can be specified during installation. You can use this file instead of sl_tableList. The file contains one white space-delimited line for each node of the cluster, listing the public node name, private node name, and virtual host name. For example,

node1    node1-priv    node1-vip
node2    node2-priv    node2-vip

Note that if you are cloning from an existing installation, then you should use sl_tableList. Do not specify this variable for a clone installation.

s_votingdisklocation (String)

Set the value of this variable to be the location of the voting disk. For example:

/oradbshare/oradata/vdisk

If you are using:

  • A single voting disk, only specify the voting disk location with the s_votingdisklocation parameter.

  • Multiple voting disks, set the s_votingdisklocation, s_OcrVdskMirror1RetVal, and the s_VdskMirror2RetVal parameters.

s_OcrVdskMirror1RetVal (String)

Set the value of this variable to the location of the first additional voting disk. You must set this variable if you set the n_storageTypeVDSK variable to 1 (redundant). For example:

/oradbshare/oradata/vdiskmirror1

s_ocrpartitionlocation (String)

Set the value of this variable to the OCR location. Oracle Database places this value in the ocr.loc file when you run the root.sh script. For example:

/oradbshare/oradata/ocr

If you are using:

  • A single OCR disk, only set the s_ocrpartitionlocation parameter to specify the location of the OCR partition.

  • Multiple OCR disks, set the s_ocrpartitionlocation parameter and the s_ocrMirrorLocation parameter.

s_ocrMirrorLocation (String)

Set the value of this variable to the OCR mirror location. Oracle Database places this value in the ocr.loc file when you run the root.sh script. You must set this variable if you set the n_storageTypeOCR variable to 1 (redundant). For example:

/oradbshare/oradata/ocrmirror

s_VdskMirror2RetVal (String)

Set the value of this variable to the location of the second additional voting disk. You must set this variable if you set the n_storageTypeVDSK variable to 1 (redundant). For example:

/oradbshare/oradata/vdiskmirror2

CLUSTER_NODES (String List)

The value of this variable represents the cluster node names that you selected for installation. For example, if you selected node1:

CLUSTER_NODES = {"node1"}

b_Response (Boolean)

Only set this variable when performing a silent installation with a response file. The valid values are true or false.

sl_OHPartitionsAndSpace_valueFromDlg (String List)

Set the value for this variable using the following format:

1 = disk number

2 = partition number

3 = partition size

4 = format type, 0 for raw and 1 for cluster file system

5 = drive letter (this value is not applicable if you use raw devices; use the available drive letter if you are using a cluster file system)

6 = usage type, which has the following values:

  • 0 = Data or software use only

  • 1 = Primary OCR only

  • 2 = Voting disk only

  • 3 = Primary OCR and voting disk on the same cluster file system partition

  • 4 = OCR mirror only

  • 5 = OCR mirror and voting disk on the same cluster file system partition

For example, to configure the OCR and voting disk on raw devices and to not use a cluster file system for either data or software, set sl_OHPartitionsAndSpace_valueFromDlg to list only the partitions that you intend to use for an Oracle Clusterware installation using the following format:

sl_OHPartitionsAndSpace_valueFromDlg =
 {Disk,Partition,partition size,0,N/A,1,Disk,Partition,
 partition size,0,N/A,2,.....}

Locating and Viewing Log Files Generated During Cloning

The cloning script runs multiple tools, each of which may generate its own log files. After the clone.pl script finishes running, you can view log files to obtain more information about the cloning process.

The following log files that are generated during cloning are the key log files of interest for diagnostic purposes:

  • Central_Inventory/logs/cloneActions timestamp.log

    Contains a detailed log of the actions that occur during the Oracle Universal Installer part of the cloning.

  • Central_Inventory/logs/oraInstall timestamp.err

    Contains information about errors that occur when Oracle Universal Installer is running.

  • Central_Inventory/logs/oraInstall timestamp.out

    Contains other miscellaneous messages generated by Oracle Universal Installer.

  • $ORACLE_HOME/clone/logs/clone timestamp.log

    Contains a detailed log of the actions that occur prior to cloning as well as during the cloning operations.

  • $ORACLE_HOME/clone/logs/error timestamp.log

    Contains information about errors that occur prior to cloning as well as during cloning operations.
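
For example, after clone.pl finishes, you might scan these logs for errors; the inventory path shown is only illustrative:

  grep -i error /opt/oracle/oraInventory/logs/cloneActions*.log
  cat $ORACLE_HOME/clone/logs/error*.log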

Table 3-3 describes how to find the location of the Oracle inventory directory.

Table 3-3 Finding the Location of the Oracle Inventory Directory

  • All UNIX computers except Linux and IBM AIX: the /var/opt/oracle/oraInst.loc file

  • IBM AIX and Linux: the /etc/oraInst.loc file

  • Windows: obtain the location from the Windows registry key HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\INST_LOC
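
For example, on a Linux system you can read the central inventory location directly from the oraInst.loc file; the values shown in the comments are only an example:

  cat /etc/oraInst.loc
  # inventory_loc=/opt/oracle/oraInventory
  # inst_group=oinstall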