2 Preparing Your Cluster

This chapter contains the information that your system administrator and network administrator need to help you configure the two nodes in your cluster. This chapter assumes a basic understanding of the Red Hat Linux operating system. In some cases, you may need to refer to details in Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide for Linux. In addition, you must have root privileges to perform the tasks in this chapter.

This chapter includes the following sections:

Checking Requirements

Before you begin your installation, you should check to make sure that your system meets the requirements for Oracle Real Application Clusters (Oracle RAC). The requirements can be grouped into the following three categories:

Checking the Hardware Requirements

Each node that you want to make part of your Oracle Clusterware, or Oracle Clusterware and Oracle RAC installation, must satisfy the minimum hardware requirements of the software. These hardware requirements can be categorized as follows:

  • Physical memory (at least 1 gigabyte (GB) of RAM)

  • Swap space (at least 2 GB of available swap space)

  • Temporary space (at least 400 megabytes (MB))

  • Processor type (CPU) that is certified with the version of the Oracle software being installed

Note:

When you install the Oracle Database software, Oracle Universal Installer (OUI) automatically performs hardware prerequisite checks and notifies you if they are not met.

You will need at least 1.5 GB of available disk space for the Oracle Database home directory and 1.5 GB of available disk space for the Oracle Automatic Storage Management (ASM) home directory. You will also need 120 MB of available disk space for the Oracle Clusterware software installation. For best performance and protection, you should have multiple disks, each using a different disk controller.
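As a quick manual version of the checks that OUI performs, you can inspect memory, swap, and temporary space from the shell. This is a sketch for Linux; the thresholds in the comments are the minimums listed above, and the kB figures assume 1 GB = 1048576 kB.

```shell
# Manual hardware checks (Linux). Thresholds are the minimums listed above.
awk '/^MemTotal/  {print "RAM (kB): " $2}'  /proc/meminfo   # want >= 1048576 (1 GB)
awk '/^SwapTotal/ {print "swap (kB): " $2}' /proc/meminfo   # want >= 2097152 (2 GB)
df -Pm /tmp | awk 'NR==2 {print "/tmp free (MB): " $4}'     # want >= 400
```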

An Oracle RAC database is a shared everything database. All datafiles, control files, redo log files, and the server parameter file (SPFILE) in Oracle RAC environments must reside on shared storage that is accessible by all the instances in the cluster database. The Oracle RAC installation that is described in this guide uses Oracle ASM for the shared storage of the database files.

Oracle Clusterware achieves superior scalability and high availability by using the following components:

  • Voting disk–Manages cluster membership and arbitrates cluster ownership between the nodes in case of network failures. The voting disk is a file that resides on shared storage. For high availability, Oracle recommends that you have more than one voting disk, and that you have an odd number of voting disks. If you define a single voting disk, then use mirroring at the file system level for redundancy.

  • Oracle Cluster Registry (OCR)–Maintains cluster configuration information as well as configuration information about any cluster database within the cluster. The OCR contains information such as which database instances run on which nodes and which services run on which databases. The OCR also stores information about processes that Oracle Clusterware controls. The OCR resides on shared storage that is accessible by all the nodes in your cluster. Oracle Clusterware can multiplex, or maintain multiple copies of, the OCR and Oracle recommends that you use this feature to ensure high availability.

Note:

Both the voting disks and the OCR must reside on shared devices that you configure before you install Oracle Clusterware and Oracle RAC.

These Oracle Clusterware components require the following additional disk space:

  • Two Oracle Cluster Registry (OCR) files, 256 MB each, or 512 MB total disk space

  • Three voting disk files, 256 MB each, or 768 MB total disk space

For voting disk file placement, ensure that each voting disk is configured so that it does not share any hardware device or disk, or other single point of failure. See "Configuring the Raw Storage Devices and Partitions" for more information about configuring Oracle Clusterware files.
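The space figures above add up as follows; this trivial check can be reproduced in the shell:

```shell
# Disk space needed for the Oracle Clusterware files, per the list above.
ocr_mb=$((2 * 256))     # two OCR copies
vote_mb=$((3 * 256))    # three voting disk files
echo "OCR: ${ocr_mb} MB, voting disks: ${vote_mb} MB, total: $((ocr_mb + vote_mb)) MB"
```

This prints a total of 1280 MB, the combined shared storage to reserve for the Oracle Clusterware files.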

See Also:

Oracle Clusterware and Oracle Real Application Clusters Installation and Configuration Guide for your platform for information about exact requirements

Identifying Network Requirements

An Oracle RAC cluster comprises two or more nodes that are linked by a private interconnect. The interconnect serves as the communication path between nodes in the cluster. Each cluster database instance uses the interconnect for messaging to synchronize each instance's use of the shared resources. Oracle RAC also uses the interconnect to transmit data blocks that are shared between the instances.

Oracle Clusterware requires that you connect the nodes in the cluster to a private network by way of a private interconnect. The private interconnect is a separate network that you configure between cluster nodes. The interconnect used by Oracle RAC is the same interconnect that Oracle Clusterware uses. This interconnect should be a private interconnect, meaning it is not accessible by nodes that are not members of the cluster.

When you configure the network for Oracle RAC and Oracle Clusterware, each node in the cluster must meet the following requirements:

  • Each node needs at least two network interface cards, or network adapters. One adapter is for the public network and the other adapter is for the private network used by the interconnect. You should install additional network adapters on a node if that node:

    • Does not have at least two network adapters

    • Has two network interface cards but is using network attached storage (NAS). You should have a separate network adapter for NAS.

    Note:

    For the most current information about supported network protocols and hardware for Oracle RAC installations, refer to the Certify pages on OracleMetaLink, which is located at
    http://metalink.oracle.com
    
  • You must have at least three IP addresses available for each node:

    1. An IP address with an associated host name (or network name) for the public interface.

    2. A private IP address with a host name for each private interface.

      Note:

      Oracle recommends that you use private network IP addresses for the private interfaces (for example: 10.*.*.* or 192.168.*.*).
    3. One virtual IP address with an associated network name. Select a virtual IP (VIP) address that meets the following requirements:

      • The VIP address and associated network name are currently unused.

      • The VIP is on the same subnet as your public interface.

  • Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter eth0, then you must configure eth0 as the public interface on all nodes. You should configure the same private interface names for all nodes as well. If eth1 is the private interface name for the first node, then eth1 should be the private interface name for your second node.

  • For the private network, the end points of all designated interconnect interfaces must be completely reachable on the network. Every node in the cluster must be able to connect to every other node in the cluster over the private network.

To determine what interfaces are configured on a node running Red Hat Linux, use the following command as the root user:

# /sbin/ifconfig

You may need to work with your system or network administrator to obtain IP addresses for each node. See "Configuring the Network" for more information about configuring the IP addresses and interface names.
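One of the VIP requirements above is that the address be on the same subnet as the public interface. As a sketch, you can verify this with shell arithmetic; the addresses and netmask below are this guide's example values, not values your network must use.

```shell
# Check that a candidate VIP shares the public interface's subnet.
pub_ip=143.46.43.100     # example public address from this guide
vip=143.46.43.104        # example VIP from this guide
mask=255.255.255.0       # example netmask (assumption)

# Convert a dotted-quad address to a single integer.
to_int() {
  echo "$1" | { IFS=. read a b c d; echo $(( (a<<24) + (b<<16) + (c<<8) + d )); }
}

pub_net=$(( $(to_int "$pub_ip") & $(to_int "$mask") ))
vip_net=$(( $(to_int "$vip")    & $(to_int "$mask") ))
if [ "$pub_net" -eq "$vip_net" ]; then
  echo "VIP is on the public subnet"
else
  echo "VIP is NOT on the public subnet"
fi
```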

Verifying the Installed Operating System and Software Requirements

Refer to Oracle Clusterware and Oracle Real Application Clusters Installation and Configuration Guide for your platform for information about exact requirements. These requirements can include any of the following:

  • The operating system version

  • The kernel version of the operating system

  • Installed packages, patches, or patch sets

  • Installed compilers and drivers

  • Web browser type and version

  • Additional application software requirements

If you are currently running an operating system version that is not supported by Oracle Database 10g Release 2 (10.2), then you must first upgrade your operating system before installing Oracle Real Application Clusters 10g.

To determine if the operating system requirements for Red Hat Linux have been met:

  1. To determine which distribution and version of Linux is installed, run the following command as the root user:

    cat /etc/issue
    
  2. Like most software, the Linux kernel is updated to fix bugs in the operating system. These kernel updates are referred to as erratum kernels or errata levels. To determine the errata level of the installed kernel, run the following command as the root user:

    uname -r
    2.4.21-27.EL
    

    The output in the previous example shows that the kernel version is 2.4.21, and the errata level is 27. Review the required errata level for your distribution. If the installed errata level is below the required minimum, then install the latest kernel update for your operating system. The kernel updates are available from your operating system vendor.

  3. To ensure there are no operating system issues affecting installation, make sure you have installed all the operating system patch updates and packages that are listed in Oracle Clusterware and Oracle Real Application Clusters Installation Guide for your platform. If you are using Red Hat Linux, you can determine if the required packages, or programs that perform specific functions or calculations, are installed by using the following command as the root user:

    rpm -q package_name
    

    The variable package_name is the name of the package you are verifying, such as setarch. If a package is not installed, then install it from your Linux distribution media or download the required package version from your Linux vendor's Web site.
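Steps 2 and 3 can be combined into a small script. This is a sketch: the kernel release string is hard-coded to the example from step 2 (replace it with the output of uname -r), and the package names are placeholders for the list in the installation guide for your platform.

```shell
# Step 2: extract the errata number from a kernel release string.
release=2.4.21-27.EL            # replace with: release=$(uname -r)
errata=$(echo "$release" | sed -n 's/^[0-9.]*-\([0-9]*\).*/\1/p')
echo "errata level: $errata"

# Step 3: check a list of required packages (names here are placeholders).
if command -v rpm >/dev/null 2>&1; then
  for pkg in binutils gcc make setarch; do
    rpm -q "$pkg" >/dev/null 2>&1 && echo "$pkg: installed" || echo "$pkg: MISSING"
  done
else
  echo "rpm not available; skipping package check"
fi
```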

Preparing the Server

In this section, you will perform the following tasks:

Configuring Operating System Users and Groups

Depending on whether or not this is the first time Oracle software is being installed on this system, you may need to create operating system groups.

The following operating system groups are required if you are installing Oracle RAC:

  • The OSDBA group (typically, dba)

  • The Oracle Inventory group (typically, oinstall)

The following operating system users are required for all installations:

  • A user that owns the Oracle software (typically, oracle)

  • An unprivileged user (for example, the nobody user on Linux systems)

A single Oracle Inventory group is required for all installations of Oracle software on the system. After the first installation of Oracle software, you must use the same Oracle Inventory group for all subsequent Oracle software installations on that system. However, you can choose to create different Oracle software owner users and OSDBA groups (other than oracle and dba) for separate installations. By using different groups for different installations, members of these different groups have DBA privileges only on the associated databases, rather than on all databases on the system.

Note:

If installing Oracle RAC on Microsoft Windows, Oracle Universal Installer automatically creates the ORA_DBA group. Also, if you install the Oracle RAC software while logged in to an account with administrative privileges, you do not need to create a separate user for the installation.

If you use a domain account when installing Oracle RAC on Microsoft Windows, then the domain user must be explicitly granted local administrative privileges on each node in the cluster. It is not sufficient if the domain user has inherited privileges from membership in a group. Also, make sure the domain user is a member of the ORA_DBA group on each node after you have completed the installation.

To create the required operating system user and groups on Red Hat Linux:

  1. If this is the first time Oracle software has been installed on your server, and the Oracle Inventory group does not exist, then create the Oracle Inventory group by entering a command as the root user that is similar to the following:

    /usr/sbin/groupadd oinstall
    
  2. Create an OSDBA group by entering a command as the root user that is similar to the following:

    /usr/sbin/groupadd dba
    
  3. If the user that owns the Oracle software does not exist on your server, you must create the user. Select a user ID (UID) that is not currently in use on any node in your cluster. The following command shows how to create the oracle user and the user's home directory (/home/oracle) with the default group as oinstall and the secondary group as dba, using a UID of 504:

     useradd -u 504 -g oinstall -G dba -d /home/oracle -r oracle
    
  4. Set the password for the oracle account using the following command. Replace pwd with your own password.

    passwd oracle
    
     Changing password for user oracle.
     New UNIX password: pwd
     Retype new UNIX password: pwd
     passwd: all authentication tokens updated successfully.
    
  5. Repeat steps 1 through 4 on each node in your cluster as needed.

  6. Verify that the attributes of the user oracle are identical on both docrac1 and docrac2 by running the following command on each node:

    id oracle 
    

    The command output should be similar to the following:

    uid=504(oracle) gid=500(oinstall) groups=500(oinstall),501(dba)
    
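If you want to compare the numeric IDs programmatically rather than by eye, you can parse them out of the id output. The sample line below is the one shown in step 6:

```shell
# Parse the numeric UID out of a line produced by `id oracle`.
id_line='uid=504(oracle) gid=500(oinstall) groups=500(oinstall),501(dba)'
uid=${id_line#uid=}       # drop the leading "uid="
uid=${uid%%(*}            # keep only the digits before "(oracle)"
echo "oracle UID: $uid"
```

The same parsing can be applied to the output of a remote `ssh docrac2 id oracle` call to confirm the UIDs match across nodes.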

Configuring the Secure Shell

When installing Oracle RAC on UNIX and Linux platforms, the software is installed on one node, and OUI uses secure communication to copy the software binary files to the other cluster nodes. OUI uses the Secure Shell (SSH) for the communication. Various other components of Oracle RAC and Oracle Clusterware also use SSH for secure communication.

Note:

Oracle Net Configuration Assistant (NETCA) and Oracle Database Configuration Assistant (DBCA) require scp and ssh to be located in the path /usr/local/bin on the Red Hat Linux platform. If scp and ssh are not in this location, then create a symbolic link in /usr/local/bin to the location where scp and ssh are found.

To configure SSH, you must first create Rivest-Shamir-Adleman (RSA) keys and Digital Signature Algorithm (DSA) keys on each cluster node. After you have created the private and public keys, you copy the keys from all cluster node members into an authorized keys file that is identical on each node. When this is done, you then start the SSH agent to load the keys into memory.

See Also:

Oracle Database Advanced Security Administrator's Guide for more information about data security using encryption keys

Generating RSA and DSA Keys

Create the RSA and DSA keys on each cluster node as the first step in configuring SSH.

To configure the RSA and DSA keys on Red Hat Linux, perform the following tasks:

  1. Log out and then log back in to the operating system as the oracle user on docrac1.

    Note:

    Do not use the su command to switch from the root user to the oracle user for these steps. You must completely exit your operating system session as the root user and start a new session as oracle for these steps to succeed.
  2. Determine if a .ssh directory exists in the oracle user's home directory. If not, create the .ssh directory and set the directory permission so that only the oracle user has access to the directory, as shown here:

    $ ls -a $HOME
    $ mkdir ~/.ssh
    $ chmod 700 ~/.ssh
    
  3. Create the RSA-type public and private encryption keys. Open a terminal window and run the following command:

    /usr/bin/ssh-keygen -t rsa
    

    At the prompts:

    • Accept the default location for the key file by pressing the Enter key.

    • When prompted for a pass phrase, enter and confirm a pass phrase that is different from the oracle user's password.

    This command creates the public key in the /home/oracle/.ssh/id_rsa.pub file and the private key in the /home/oracle/.ssh/id_rsa file.

    WARNING:

    To protect the security of your system, never distribute the private key to anyone.

  4. Create the DSA type public and private keys on both docrac1 and docrac2. In the terminal window for each node, run the following command:

    /usr/bin/ssh-keygen -t dsa
    

    At the prompts:

    • Accept the default location for the key file by pressing the Enter key.

    • When prompted for a pass phrase, enter and confirm a pass phrase that is different from the oracle user's password.

    This command creates the public key in the /home/oracle/.ssh/id_dsa.pub file and the private key in the /home/oracle/.ssh/id_dsa file.

    WARNING:

    To protect the security of your system, never distribute the private key to anyone.

  5. Repeat steps 1 through 4 on each node that you intend to add to the cluster.

Adding the Keys to an Authorized Key File

After you have generated the keys, you copy the keys for each node to an authorized_keys file and copy this file to all nodes in the cluster.

To add the generated keys to an authorized keys files:

  1. On the local node, change directories to the .ssh directory in the Oracle user home directory.

    cd ~/.ssh
    
  2. Add the RSA and DSA keys to the authorized_keys files using the following commands, then list the contents of the .ssh directory:

    $ cat id_rsa.pub >>authorized_keys
    $ cat id_dsa.pub >>authorized_keys
    $ ls
    

    You should see the id_dsa.pub and id_rsa.pub keys that you generated, the id_dsa and id_rsa private key files, as well as the authorized_keys file.

  3. Use Secure Copy (SCP) or Secure FTP (SFTP) to copy the authorized_keys file to the oracle user .ssh directory on a remote node. The following example uses SCP to copy the authorized_keys file to docrac2, and the oracle user path is /home/oracle:

     [oracle@docrac1 .ssh]$ scp authorized_keys docrac2:/home/oracle/.ssh/
     The authenticity of host 'docrac2 (143.46.43.101)' can't be established.
     RSA key fingerprint is 7z:ez:e7:f6:f4:f2:d1:a6:f7:4e:zz:me:a7:48:ae:f6:7e.
     Are you sure you want to continue connecting (yes/no)? yes
     oracle@docrac2's password:
    

    You are prompted to accept an RSA or DSA key. Enter yes, and the node you are copying to is added to the known_hosts file.

    When prompted, provide the password for the oracle user, which should be the same on all the nodes in the cluster (Note: this is the user password, not the newly specified passphrase). The authorized_keys file is then copied to the remote node.

  4. Using SSH, log in to the node where you copied the authorized_keys file, using the passphrase you created. Then change to the .ssh directory, and using the cat command, add the RSA and DSA keys for the second node to authorized_keys file, as demonstrated here:

    [oracle@docrac1 .ssh]$ ssh docrac2
    Enter passphrase for key '/home/oracle/.ssh/id_rsa':
     [oracle@docrac2 oracle]$ cd .ssh
     [oracle@docrac2 .ssh]$ cat id_rsa.pub  >> authorized_keys
     [oracle@docrac2 .ssh]$ cat id_dsa.pub  >> authorized_keys
    
  5. If you have more than two nodes in your cluster, repeat step 3 and step 4 for each node you intend to add to your cluster. Copy the most recently updated authorized_keys file to the next node, then add the public keys for that node to the authorized_keys file.

  6. When you have updated the authorized_keys file on all nodes, use SCP to copy the complete authorized_keys file from the last node to be updated to all the other cluster nodes, overwriting the existing version on the other nodes. For example:

     [oracle@docrac2 .ssh]$ scp authorized_keys docrac1:/home/oracle/.ssh/
     The authenticity of host 'docrac1 (143.46.43.100)' can't be established.
     RSA key fingerprint is 7e:62:60:f6:f4:f2:d1:a6:f7:4e:zz:me:b9:48:dc:e3:9c.
     Are you sure you want to continue connecting (yes/no)? yes
     Warning: Permanently added 'docrac1,143.46.43.100' (RSA) to the list of known
     hosts.
     oracle@docrac1's password:
     authorized_keys                          100% 1656    19.9MB/s   00:00
    

At this point, if you use ssh to log in to or run a command on another node, you are prompted for the pass phrase that you specified when you created the RSA and DSA keys.
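The concatenation pattern in steps 2 through 4 can be tried safely in a scratch directory. The key material below is fake, so this sketch runs anywhere without touching your real ~/.ssh files:

```shell
# Build an authorized_keys file from public key files, as in steps 2-4.
dir=$(mktemp -d)
echo "ssh-rsa FAKEKEYDATA oracle@docrac1" > "$dir/id_rsa.pub"
echo "ssh-dss FAKEKEYDATA oracle@docrac1" > "$dir/id_dsa.pub"
cat "$dir/id_rsa.pub" "$dir/id_dsa.pub" >> "$dir/authorized_keys"
keys=$(grep -c '' "$dir/authorized_keys")   # one line per public key
echo "keys in authorized_keys: $keys"
rm -rf "$dir"
```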

Configuring SSH User Equivalency

User equivalency exists in a cluster when the following occurs on all nodes in the cluster:

  • A given user has the same user name, user ID (UID), and password

  • A given user belongs to the same groups

  • A given group has the same group ID (GID)

On Linux systems, to enable Oracle Universal Installer to use the ssh and scp commands without being prompted for a pass phrase, you must configure user SSH equivalency.

To configure user SSH equivalency on Red Hat Linux:

  1. On the system where you want to run Oracle Universal Installer, log in as the oracle user.

  2. Start the SSH agent and load the SSH keys into memory using the following commands:

    $ exec /usr/bin/ssh-agent $SHELL
    $ /usr/bin/ssh-add
    

    At the prompt, enter the pass phrase for each key that you generated when configuring SSH. For example:

    [oracle@docrac1 .ssh]$ exec /usr/bin/ssh-agent $SHELL
    [oracle@docrac1 .ssh]$ /usr/bin/ssh-add
     Enter passphrase for /home/oracle/.ssh/id_rsa:
    Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
    Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
    

    These commands start the ssh-agent on the node, and load the RSA and DSA keys into memory so that you are not prompted for pass phrases when issuing SSH commands. If you have configured SSH correctly, then you can now use the ssh or scp commands without being prompted for a password or a pass phrase.

    Note:

    Do not close this terminal window until you have completed the installation. If you must close this terminal window before the installation is complete, repeat step 2 before starting the installation.
  3. Complete the SSH configuration by using the ssh command to retrieve the date on each node in the cluster.

    For example, in a two-node cluster, with nodes named docrac1 and docrac2, you would enter the following commands:

    $ ssh docrac1 date
    $ ssh docrac2 date
    

    The first time you use SSH to connect to one node from another node, you see a message similar to the following:

     The authenticity of host 'docrac1 (143.46.43.100)' can't be established.
     RSA key fingerprint is 7z:ez:e7:f6:f4:f2:d1:a6:f7:4e:zz:me:a7:48:ae:f6:7e.
     Are you sure you want to continue connecting (yes/no)? yes
    

    Enter yes at the prompt to continue. You should not see this message again when you connect from this node to the other node. If you see any other messages or text apart from the date, then the installation can fail.

    If any node prompts for a password or pass phrase, then verify that the ~/.ssh/authorized_keys file on that node contains the correct public keys. Make any changes required to ensure that only the date is displayed when you enter these commands. You should also ensure that any parts of login scripts that generate output or ask any questions are modified so that they act only when the shell is an interactive shell.

    At the end of this step, each public hostname for each member node should be registered in the known_hosts file for all other cluster member nodes.
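A loop makes the final check in step 3 repeatable. The BatchMode option makes ssh fail instead of prompting, which is exactly the condition that user equivalency must eliminate; the node names are this guide's examples:

```shell
# Confirm each node answers `ssh <node> date` without any prompt.
for node in docrac1 docrac2; do
  if out=$(ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" date 2>/dev/null); then
    echo "$node: $out"
  else
    echo "$node: needs attention (prompted, unreachable, or extra output)"
  fi
done
```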

Configuring the Operating System Environment

On Red Hat Linux, you run Oracle Universal Installer from the oracle account. Oracle Universal Installer obtains information from the environment variables configured for the oracle user. Prior to running OUI, you should modify the oracle user environment variables to configure the following:

  • Set the default file mode creation mask (umask) to 022 in the shell startup file on Linux and UNIX systems.

  • Set the ORACLE_BASE environment variable to the location in which you plan to install the Oracle Database software. Refer to "Choosing an Oracle Base Directory" for more information about the ORACLE_BASE directory.

Also, if the /tmp directory has less than 400 MB of available disk space, but you have identified a different file system that has at least 400 MB of available space, you can set the TEMP and TMPDIR environment variables to specify the alternate temporary directory on this file system.

Prior to installing Oracle Clusterware, you can set the ORACLE_HOME variable to the location of the Oracle Clusterware home directory. However, you also specify the directory in which the software should be installed as part of the installation process. After Oracle Clusterware has been installed, the ORACLE_HOME environment variable will be modified to reflect the value of the Oracle Database home directory.
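Put together, the oracle user's shell startup file (for example, ~/.bash_profile) might contain lines like the following sketch. The directory paths are assumptions, not values mandated by this guide:

```shell
# Example environment settings for the oracle user (paths are assumptions).
umask 022
export ORACLE_BASE=/u01/app/oracle
# Only if /tmp has less than 400 MB of free space:
export TEMP=/u01/tmp
export TMPDIR=$TEMP
```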

Note:

On Linux systems, if there are hidden files (such as logon or profile scripts) that contain stty commands, when these files are loaded by the remote shell during installation, OUI indicates an error and stops the installation. Remove any stty commands from such files before you start the installation.

Configuring the Network

Oracle Clusterware requires that you connect the nodes in the cluster to a private network by way of a private interconnect. Each node in the cluster must also be accessible by way of the public network.

To configure the network and ensure that each node in the cluster is able to communicate with the other nodes in the cluster:

  1. Determine your cluster name. The cluster name should satisfy the following conditions:

    • The cluster name is globally unique throughout your host domain.

    • The cluster name is at least 1 character long and less than 15 characters long.

    • The cluster name consists of the same character set used for host names: underscores (_), hyphens (-), and single-byte alphanumeric characters (a to z, A to Z, and 0 to 9).

    • If you use third-party vendor clusterware, then Oracle recommends that you use the vendor cluster name.

  2. Determine the public node names, private node names, and virtual node names for each node in the cluster.

    • For the public node name, use the primary host name of each node. In other words, use the name displayed by the hostname command. This node name can be either the permanent or the virtual host name, for example: docrac1.

    • Determine a private node name or private IP address for each node. The private IP address is an address that is accessible only by the other nodes in this cluster. Oracle Database uses private IP addresses for internode, or instance-to-instance Cache Fusion traffic. Oracle recommends that you provide a name in the format public_hostname-priv, for example: docrac1-priv.

    • Determine a virtual host name for each node. A virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle Database uses virtual IP addresses for client-to-database connections, so the VIP address must be publicly accessible. Oracle recommends that you provide a name in the format public_hostname-vip, for example: docrac1-vip.

  3. Identify the interface names and associated IP addresses for all network adapters by running the following command on each node:

    # /sbin/ifconfig
    

    From the output, identify the interface name (such as eth0) and IP address for each network adapter that you want to specify as a public or private network interface.

    Note:

    When you install Oracle Clusterware and Oracle RAC, you will require this information.
  4. On each node in the cluster, assign a public IP address with an associated network name to one network adapter, and a private IP address with an associated network name to the other network adapter.

    The public name for each node should be registered with your domain name system (DNS). If you do not have an available DNS, then record the network name and IP address in the system hosts file, /etc/hosts. Use the /etc/hosts file on each node to associate the private network name for that host with its private IP address.

    You can test whether or not an interconnect interface is reachable using a ping command.

  5. On each node in the cluster, configure a third IP address that will serve as a virtual IP address. Use an IP address that meets the following requirements:

    • The virtual IP address and the network name must not be currently in use.

    • The virtual IP address must be on the same subnet as your public IP address.

    The virtual host name for each node should be registered with your DNS. If you do not have an available DNS, then record the virtual host name and IP address in the system hosts file, /etc/hosts.

  6. When you complete the network configuration, the IP address and network interface configuration should be similar to what is shown in the following table (your node names and IP addresses might be different):

     Node     Node Name     Type     IP Address     Registered in
     docrac1  docrac1       Public   143.46.43.100  DNS (if available, else the hosts file)
     docrac1  docrac1-vip   Virtual  143.46.43.104  DNS (if available, else the hosts file)
     docrac1  docrac1-priv  Private  10.10.10.11    Hosts file
     docrac2  docrac2       Public   143.46.43.101  DNS (if available, else the hosts file)
     docrac2  docrac2-vip   Virtual  143.46.43.105  DNS (if available, else the hosts file)
     docrac2  docrac2-priv  Private  10.10.10.12    Hosts file

    After you have completed the installation process, you will configure clients to use either the virtual IP address or the network name associated with the virtual IP address.
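The cluster-name rules in step 1 reduce to a simple pattern: 1 to 14 characters drawn from letters, digits, hyphens, and underscores. A quick way to test candidate names (the names below are made-up examples):

```shell
# Validate candidate cluster names against the rules in step 1.
for name in docrac "my cluster" a_very_long_cluster_name; do
  if echo "$name" | grep -Eq '^[A-Za-z0-9_-]{1,14}$'; then
    echo "$name: valid"
  else
    echo "$name: invalid"
  fi
done
```

Of the three samples, only docrac passes; the second contains a space and the third is longer than 14 characters.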

Verifying the Network Configuration

After you have configured the network, you should perform verification tests to make sure it is configured properly. If there are problems with the network connection between nodes in the cluster, the Oracle Clusterware installation will fail.

To verify the network configuration on a two-node cluster that is running Red Hat Linux:

  1. As the root user, verify the configuration of the public and private networks. Verify that the interfaces are configured on the same network on both docrac1 and docrac2.

    In this example, eth0 is used for the public network and eth1 is used for the private network, which is used for Cache Fusion communications.

    # /sbin/ifconfig
     
    eth0      Link encap:Ethernet  HWaddr 00:0E:0C:08:67:A9  
              inet addr: 143.46.43.100   Bcast:143.46.43.255   Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:270332689 errors:0 dropped:0 overruns:0 frame:0
              TX packets:112346591 errors:2 dropped:0 overruns:0 carrier:2
              collisions:202 txqueuelen:1000 
              RX bytes:622032739 (593.2 Mb)  TX bytes:2846589958 (2714.7 Mb)
              Base address:0x2840 Memory:fe7e0000-fe800000 
     
    eth1      Link encap:Ethernet  HWaddr 00:04:23:A6:CD:59  
              inet addr: 10.10.10.11   Bcast: 10.10.10.255   Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:21567028 errors:0 dropped:0 overruns:0 frame:0
              TX packets:15259945 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:4091201649 (3901.6 Mb)  TX bytes:377502797 (360.0 Mb)
              Base address:0x2800 Memory:fe880000-fe8a0000 
     
    lo        Link encap:Local Loopback  
              inet addr:127.0.0.1  Mask:255.0.0.0
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:52012956 errors:0 dropped:0 overruns:0 frame:0
              TX packets:52012956 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:905082901 (863.1 Mb)  TX bytes:905082901 (863.1 Mb)
    
  2. As the root user, verify that the /etc/hosts file on the node docrac1 contains the host IP addresses, virtual IP addresses, and private network IP addresses from both nodes in the cluster, as follows:

    # Do not remove the following line, or various programs
    # that require network functionality will fail.
    127.0.0.1       localhost.localdomain       localhost
    143.46.43.100   docrac1.mycompany.com          docrac1
    143.46.43.104   docrac1-vip.mycompany.com      docrac1-vip
    10.10.10.11     docrac1-priv
     
    143.46.43.101   docrac2.mycompany.com          docrac2
    143.46.43.105   docrac2-vip.mycompany.com      docrac2-vip
    10.10.10.12     docrac2-priv
    

    If the /etc/hosts file is missing any of the preceding information, then edit the file to add the necessary information.

    After the /etc/hosts file is configured on docrac1, edit the /etc/hosts file on docrac2 so it contains the same information for the cluster IP addresses.

  3. As the root user, verify the network configuration by using the ping command to test the connection from docrac1 to docrac2, and the reverse. Run the following commands on each node:

    # ping -c 3 docrac1.mycompany.com
    # ping -c 3 docrac1
    # ping -c 3 docrac1-priv
     
    # ping -c 3 docrac2.mycompany.com
    # ping -c 3 docrac2
    # ping -c 3 docrac2-priv
    

    You cannot use the ping command to test the virtual IP addresses (docrac1-vip, docrac2-vip) until after Oracle Clusterware is installed and running. If the ping commands for the public or private addresses fail, then resolve the issue before you proceed.

  4. Ensure that you can access the default gateway with a ping command. To identify the default gateway, use the route command, as described in the Red Hat Linux Help utility.
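The gateway lookup in step 4 can be sketched as follows. This is a hypothetical example: the routing table text and the gateway address 143.46.43.1 are illustrative values, not addresses from this guide's cluster.

```shell
# Extract the default gateway (destination 0.0.0.0) from `route -n` output.
# The sample routing table below is hypothetical.
route_output='Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
143.46.43.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
0.0.0.0         143.46.43.1     0.0.0.0         UG    0      0        0 eth0'

gateway=$(printf '%s\n' "$route_output" | awk '$1 == "0.0.0.0" {print $2}')
echo "default gateway: $gateway"

# On a live system you would run instead:
#   gateway=$(/sbin/route -n | awk '$1 == "0.0.0.0" {print $2}')
#   ping -c 3 "$gateway"
```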

Preparing the Operating System and Software

When you install the Oracle software on your server, Oracle Universal Installer expects the operating system to have specific packages and software applications installed.

This section covers the following topics:

You must ensure that you have a certified combination of the operating system and the Oracle Database software by referring to the certification information on OracleMetaLink, which is located at the following Web site:

http://metalink.oracle.com

You can find this information by clicking Certify & Availability, and then selecting 1. View Certifications by Product.

Note:

Oracle Universal Installer verifies that your system meets the listed requirements. Check the requirements before you start Oracle Universal Installer, to ensure your system will meet the requirements.

Setting the Time on Both Nodes

Before starting the installation, ensure that the date and time settings on both nodes are set as closely as possible to the same date and time. Oracle strongly recommends using the Network Time Protocol (NTP) feature of most operating systems for this purpose.

NTP is a protocol designed to synchronize the clocks of servers connected by a network. When using NTP, each server on the network runs client software to periodically make timing requests to one or more servers, referred to as reference NTP servers. The information returned by the timing request is used to adjust the server's clock.

All the nodes in your cluster should use the same reference NTP server.
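As a rough manual check (not a substitute for NTP), you can compare epoch timestamps taken on each node. In this self-contained sketch both readings are taken locally; the ssh command in the comment is a hypothetical example of capturing the second reading remotely.

```shell
# Compare two clock readings and report the absolute skew in seconds.
t1=$(date -u +%s)   # reading on the local node
t2=$(date -u +%s)   # on a real cluster: t2=$(ssh docrac2 date -u +%s)
skew=$(( t1 - t2 ))
if [ "$skew" -lt 0 ]; then skew=$(( -skew )); fi
echo "clock skew: ${skew}s"
```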

Configuring Kernel Parameters

Oracle Universal Installer checks the current settings for various kernel parameters to ensure they meet the minimum requirements for deploying Oracle RAC. For production database systems, Oracle recommends that you tune the settings to optimize the performance of your particular system.

Note:

If you find parameter settings or shell limit values on your system that are greater than the values mentioned in this section, then do not modify the parameter setting.
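A comparison of current settings against recommended minimums can be sketched as follows. The parameter names and numbers below are placeholders for illustration; take the real requirements from the installation guide for your release.

```shell
# Flag parameters whose current value is below a recommended minimum.
# Each input line: parameter-name current-value recommended-minimum (sample data).
printf '%s\n' \
  'kernel.shmmni 4096 4096' \
  'fs.file-max 32768 65536' |
while read -r name current minimum; do
  if [ "$current" -lt "$minimum" ]; then
    echo "raise $name to at least $minimum"
  fi
done
```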

See Also:

Oracle Clusterware and Oracle Real Application Clusters Installation and Configuration Guide for your platform for more information about tuning kernel parameters

Performing Platform-Specific Configuration Tasks

You may be required to perform special configuration steps that are specific to the operating system on which you are installing Oracle RAC, or for the components used with your cluster. The following list provides examples of operating system-specific installation tasks:

  • Configure the use of Huge Pages on Red Hat Enterprise Linux AS 2.1 (Itanium), SUSE Linux Enterprise Server 9, or Red Hat Enterprise Linux 4.

  • Configure the hangcheck-timer module on Red Hat Linux 3.0, SUSE 8, Red Hat Linux 4.0 and SUSE 9 systems.

  • Set shell limits for the oracle user on Red Hat Linux systems to increase the number of files and processes available to Oracle Clusterware and Oracle RAC.

  • Start the Telnet service on Microsoft Windows.

  • Create X library symbolic links on HP-UX.

  • Configure network tuning parameters on AIX.
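For the shell limits item above, you can inspect the current limits before raising them. Run these commands as the oracle user and compare the output against the values required for your platform (the required values are in the installation guide, not shown here):

```shell
# Display the limits most relevant to Oracle Clusterware and Oracle RAC.
ulimit -n                      # maximum number of open file descriptors
ulimit -u 2>/dev/null || true  # maximum user processes (not supported by every shell)
```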

See Also:

Oracle Clusterware and Oracle Real Application Clusters Installation and Configuration Guide for your platform for more information about tasks that are specific to your platform

Configuring Installation Directories and Shared Storage

This section describes the storage configuration tasks that you must complete before you start Oracle Universal Installer. It includes information about the following tasks:

Deciding on a Shared Storage Solution

Each node in a cluster requires external shared disks for storing the Oracle Clusterware (Oracle Cluster Registry and voting disk) files, and Oracle Database files. The supported types of shared storage depend upon the platform you are using, for example:

  • A supported cluster file system, such as Oracle Cluster File System (OCFS) for Microsoft Windows and Linux or General Parallel File System (GPFS) on IBM platforms

  • Network file system (NFS), which is not supported on AIX, POWER, or on IBM zSeries-based Linux

  • ASM for Oracle Database files (strongly recommended)

Note:

Oracle Clusterware files cannot be stored in ASM.

For all installations, you must choose the storage option that you want to use for Oracle Clusterware files and Oracle Database files.

If you do not have an NFS or cluster file system available, you can use raw devices to store the Oracle Clusterware files. A raw device is a disk or disk partition that is accessed directly, without a mounted file system. Raw devices have device names in the form /dev/raw/rawn, where n is a number that identifies the raw device. Raw devices are commonly used for Oracle RAC because they enable the sharing of disks.

Note:

For the most up-to-date information about supported storage options for Oracle RAC installations, refer to the Certify pages on OracleMetaLink:

http://metalink.oracle.com

If you decide to use OCFS to store the Oracle Clusterware files, you must use the proper version of OCFS for your operating system version: OCFS version 1 works with the Linux 2.4 kernel, and OCFS version 2 works with the Linux 2.6 kernel.

The examples in this guide, which are based on Red Hat Linux, use raw partitions to store the Oracle Clusterware files and Oracle ASM to store the Oracle database files. The Oracle Clusterware and Oracle Database software will be installed on disks local to each node, not on a shared file system.

The following section describes how to create the raw partitions for the Oracle Clusterware files on Red Hat Linux.

See Also:

Oracle Clusterware and Oracle Real Application Clusters Installation and Configuration Guide for your platform if you are using a cluster file system or NFS

Configuring the Raw Storage Devices and Partitions

Physical disk space needs to be allocated in partitions on the disks where you want to set up raw devices. You use an operating system command to create the raw partitions. You can create multiple partitions on a single disk.

Before you install Oracle Clusterware, you must configure five raw partitions, each 256 MB in size: one for the Oracle Cluster Registry (OCR), one for a duplicate OCR file on a different disk (referred to as the OCR mirror), and three for voting disks. If you plan to use raw devices for storing the database files, then you must create additional raw partitions for each tablespace, online redo log file, control file, server parameter file (SPFILE), and password file.
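Tallied up, the Oracle Clusterware raw storage allocated in this section comes to:

```shell
# Total raw storage allocated for Oracle Clusterware in this procedure:
# two 256 MB partitions (OCR and OCR mirror) plus three 256 MB voting disks.
echo "OCR and mirror: $(( 2 * 256 )) MB"
echo "voting disks:   $(( 3 * 256 )) MB"
echo "total:          $(( 5 * 256 )) MB"
```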

To configure raw partitions for Oracle Clusterware files on Red Hat Linux:

  1. To identify the device name for the disks that you want to use, enter the following command on the first node in your cluster, for example, docrac1:

    # /sbin/fdisk -l
    

    You can create the required raw partitions either on new devices that you added or on previously partitioned devices that have unpartitioned available space. To identify devices that have unpartitioned available space, examine the start and end cylinder numbers of the existing partitions and determine whether or not the device contains unused cylinders.

  2. As the root user, configure storage for the OCR, the voting disk files, and the database files. If you are using Internet Small Computer System Interface (iSCSI) storage, provide a mapping from a block device to a character device by adding entries in the /etc/sysconfig/rawdevices file.

    Create two raw partitions 256 MB in size for the OCR and its mirror, and three partitions 256 MB in size for the Oracle Clusterware voting disks.

    To create raw partitions on a device, as the root user, enter a command similar to the following, where devicename is the name of a raw device:

    # /sbin/fdisk devicename
    

    Use the following guidelines when creating partitions:

    • Use the p command to list the partition table of the device.

    • Use the n command to create a partition.

    • After you have created the required partitions on this device, use the w command to write the modified partition table to the device.

    • Refer to the fdisk entry in the Linux Help system for more information about creating partitions.

    The following example uses fdisk to create a 256 MB partition on the raw device, /dev/sdb, on the first node. This partition, or slice, will be used for the OCR disk. You will create another 256 MB partition on a different disk and disk controller for the OCR mirror. Each file should be on a different disk and disk controller.

    # /sbin/fdisk /dev/sdb
    Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
    Building a new DOS disklabel. Changes will remain in memory only, until you decide to write them. After that, of course, the previous content won't be recoverable.
    
    Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
    
    Command (m for help): p
    
    Disk /dev/sdb: 1073 MB, 1073741824 bytes
    34 heads, 61 sectors/track, 1011 cylinders
    Units = cylinders of 2074 * 512 = 1061888 bytes
    
       Device Boot           Start       End      Blocks     Id  System
    
    Command (m for help): n
    Command action
      e  extended
      p  primary partition (1-4)
    p
    Partition number (1-4): 1
    First cylinder (1-1011, default 1):
    Using default value 1
    Last cylinder of +size or +sizeM or +sizeK (1-1011, default 1011): +256M
    
    Command (m for help): w
    The partition table has been altered!
    
    Calling ioctl() to re-read partition table.
    Syncing disks.
    #
    
  3. Enter the following command to create a 256 MB partition on the second raw device, /dev/sdc. This partition will be used for the OCR mirror. Use the same responses as shown in step 2. Place the OCR mirror on a different disk and disk controller than the OCR.

    # /sbin/fdisk /dev/sdc
    
  4. Use the fdisk command to create 256 MB partitions on the raw devices /dev/sdd, /dev/sde and /dev/sdf. These partitions will be used for the voting disk files. Put each file on a different disk and controller.

    # /sbin/fdisk /dev/sdd
    # /sbin/fdisk /dev/sde
    # /sbin/fdisk /dev/sdf
    

    Each time you run the command, use the same responses as in step 2.

  5. As the root user on docrac1, edit the /etc/sysconfig/rawdevices file and add the mappings for the raw devices used by Oracle Clusterware. The following example also shows the mappings for ASM:

    # raw device bindings
    # format:  <rawdev> <major> <minor>
    #          <rawdev> <blockdev>
    # example: /dev/raw/raw1 /dev/sda1
    #          /dev/raw/raw2 8 5
    #OCR Devices
    /dev/raw/raw1   /dev/sdb1       
    /dev/raw/raw2   /dev/sdc1       
    #Voting Disk Devices
    /dev/raw/raw3   /dev/sdd1       
    /dev/raw/raw4   /dev/sde1       
    /dev/raw/raw5   /dev/sdf1       
    #ASM Disk Devices
    /dev/raw/raw6   /dev/sdg
    /dev/raw/raw7   /dev/sdh
    /dev/raw/raw8   /dev/sdi
    

    You must create at least two partitions: one for the OCR and one for a voting disk. In steps 2 through 4, you created two OCR files and three voting disk files to improve the availability of your Oracle RAC database. The minimum size for a voting disk file is 25 MB.

  6. As the root user, on the node docrac1, enable the raw devices so that the mappings become effective at the operating system level using the following command:

    # /sbin/service rawdevices start
    
  7. On the node docrac2, as the root user, run the partprobe command for each of the disks that you partitioned in steps 2, 3, and 4. For example, if you configured the disks /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde, and /dev/sdf, then you would run the following commands:

    /sbin/partprobe /dev/sdb
    /sbin/partprobe /dev/sdc
    /sbin/partprobe /dev/sdd
    /sbin/partprobe /dev/sde
    /sbin/partprobe /dev/sdf
    

    This forces the operating system on the other node in the cluster to refresh its picture of the shared disk partitions.

  8. Repeat step 5 as the root user on docrac2.

  9. As the root user, on the node docrac2, start the raw devices so they are visible at the operating system level using the following command:

    # /sbin/service rawdevices start
    
  10. As the root user, on the node docrac1, enter commands similar to the following to set the owner, group, and permissions on the newly created device files:

    chown root:oinstall   /dev/raw/raw[1-2]   # OCR devices
    chown oracle:oinstall /dev/raw/raw[3-5]   # voting disk devices
    chmod 640 /dev/raw/raw[1-2]
    chmod 640 /dev/raw/raw[3-5]
    chown oracle:dba /dev/sdg
    chown oracle:dba /dev/sdh
    chown oracle:dba /dev/sdi
    chmod 660 /dev/sdg
    chmod 660 /dev/sdh
    chmod 660 /dev/sdi
    

    Repeat this step on the node docrac2.
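As an aside on the fdisk session in step 2: the +256M size request is rounded up to whole cylinders. With the example geometry shown there (1,061,888 bytes per cylinder), the allocation works out as follows:

```shell
# Ceiling division: cylinders needed to cover a 256 MB partition request
# on a disk with 1061888 bytes per cylinder (geometry from the example).
bytes_per_cyl=1061888
want_bytes=$(( 256 * 1024 * 1024 ))
cylinders=$(( (want_bytes + bytes_per_cyl - 1) / bytes_per_cyl ))
echo "$cylinders cylinders"
```

Because the partition ends on a cylinder boundary, the actual size can differ slightly from the requested 256 MB.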

Configuring Raw Devices on Red Hat Enterprise Linux 4.0

Starting with the Linux 2.6 kernel, distributions do not support raw devices in the kernel by default. However, Red Hat Enterprise Linux 4.0 continues to provide raw device support.

To configure raw devices if you are using Red Hat Enterprise Linux 4.0:

  1. To confirm that raw devices are enabled, enter the following command:

    # chkconfig --list
    
  2. Scan the output for raw devices. If you do not find raw devices, then use the following command to enable the raw device service:

    # chkconfig --level 345 rawdevices on
    
  3. After you confirm that the raw devices service is running, you should change the default ownership of raw devices. When you restart a Red Hat Enterprise Linux 4.0 system, ownership and permissions on raw devices revert by default to the root user. If you are using raw devices with this operating system for your Oracle Clusterware files, then you need to override this default.

    To ensure correct ownership of these devices when the operating system is restarted, create a new file in the /etc/udev/permissions.d directory, called oracle.permissions, and enter the raw device permissions information. Using the example device names discussed in step 5 of the previous section, the following is an example of the contents of /etc/udev/permissions.d/oracle.permissions:

    # OCR
    raw/raw[12]:root:oinstall:0640
    # Voting Disks
    raw/raw[3-5]:oracle:oinstall:0640
    # ASM
    raw/raw[6-8]:oracle:dba:0660
    
  4. After you create the oracle.permissions file, the permissions on the raw devices are set automatically the next time the system restarts. To make the permissions take effect immediately, without restarting the system, use the chown and chmod commands:

    chown root:oinstall /dev/raw/raw[12]
    chmod 640 /dev/raw/raw[12]
    chown oracle:oinstall /dev/raw/raw[3-5]
    chmod 640 /dev/raw/raw[3-5]
    chown oracle:dba /dev/raw/raw[6-8]
    chmod 660 /dev/raw/raw[6-8]
    

Choosing an Oracle Base Directory

OUI creates the Oracle base directory for you in the location you specify. The Oracle base directory (ORACLE_BASE) acts as a top-level directory for Oracle software installations. Optimal Flexible Architecture (OFA) guidelines recommend that you use a path similar to the following for the Oracle base directory:

/mount_point/app/oracle

In the preceding path example, the variable mount_point is the mount point directory for the file system where you intend to install the Oracle software.

The file system that you use for the Oracle base directory must have at least 1.5 GB of available disk space for installing the Oracle Database software. The path to the Oracle base directory must be the same on all nodes.

For Red Hat Linux systems, you can use the df -h command to determine the available disk space on each mounted file system. Choose a file system that has sufficient available space. For the sample installation described in this guide, the chosen mount point must have at least 3 GB of available space, for installing Oracle RAC and Oracle ASM in separate home directories. The examples in this guide use /opt/oracle/10gR2 for the Oracle base directory.
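The df check can be scripted. This sketch measures the file system containing the current directory (substitute your intended mount point, for example /opt/oracle), and the 3 GB threshold matches the sample installation described above:

```shell
# Print the available kilobytes on the file system containing a directory,
# using the portable (-P) output format of df.
avail_kb() {
  df -P -k "$1" | awk 'NR == 2 { print $4 }'
}

need_kb=$(( 3 * 1024 * 1024 ))   # 3 GB, expressed in KB
free_kb=$(avail_kb .)            # on a real system: avail_kb /opt/oracle
if [ "$free_kb" -ge "$need_kb" ]; then
  echo "sufficient space for the Oracle base directory"
else
  echo "only ${free_kb} KB available; choose another file system"
fi
```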

Choosing an Oracle Clusterware Home Directory

Oracle Universal Installer (OUI) installs Oracle Clusterware into a directory structure referred to as CRS_home. This home is separate from the home directories for other Oracle products installed on the same server. OUI creates the Oracle Clusterware home directory for you. Before you start the installation, make sure that you have sufficient disk space on a file system for the Oracle Clusterware directory, and that the Oracle Clusterware home directory is owned by root.

The file system that you use for the Oracle Clusterware home directory must have at least 120 MB of available disk space. The path to the Oracle Clusterware home directory must be the same on all nodes.

For Red Hat Linux, you can use the df -h command to determine the available disk space on each mounted file system. Choose a file system that has appropriate available space. For the examples in this guide, the directory /opt/oracle/crs will be used for the Oracle Clusterware home directory.

Note:

Ensure the Oracle Clusterware home directory is not a subdirectory of the ORACLE_BASE directory.