Oracle® Application Server 10g Installation Guide
10g (9.0.4) for Solaris Operating System (SPARC)
Part No. B10427-01
9 Installing in High Availability Environments

This chapter describes how to install OracleAS Infrastructure 10g in the following high availability environments:

  • Section 9.2, "OracleAS Cold Failover Cluster"

  • Section 9.3, "OracleAS Active Failover Cluster"

  • Section 9.4, "OracleAS Disaster Recovery"

Section 9.1, "Requirements for High Availability Environments" describes requirements applicable to all of these high availability environments.

9.1 Requirements for High Availability Environments

This section describes the requirements that you have to meet before you can install Oracle Application Server in an OracleAS Active Failover Cluster or OracleAS Cold Failover Cluster environment. In addition to these common requirements, each environment has its own specific requirements. See the individual sections for details.


Note:

You still need to meet the requirements listed in Chapter 4, "Requirements", plus the requirements specific to the high availability environment that you plan to use.

The common requirements are described in the following subsections.

9.1.1 Check Minimum Number of Nodes

You need at least two nodes in a high availability environment. If a node fails for any reason, the second node takes over.

9.1.2 Check That Clusterware Is Running

Each node in a cluster must be running clusterware such as Sun Cluster, VERITAS Cluster Server, or Fujitsu-Siemens PrimeCluster. For the official list of certified clusterware, visit the Certify section of OracleMetaLink (http://metalink.oracle.com).

To check that the clusterware is running, use the command appropriate for your clusterware. For example, if you are running Sun Cluster, use the scstat command to get the status of the nodes in the cluster.
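
For example, on Sun Cluster 3.0, the following command shows the status of the cluster nodes (a sketch; the exact command and options depend on your clusterware and its version):

prompt> scstat -n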

9.1.3 Check That Groups Are Defined Identically on All Nodes

Check that the /etc/group file on all nodes in the cluster contains the operating system groups that you plan to use. You should have one group for the oraInventory directory, and one or two groups for database administration. The group names and the group IDs must be the same for all nodes.

See Section 4.6, "Operating System Groups" for details.
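
For example, to compare a group's entry across nodes, you might run the following on each node (the group name oinstall and group ID 5000 are illustrative; the output must match on all nodes):

prompt> grep oinstall /etc/group
oinstall::5000:oracle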

9.1.4 Check the Properties of the oracle User

Check that the oracle operating system user, which you log in as to install Oracle Application Server, has the following properties:

  • Belongs to the oinstall group and to the osdba group. The oinstall group is for the oraInventory directory, and the osdba group is a database administration group. See Section 4.6, "Operating System Groups" for details.

  • Has write privileges on remote directories.
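
A quick way to confirm the group memberships (assuming the groups are named oinstall and dba; substitute your own names) is to run the groups command on each node:

prompt> groups oracle
dba oinstall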

9.1.5 Check That Oracle UNIX Distributed Lock Manager Is Version 3.3.4.5 or Higher


Note:

  • For OracleAS Active Failover Cluster, this step is required.

  • For OracleAS Cold Failover Cluster, this step is recommended, but not required.


Ensure that all the clustered nodes on which you plan to install Oracle Application Server are running Oracle UNIX Distributed Lock Manager (ORCLudlm) version 3.3.4.5 or higher.

If the computers are not running ORCLudlm version 3.3.4.5 or higher, the cluster reconfiguration process may hang and cause all nodes in the cluster to stop providing database services.

To check, run the following command:

prompt> pkginfo -l ORCLudlm | grep VERSION
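
The output should include a VERSION line like the following (the exact version string varies by release):

   VERSION:  3.3.4.6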

If you do not have ORCLudlm version 3.3.4.5 or higher, or if you do not have ORCLudlm at all, you can install it from Oracle Application Server Disk 1. You need to install it on all nodes in the cluster.

If you have Sun Cluster 3.0, follow these steps to install ORCLudlm:

  1. If the cluster node is running an earlier version of ORCLudlm, shut down all clients of the existing ORCLudlm.

  2. Become the root user and reboot the cluster node in non-cluster mode.

    prompt> su
    Password: root_password
    # scswitch -S -h nodename
    # shutdown -g 0 -y
    ... Wait for the "ok" prompt.
    ok boot -x
    
    
  3. Copy 904disk1/racpatch/ORCLudlm.tar.Z to a directory on your computer.

    The following example copies it to the /opt/oracle directory:

    prompt> cp <cdrom_mount_point>/904disk1/racpatch/ORCLudlm.tar.Z /opt/oracle
    
    
  4. Uncompress and extract the file.

    prompt> cd /opt/oracle
    prompt> zcat ORCLudlm.tar.Z | tar xvf -
    
    
  5. Install the package. You need to run the pkgadd command as the root user.

    prompt> su
    Password: root_password
    # cd /opt/oracle
    # pkgadd -d . ORCLudlm
    
    

    This gives a message like the following. In this example, type '1' at the end of the message to select and install the ORCLudlm package.

    The following packages are available:
    1  ORCLudlm  Oracle UNIX Distributed Lock Manager
                 (sparc) Dev Release 01/10/03, 3.3.4.6
    
    Select package(s) you wish to process (or 'all' to process all packages). (default: all) [?, ??, q]: 1
    
    
  6. After you have installed ORCLudlm successfully, reboot the cluster node in cluster mode.

    prompt> su
    Password: root_password
    # shutdown -g 0 -y -i 6
    
    

On Sun Cluster 3.0, the ORCLudlm configuration file is:
/etc/opt/SUNWcluster/conf/udlm.conf.

On Sun Cluster 3.0, the ORCLudlm log file is:
/var/cluster/ucmm/dlm_<nodename>/logs/dlm.log.

9.1.6 Check for Previous Oracle Installations on All Nodes

Check that all the nodes where you want to install Oracle Application Server in a high availability configuration do not have existing oraInventory directories.

You need to do this because you want the installer to prompt you to enter a location for the oraInventory directory. The location of the existing oraInventory directory might not be ideal for the Oracle Application Server instance that you are about to install. For example, in OracleAS Cold Failover Cluster, you want the oraInventory directory to be on the shared storage. If the installer finds an existing oraInventory directory, it will automatically use it and will not prompt you to enter a location.

To check if a node contains an oraInventory directory that could be detected by the installer:

  1. On each node, check for the /var/opt/oracle/oraInst.loc file.

    If a node does not contain the file, then it does not have an oraInventory directory that will be used by the installer. You can check the next node.

  2. For nodes that contain the oraInst.loc file, rename the oracle directory to something else so that the installer does not see it. The installer then prompts you to enter a location for the oraInventory directory.

    The following example renames the oracle directory to oracle.orig (you need to be root to do this):

    prompt> su
    Password: root_password
    # cd /var/opt
    # mv oracle oracle.orig
    
    

When you run the installer to install Oracle Application Server, the installer creates a new /var/opt/oracle directory and new files in it. You might need both oracle and oracle.orig directories. Do not delete either one or rename one over the other.

The installer uses the /var/opt/oracle directory and its files. Be sure that the right oracle directory is in place before running the installer (for example, if you are deinstalling or expanding a product).

9.2 OracleAS Cold Failover Cluster

An OracleAS Cold Failover Cluster environment (Figure 9-1) consists of:

  • Two nodes in a hardware cluster

  • Shared storage, accessible from both nodes, that holds the OracleAS Infrastructure 10g files

  • A virtual hostname and virtual IP address associated with the active node

During normal operation, node 1, which is the primary node, is the active node. It mounts the shared storage to access the OracleAS Infrastructure 10g files, runs OracleAS Infrastructure 10g processes, and handles all requests.

If node 1 goes down for any reason, the clusterware fails over the OracleAS Infrastructure 10g processes on node 1 to node 2. Node 2 becomes the active node, mounts the shared storage, runs the processes, and handles all requests.

To access the active node in an OracleAS Cold Failover Cluster, clients, including middle tier components and applications, use the virtual hostname associated with the OracleAS Cold Failover Cluster. The virtual hostname is associated with the active node (node 1 during normal operation, node 2 if node 1 goes down). Clients do not need to know which node (primary or secondary) is servicing requests.

You also use the virtual hostname in URLs that access the infrastructure. For example, if vhost.oracle.com is the name of the virtual host, the URLs for the Oracle HTTP Server and the Application Server Control would look like the following:

URL for                             Example URL
Oracle HTTP Server Welcome page     http://vhost.oracle.com:7777
Oracle HTTP Server, secure mode     https://vhost.oracle.com:4443
Application Server Control          http://vhost.oracle.com:1810

Figure 9-1 OracleAS Cold Failover Cluster Environment


The rest of this section describes these procedures:

  • Section 9.2.1, "Setting up an OracleAS Cold Failover Cluster Environment"

  • Section 9.2.2, "Installing OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster"

  • Section 9.2.3, "Performing Post-Installation Steps for OracleAS Cold Failover Cluster"

  • Section 9.2.4, "Installing Middle Tiers Against an OracleAS Cold Failover Cluster Infrastructure"

9.2.1 Setting up an OracleAS Cold Failover Cluster Environment

Before you can install OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster, perform the procedures described in the following subsections.

Also, ensure that you meet the requirements described in Section 9.1, "Requirements for High Availability Environments".

9.2.1.1 Verify Additional Clusterware Requirements

In an OracleAS Cold Failover Cluster environment, for a node to automatically fail over to another node, you need the appropriate OracleAS Infrastructure 10g clusterware agent from the clusterware vendor. Without the agent, you have to fail over manually. Table 9-1 lists some clusterware agents:

Table 9-1 Examples of Clusterware Agents for OracleAS Cold Failover Cluster

Clusterware                 Agent
Sun Cluster                 Sun Cluster Data Service
VERITAS Cluster Server      Service Group

Check the Certify section of OracleMetaLink (http://metalink.oracle.com) for the availability of the clusterware agent for your clusterware.

9.2.1.2 Map the Virtual Hostname and Virtual IP Address

Each node in an OracleAS Cold Failover Cluster environment is associated with its own physical IP address. In addition, the active node in the cluster is associated with the virtual hostname and virtual IP address. This allows clients to access the OracleAS Cold Failover Cluster using the virtual hostname.

Virtual hostnames and virtual IP addresses are any valid hostname and IP address in the context of the subnet containing the hardware cluster.


Note:

You map the virtual hostname and virtual IP address only to the active node. Do not map the virtual hostname and IP address to both the active and secondary nodes at the same time. Only when you fail over do you map the virtual hostname and IP address to the secondary node, which then becomes the active node.

The following example configures a virtual hostname called vhost.oracle.com, with a virtual IP of 138.1.12.191:

  1. Become the root user.

    prompt> su
    Password: root_password
    
    
  2. Determine the public network interface.

    # ifconfig -a
    lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    lo0:1: flags=1008849<UP,LOOPBACK,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 8232 index 1
            inet 172.16.193.1 netmask ffffffff
    ge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 138.1.13.146 netmask fffffc00 broadcast 138.1.15.255
            ether 8:0:20:fd:1:23
    hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
            inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
            ether 8:0:20:fd:1:23
    hme0:2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
            inet 172.16.194.6 netmask fffffffc broadcast 172.16.194.7
    ge1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
            inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
            ether 8:0:20:fd:1:23
    
    

    From the output, ge0 is the public network interface: it is neither a loopback interface nor a private interface.

  3. Add the virtual IP to the ge0 network interface.

    # ifconfig ge0 addif 138.1.12.191 up
    
    

    "ge0" and the IP address are values specific to this example. Replace them with values appropriate for your cluster.

  4. Check that the new interface was added:

    # ifconfig -a
    lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    lo0:1: flags=1008849<UP,LOOPBACK,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 8232 index 1
            inet 172.16.193.1 netmask ffffffff
    ge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 138.1.13.146 netmask fffffc00 broadcast 138.1.15.255
            ether 8:0:20:fd:1:23
    ge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 138.1.12.191 netmask ffff0000 broadcast 138.1.255.255
    hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
            inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
            ether 8:0:20:fd:1:23
    hme0:2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
            inet 172.16.194.6 netmask fffffffc broadcast 172.16.194.7
    ge1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
            inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
            ether 8:0:20:fd:1:23
    
    

    The virtual IP appears in the ge0:1 entry. During installation, when you enter "vhost.oracle.com" as the virtual hostname in the Specify High Availability Addressing screen, the installer checks that "vhost.oracle.com" resolves to a valid, configured interface.


On Failover

If the active node fails, then the secondary node takes over. If you do not have a clusterware agent to map the virtual IP from the failed node to the secondary node, then you have to do it manually. You have to remove the virtual IP mapping from the failed node, and map it to the secondary node.

  1. On the failed node, become superuser and remove the virtual IP.

    If the node has failed completely (that is, it does not boot up), you can skip this step and go to step 2. If the node has failed partially (for example, because of disk or memory problems) but is still ping-able, you have to perform this step.

    prompt> su
    Password: root_password
    # ifconfig ge0 removeif 138.1.12.191
    
    

    "ge0" and the IP address are values specific to this example. Replace them with values appropriate for your cluster.

  2. On the secondary node, add the virtual IP to the ge0 network interface.

    # ifconfig ge0 addif 138.1.12.191 up
    
    

    "ge0" and the IP address are values specific to this example. Replace them with values appropriate for your cluster.

  3. On the secondary node, check that the new interface was added:

    # ifconfig -a
    ...
    ge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 138.1.12.191 netmask ffff0000 broadcast 138.1.255.255
    ...
    

9.2.1.3 Set Up a File System That Can Be Mounted from Both Nodes

Although the hardware cluster has shared storage, you need to create a file system on this shared storage such that both nodes of the OracleAS Cold Failover Cluster can mount this file system. On this file system, you place the following directories:

  • OracleAS Infrastructure 10g

  • The oraInventory directory and the jre/1.1.8 directory. The installer automatically installs the jre directory at the same level as the oraInventory directory.

    For example, if you specify /mnt/app/oracle/oraInventory as the oraInventory directory, the installer installs the jre directory as /mnt/app/oracle/jre. The installer installs the 1.1.8 directory within the jre directory.

For disk space requirements for OracleAS Infrastructure 10g, see Section 4.1, "System Requirements".

If you are running a volume manager on the cluster to manage the shared storage, refer to the volume manager documentation for steps to create a volume. Once a volume is created, you can create the file system on that volume.

If you do not have a volume manager, you can create a file system on the shared disk directly. Ensure that the hardware vendor supports this, that the file system can be mounted from either node of the OracleAS Cold Failover Cluster, and that the file system is repairable from either node in case of a crash.

To check that the file system can be mounted from either node, do the following steps:

  1. Set up and mount the file system from node 1.

  2. Unmount the file system from node 1.

  3. Mount the file system from node 2 using the same mount point that you used in step 1.

  4. Unmount it from node 2, and mount it on node 1, because you will be running the installer from node 1.
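
The following commands sketch these steps, assuming a VxFS file system on a hypothetical volume /dev/vx/dsk/ias_dg/oasvol and the mount point /mnt/app; substitute your own device, file system type, and mount point:

# mount -F vxfs /dev/vx/dsk/ias_dg/oasvol /mnt/app     run on node 1
# umount /mnt/app                                      run on node 1
# mount -F vxfs /dev/vx/dsk/ias_dg/oasvol /mnt/app     run on node 2
# umount /mnt/app                                      run on node 2
# mount -F vxfs /dev/vx/dsk/ias_dg/oasvol /mnt/app     run on node 1 again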


Note:

Only one node of the OracleAS Cold Failover Cluster should mount the file system at any given time. File system configuration files on all nodes of the cluster should not include an entry for the automatic mount of the file system upon a node reboot or execution of a global mount command. For example, on Solaris, do not include an entry for this file system in the /etc/vfstab file.

9.2.2 Installing OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster

For the OracleAS Cold Failover Cluster solution, you must install both the OracleAS Metadata Repository and the Identity Management components on the same computer at the same time by selecting the Identity Management and OracleAS Metadata Repository option in the Select Installation Type screen. This option creates a new database for the OracleAS Metadata Repository and a new Oracle Internet Directory.


Note:

For the OracleAS Cold Failover Cluster solution, you must install a new database (for the OracleAS Metadata Repository) and Oracle Internet Directory. You cannot use an existing database or Oracle Internet Directory for OracleAS Cold Failover Cluster solutions.

Follow this procedure to install OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster environment:

Table 9-2 Steps for Installing OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster


Screen Action
1. -- Start up the installer. See Section 5.15, "Starting the Oracle Universal Installer" for details.
2. Welcome Click Next.
3. Specify Inventory Directory This screen appears only if this is the first installation of any Oracle product on this computer.

Enter the full path for the inventory directory: Enter a full path to a directory where you want the installer to store its files. The installer uses these files to keep track of all Oracle products that are installed on this computer. Enter a directory that is different from the Oracle home directory.

Note: You must enter a directory in the file system that can be mounted from either node in the OracleAS Cold Failover Cluster environment.

Example: /mnt/app/oracle/oraInventory

Click OK.

4. UNIX Group Name This screen appears only if this is the first installation of any Oracle product on this computer.

Enter the name of the operating system group to have write permission for the oraInventory directory.

Example: oinstall

Click Next.

5. Run orainstRoot.sh This screen appears only if this is the first installation of any Oracle product on this computer.

Run the orainstRoot.sh script in a different shell as the root user. The script is located in the oraInventory directory.

Click Continue.

6. Specify File Locations Name: Enter a name to identify this Oracle home.

Example: OH_INFRA_904

Destination Path: Enter the full path to the destination directory. This is the Oracle home.

Notes:

  • You must enter a directory in the file system that can be mounted from either node in the OracleAS Cold Failover Cluster environment.

  • You must enter a new Oracle home name and directory. Do not select an existing Oracle home from the drop-down list. If you select an existing Oracle home, the installer will not display the next screen, Specify Hardware Cluster Installation Mode.

Example: /mnt/app/oracle/OraInfra_904

Click Next.

7. Specify Hardware Cluster Installation Mode Select Single Node or Cold Failover Cluster Installation because you are installing OracleAS Infrastructure 10g on the shared storage. Click Next.

If you do not see this screen, the installer was not able to determine that the current node is running clusterware (see Section 9.1.2, "Check That Clusterware Is Running"). You have two choices:

  • Continue the installation. This screen is optional for installing in OracleAS Cold Failover Cluster environments. You just need to select High Availability Addressing in the Select Configuration Options screen in step 12. Also, ensure that your clusterware is running.

  • Exit the installer, and install Oracle UDLM as documented in Section 9.1.5, "Check That Oracle UNIX Distributed Lock Manager Is Version 3.3.4.5 or Higher", then restart the installation. This screen should appear.

8. Select a Product to Install Select OracleAS Infrastructure 10g to install an infrastructure.

If you need to install additional languages, click Product Languages. See Section 5.6, "Installing Additional Languages" for details.

Click Next.

9. Select Installation Type Select Identity Management and OracleAS Metadata Repository. Click Next.
10. Preview of Steps for Infrastructure Installation This screen lists the screens that the installer will display. Click Next.
11. Confirm Pre-Installation Requirements Verify that you meet all the listed requirements. Click Next.
12. Select Configuration Options Select Oracle Internet Directory.

Select OracleAS Single Sign-On.

Select Delegated Administration Services.

Select Oracle Directory Integration and Provisioning.

Do not select OracleAS Certificate Authority. OracleAS Certificate Authority is not supported in OracleAS Cold Failover Cluster environments.

Select High Availability Addressing. If the installer displayed the Specify Hardware Cluster Installation Mode screen earlier, this option is greyed out and selected by default.

If the installer did not display the Specify Hardware Cluster Installation Mode screen, the High Availability Addressing option will not be greyed out. You must select this option.

Click Next.

13. Specify Namespace in Internet Directory Select the suggested namespace, or enter a custom namespace for the location of the default Identity Management realm.

Ensure the value shown in Suggested Namespace meets your deployment needs. If not, enter the desired value in Custom Namespace. See Section 6.15, "What Do I Enter in the "Specify Namespace in Internet Directory" Screen?".

Click Next.

14. Specify High Availability Addressing Note: This is a critical screen when installing the infrastructure in an OracleAS Cold Failover Cluster. If you do not see this screen, return to the Select Configuration Options screen and ensure that you selected High Availability Addressing.

Enter the virtual hostname for the OracleAS Cold Failover Cluster environment.

Example: vhost.oracle.com

Click Next.

15. Specify Privileged Operating System Groups This screen appears if you are running the installer as a user who is not in the OSDBA or the OSOPER operating system groups.

Database Administrator (OSDBA) Group:

Example: dbadmin

Database Operator (OSOPER) Group:

Example: dbadmin

Click Next.

16. Database Identification Global Database Name: Enter a name for the OracleAS Metadata Repository database. Append the domain name of your computer to the database name.

Example: asdb.oracle.com

SID Prefix: Enter the system identifier for the OracleAS Metadata Repository database. Typically this is the same as the global database name, but without the domain name. The SID cannot be longer than eight characters.

Example: asdb

Click Next.

17. Set SYS and SYSTEM Passwords Set the passwords for these database users. Click Next.
18. Database File Location Enter or select a directory for database files: Enter a directory where you want the installer to create data files for the OracleAS Metadata Repository database.

Note: You must enter a directory in the file system that can be mounted from either node in the OracleAS Cold Failover Cluster environment.

Click Next.

19. Database Character Set Select Use the default character set. Click Next.
20. Specify Instance Name and ias_admin Password Instance Name: Enter a name for this infrastructure instance. Instance names can contain the $ and _ (underscore) characters in addition to any alphanumeric characters. If you have more than one Oracle Application Server instance on a computer, the instance names must be unique.

Example: infra_904

ias_admin Password and Confirm Password: Enter and confirm the password for the ias_admin user. This is the administrative user for this infrastructure instance.

See Section 5.8, "The ias_admin User and Restrictions on its Password" for password requirements.

Example: welcome99

Click Next.

21. -- Finish the installation. See Section 6.25, "Install Fragment: The Last Few Screens of the Installation" for details.

9.2.3 Performing Post-Installation Steps for OracleAS Cold Failover Cluster

9.2.3.1 Copy the /var/opt/oracle Directory to the Other Node

After the OracleAS Infrastructure 10g installation is complete, copy the /var/opt/oracle directory from the node where you performed the installation to the other node in the OracleAS Cold Failover Cluster. This ensures that you can run the installer to update the OracleAS Infrastructure 10g from either node in the cluster.

Be sure to keep the two /var/opt/oracle directories in sync. Whenever you run the installer to update the infrastructure, you need to copy the oracle directory to the other node.
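
For example, after each installer session you might synchronize the directory with a command like the following, run as root (node2 is a hypothetical name for the other cluster node; rcp -r works as well if secure shell is not configured):

# scp -rp /var/opt/oracle node2:/var/opt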

The /var/opt/oracle directory is not used during runtime by Oracle Application Server. It is used only by the installer.

9.2.3.2 Create a Clusterware Agent for Automatic Failover

An OracleAS Cold Failover Cluster environment provides the framework for a manual failover of OracleAS Infrastructure 10g. To achieve automatic failover, you must set up an agent using the clusterware. For example, the secondary node can monitor the heartbeat of the primary node; when the secondary node detects that the primary node is down, the virtual IP address, the shared storage, and all the OracleAS Infrastructure 10g processes fail over to the secondary node.

For examples of these agents, refer to the clusterware certification page at OracleMetaLink (http://metalink.oracle.com).

9.2.4 Installing Middle Tiers Against an OracleAS Cold Failover Cluster Infrastructure

For middle tiers to work with an OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster, you can install the middle tiers on computers outside the cluster, or on nodes within the cluster.

If you choose to install middle tiers on OracleAS Cold Failover Cluster nodes, either on the local storage or shared storage, note that the middle tiers will not be able to take advantage of any cluster benefits. If the active node fails, the middle tiers will not fail over to the other node. Middle tiers have their own high availability solutions: see the Oracle Application Server 10g High Availability Guide for details.


Note:

Oracle recommends that you do not install middle tiers on the same shared disk where you installed the OracleAS Infrastructure 10g. The reason is that when this shared disk fails over to the secondary node, the middle tier becomes inaccessible.

The best solution is to install and run middle tiers on nodes outside the OracleAS Cold Failover Cluster.

But if you want to run a middle tier on either the primary or secondary node, install it on a local disk or on a disk other than the one where you installed the OracleAS Infrastructure 10g.


9.2.4.1 If You Plan to Install Middle Tiers on OracleAS Cold Failover Cluster Nodes

If you plan to install a middle tier on an OracleAS Cold Failover Cluster node (primary or secondary), perform the tasks described in the following subsections before installing the middle tier.

9.2.4.1.1 Create a staticports.ini File for the Middle Tier

Ensure that the ports used by the middle tier are not the same as the ports used by the infrastructure. The reason is that the infrastructure can fail over from the primary to the secondary node (and vice versa), and there must not be any port conflicts on either node. The same ports must be reserved for the infrastructure on both nodes.

If the infrastructure is running on the same node where you want to install the middle tier, the installer can detect which ports are in use and select different ports for the middle tier. For example, if the infrastructure is running on the primary node, and you run the installer on the primary node to install the middle tier, then the installer can assign different ports for the middle tier.

However, if the infrastructure is running on a node different from where you want to install the middle tier, the installer cannot detect which ports are used by the infrastructure. For example, if the infrastructure is running on the primary node but you want to install the middle tier on the secondary node, the installer is unable to detect which ports the infrastructure is using. In this situation, you need to set up a staticports.ini file to specify port numbers for the middle tier. See Section 4.5.2, "Using Custom Port Numbers (the "Static Ports" Feature)" for details.

To see which ports the infrastructure is using, view the ORACLE_HOME/install/portlist.ini file, where ORACLE_HOME refers to the directory where you installed the infrastructure.
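
For example, if the infrastructure Oracle home is /mnt/app/oracle/OraInfra_904 (a hypothetical path), you can list the ports it uses like this:

prompt> cat /mnt/app/oracle/OraInfra_904/install/portlist.ini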

9.2.4.1.2 Rename the /var/opt/oracle Directory Used for the Infrastructure

Set up the environment so that the middle tier will have its own inventory directory, instead of using the same inventory directory used by the infrastructure. To do this, you need to rename the /var/opt/oracle directory to something else so that the installer will prompt you to enter a new inventory directory. The following example renames it to oracle.infra.

prompt> su
Password: root_password
# cd /var/opt
# mv oracle oracle.infra

When the installer prompts for the inventory directory, specify a directory on the local storage or on a disk other than the one where you installed the OracleAS Infrastructure 10g.

When the middle tier installation is complete, do the following rename operations:

prompt> su
Password: root_password
# cd /var/opt
# mv oracle oracle.mt       see (1)
# mv oracle.infra oracle    see (2)

(1) This command renames the oracle directory created by the installer when it installed the middle tier.

(2) This command renames the oracle.infra directory back to oracle.

The /var/opt/oracle directory is not used during Oracle Application Server runtime. The only time you need it is when you run the installer (for example, to de-install an instance or to expand an instance).

Be sure the correct oracle directory is in place before you run the installer.

9.2.4.2 Procedure for Installing Middle Tiers Against an OracleAS Cold Failover Cluster Infrastructure

To install middle tiers to work with an OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster, follow the procedures as documented in Chapter 7, "Installing Middle Tiers", but with these differences:

  • In the Register with Oracle Internet Directory screen, enter the virtual hostname in the Hostname field.

  • If you are installing the middle tier on an OracleAS Cold Failover Cluster node, you must also follow the additional requirements described in Section 9.2.4.1, "If You Plan to Install Middle Tiers on OracleAS Cold Failover Cluster Nodes".

9.3 OracleAS Active Failover Cluster


Note:

In the initial release of Oracle Application Server 10g (9.0.4), OracleAS Active Failover Cluster is a Limited Release feature. Please check OracleMetaLink (http://metalink.oracle.com) for the most current certification status of this feature or consult your sales representative before deploying this feature in a production environment.

You increase the availability of OracleAS Infrastructure 10g by installing and running it in an OracleAS Active Failover Cluster environment (Figure 9-2). In an OracleAS Active Failover Cluster, the OracleAS Metadata Repository runs on a Real Application Clusters database, and the Identity Management components run on the same nodes in the cluster.

To create this environment, you install the OracleAS Infrastructure 10g components—OracleAS Metadata Repository and Identity Management components—in a clustered environment.

To use OracleAS Active Failover Cluster, you need the following items:

  • A hardware cluster with two or more nodes, running certified clusterware

  • Oracle UNIX Distributed Lock Manager (ORCLudlm) version 3.3.4.5 or higher on all nodes (see Section 9.1.5)

  • A load balancer with a virtual server name (see Section 9.3.1.2, "Set Up a Virtual Server Name for the Load Balancer")

  • Shared storage that can be configured as raw devices for SRVM and the OracleAS Metadata Repository (see Section 9.3.1.7 and Section 9.3.1.9)


To Learn More About Real Application Clusters

For complete information about Real Application Clusters, see the Real Application Clusters books in the database documentation library.

You can view these books on the Oracle Technology Network Web site (http://otn.oracle.com).


For the Latest News

There are some known issues related to OracleAS Active Failover Cluster. These issues are documented in the Oracle Application Server 10g Release Notes.

Figure 9-2 OracleAS Active Failover Cluster Environment



Components You Need to Install

You need to install OracleAS Infrastructure 10g components on the clustered nodes. This means that you cannot use an existing database, or an existing Oracle Internet Directory. You need to have the installer create a new database and Oracle Internet Directory for you.

On the Select Installation Type screen, you need to select Identity Management and OracleAS Metadata Repository. You cannot select Identity Management only, because this is not a supported option for OracleAS Active Failover Cluster.


Adding Nodes After Installation

You cannot install OracleAS Infrastructure 10g on additional nodes after the initial installation. You must select all the nodes in the cluster where you want to install OracleAS Infrastructure 10g during the initial installation.


Where the Installer Writes Files

You run the installer on any node in the OracleAS Active Failover Cluster where you want to install OracleAS Infrastructure 10g. The installer detects that the node is part of a cluster, and it displays a screen listing all the nodes in the cluster. On this screen, you select the nodes where you want to install OracleAS Infrastructure 10g. The node where you are running the installer is always selected.

The installer writes files on the local storage devices of the selected nodes and also on the shared storage device, as shown in Table 9-3:

Table 9-3 Where the Installer Writes Files in an OracleAS Active Failover Cluster

ORACLE_HOME directory: The installer writes the Oracle home directory on the local storage devices of the selected nodes. The installer uses the same path name, specified in the Specify File Locations screen, for all nodes.

oraInventory directory: The installer writes the oraInventory directory on the local storage devices of the selected nodes. The installer uses the same path name, specified in the Specify Inventory Directory screen, for all nodes.

Files for OracleAS Metadata Repository: The installer writes the database software files for the OracleAS Metadata Repository on all the selected nodes, but for the data files, the installer invokes the Database Configuration Assistant to write the data files on raw devices located on the shared storage device.

The rest of this section describes these procedures:

  • Section 9.3.1, "Setting Up the OracleAS Active Failover Cluster Environment"

  • Section 9.3.2, "Installing OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster"

  • Section 9.3.3, "Post-Installation Procedure"

  • Section 9.3.4, "Installing Middle Tiers Against an OracleAS Active Failover Cluster Infrastructure"

9.3.1 Setting Up the OracleAS Active Failover Cluster Environment

Before you install the OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster environment, perform the procedures described in the following subsections.

9.3.1.1 Set Up staticports.ini File

Each OracleAS Infrastructure 10g component must use the same port number across all nodes in the cluster. To do this, create a staticports.ini file, which enables you to specify port numbers for each component. See Section 4.5.2, "Using Custom Port Numbers (the "Static Ports" Feature)" for details.


Note:

The installer checks the availability of the ports specified in the staticports.ini file on the local node only. It does not check that the ports are free on the remote nodes. You must check yourself that these ports are free on all the nodes.
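
A minimal staticports.ini sketch follows. The directive names and port numbers shown are illustrative only; take the exact directive name for each component from Section 4.5.2, "Using Custom Port Numbers (the "Static Ports" Feature)":

Oracle HTTP Server port = 7777
Oracle HTTP Server Listen port = 7778
Application Server Control port = 1810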

9.3.1.2 Set Up a Virtual Server Name for the Load Balancer

You enter the load balancer’s virtual server name, and not the load balancer’s physical hostname, when the installer prompts for the load balancer name. See your load balancer documentation for steps on how to set up a virtual server name.

See the next section, Section 9.3.1.3, "Verify the Load Balancer’s Virtual Server Name Does Not Contain the Names of the Nodes in the Cluster", for guidelines on the virtual server name.

After the virtual server name is set up, check that the name is accessible:

prompt> ping load_balancer_virtual_name

9.3.1.3 Verify the Load Balancer’s Virtual Server Name Does Not Contain the Names of the Nodes in the Cluster

When the installer copies files to different nodes in the cluster, it replaces the current hostname in the files with the hostname of the target node. Ensure that the load balancer’s virtual server name does not contain the names of the nodes in the cluster, or the installer might change the virtual server name of the load balancer as well.

For example, if you are installing on nodes named rac-1 and rac-2, be sure that the load balancer virtual server name does not contain "rac-1" or "rac-2". When the installer is installing files to rac-2, it searches for the string "rac-1" in the files and replaces it with "rac-2". If the load balancer’s virtual server name happens to be LB-rac-1x, the installer sees the string "rac-1" in the name and replaces it with "rac-2", thus mangling the virtual server name to LB-rac-2x.

9.3.1.4 Configure the Load Balancer to Point to One Node Only

You need to configure the load balancer so that it directs all traffic only to the node where you will be running the installer. After installation, you change the configuration back so that the load balancer directs traffic to all nodes in the cluster.

9.3.1.5 Create Identical Users and Groups on All Nodes in the Cluster


Note:

This procedure is required only if you are using local users and groups. It is not required if you are using users and groups defined in a directory service, such as NIS, because the users and groups are already identical.

Create an operating system user with the same user ID on all nodes in the cluster. This is required for user equivalence to work (see Section 9.3.1.6, "Set Up User Equivalence"). When you run the installer on one node as this user, the installer needs to access the other nodes in the cluster as this user.

If you have already created the oracle user as described in Section 4.7, "Operating System User", determine its user ID so that when you create the oracle user on other nodes, you can specify the same user ID.

To determine the user ID:

prompt> id -a oracle
uid=3223(oracle) gid=8400(dba) groups=8400(dba),5000(oinstall)

The number after "uid" specifies the user ID, and the numbers after "gid" and "groups" specify the group IDs. In this example, the oracle user must have ID 3223 on all nodes, and the dba and oinstall groups must have IDs 8400 and 5000 on all nodes.

See Section 4.7, "Operating System User" and Section 4.6, "Operating System Groups" for steps on how to create users and groups.
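
For example, to create matching groups and the oracle user on another node, using the IDs from the example above (the home directory path is illustrative):

# groupadd -g 5000 oinstall
# groupadd -g 8400 dba
# useradd -u 3223 -g dba -G oinstall -d /export/home/oracle -m oracle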

9.3.1.6 Set Up User Equivalence

The installer needs user equivalence to be set up for all the nodes in the cluster. You can set up secure shell (ssh and scp) for user equivalence, or you can perform the steps in this section to enable user equivalence for rsh and rcp.

To determine which user equivalence type to use, the installer checks if secure shell is set up. If so, it uses it. Otherwise, it uses rsh and rcp.

9.3.1.6.1 To Set Up User Equivalence for rsh and rcp

Perform the following steps:

  1. On the node where you plan to run the installer, enter a line for each node name in the cluster in the following files. Be sure to include the name of the local node itself.

    • .rhosts file in the home directory of the oracle user

    • .rhosts file in the home directory of the root user (that is, /.rhosts)

    For example, if the cluster has three nodes named node1, node2, and node3, you would populate the .rhosts files with the following lines:

    node1
    node2
    node3
    

    Tip:

    Instead of writing these lines in the .rhosts files for the oracle user and for the root user, you can enter the same lines in the /etc/hosts.equiv file.

  2. Check that the user equivalence is working:

    1. Log in as the oracle user on the node where you plan to run the installer.

    2. As the oracle user, perform a remote login to each node in the cluster:

      prompt> rlogin node2
      
      

      If the command prompts you to enter a password, then the oracle user does not have identical attributes on all nodes. You need to correct this to enable the installer to copy files to the remote nodes.


Tip:

If user equivalence is not working, try modifying the .rhosts or the /etc/hosts.equiv files in the following ways to get it to work:
  • Specify the fully qualified hostname in the files:

    node1.oracle.com
    node2.oracle.com
    node3.oracle.com
    
    
  • Specify the username after the hostname. Separate the hostname from the username with a space character:

    node1.oracle.com oracle
    node2.oracle.com oracle
    node3.oracle.com oracle
    
    

    For the root user’s .rhosts file, replace "oracle" with "root".

  • You can include all these variations in the files:

    node1 oracle
    node1.oracle.com oracle
    node2 oracle
    node2.oracle.com oracle
    node3 oracle
    node3.oracle.com oracle
    
    

    For the root user’s .rhosts file, replace "oracle" with "root".


9.3.1.6.2 To Check if Secure Shell Is Configured

If you are using secure shell for host equivalency between the nodes of a cluster, make sure that the ssh and scp commands do not prompt for any user response, such as password, during execution. Check with your system administrator or the secure shell documentation for information on setting up secure shell.

After setting up secure shell, you can run these commands to check:

  • To check ssh, run these commands on each node in the cluster:

    prompt> ssh local_hostname ls /tmp
    prompt> ssh remote_hostname ls /tmp
    
    

    In the example, the ssh command runs the "ls /tmp" command on the local node and remote node. Replace local_hostname and remote_hostname with the name of the local node and remote node.

  • To check scp, run these commands on each node in the cluster:

    prompt> touch /tmp/testscp
    prompt> scp /tmp/testscp local_hostname:/tmp/testscp2
    prompt> scp /tmp/testscp remote_hostname:/tmp/testscp2
    
    

    In the example, the touch command creates a file called testscp in the /tmp directory, and the scp commands copy the file to another file (testscp2) in the same directory. Replace local_hostname with the name of the local node, and remote_hostname with the name of a remote node.

If the commands prompt for a user response during installation, it means that secure shell is not set up properly, and the installer resorts to using the equivalent rsh and rcp commands. You then need to perform the steps in Section 9.3.1.6.1, "To Set Up User Equivalence for rsh and rcp" for the installer to succeed.

9.3.1.7 Create a Raw Device or Shared File for Server Management (SRVM)

This step is required if this is the first installation of an Oracle database on the cluster. SRVM is a component of Real Application Clusters.

The raw device or shared file for SRVM must have these properties:

  • It must be accessible from all nodes in the cluster.

  • Its size must be at least 100 MB.

The command to create raw devices is specific to the volume manager you are using. For example, if you are using VERITAS Volume Manager, the command is vxassist.
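
For example, with VERITAS Volume Manager, a command like the following creates a 100 MB volume for SRVM (the disk group ias_dg and volume name srvm_100m are hypothetical):

# vxassist -g ias_dg make srvm_100m 100m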

9.3.1.8 (optional) Set the SRVM_SHARED_CONFIG Environment Variable

If OracleAS Infrastructure 10g is the first Oracle product to be installed on the cluster, set the SRVM_SHARED_CONFIG environment variable to the name of the raw device or shared file that you created for the SRVM shared configuration device.

Example (C shell):

% setenv SRVM_SHARED_CONFIG /dev/vx/rdsk/ias_dg/srvcfg

Example (Bourne or Korn shell):

$ SRVM_SHARED_CONFIG=/dev/vx/rdsk/ias_dg/srvcfg; export SRVM_SHARED_CONFIG

If you do not set this environment variable, the installer displays the Shared Configuration File Name screen, where you enter the path for the SRVM configuration device.

9.3.1.9 Create Raw Devices for the OracleAS Metadata Repository

In addition to the raw device for SRVM (see Section 9.3.1.7, "Create a Raw Device or Shared File for Server Management (SRVM)"), you need to configure raw devices on the shared storage for the OracleAS Metadata Repository database.

Table 9-4 lists the required tablespaces and system objects, their minimum sizes, and the recommended name for the raw device:

Table 9-4 Raw Devices for the OracleAS Metadata Repository

Raw Device for                       Minimum Size    Recommended Name
SYSTEM tablespace                    1024 MB         dbname_raw_system_1024m
Server parameter file                64 MB           dbname_raw_spfile_64m
TEMP tablespace                      128 MB          dbname_raw_temp_128m
UNDOTBS1 tablespace                  256 MB          dbname_raw_undotbs1_256m
UNDOTBS2 tablespace                  256 MB          dbname_raw_undotbs2_256m
DRSYS tablespace                     64 MB           dbname_raw_drsys_64m
Three control files                  64 MB each      dbname_raw_controlfile1_64m
                                                     dbname_raw_controlfile2_64m
                                                     dbname_raw_controlfile3_64m
Three redo log files per instance    64 MB each      dbname_raw_thread_lognumber_64m
PORTAL tablespace                    128 MB          dbname_raw_portal_128m
PORTAL_DOC tablespace                64 MB           dbname_raw_portaldoc_64m
PORTAL_IDX tablespace                64 MB           dbname_raw_portalidx_64m
PORTAL_LOG tablespace                64 MB           dbname_raw_portallog_64m
DCM tablespace                       256 MB          dbname_raw_dcm_256m
OCATS tablespace                     64 MB           dbname_raw_ocats_64m
DISCO_PTM5_CACHE tablespace          64 MB           dbname_raw_discoptm5cache_64m
DISCO_PTM5_META tablespace           64 MB           dbname_raw_discoptm5meta_64m
DSGATEWAY_TAB tablespace             64 MB           dbname_raw_dsgatewaytab_64m
WCRSYS_TS tablespace                 64 MB           dbname_raw_wcrsysts_64m
UDDISYS_TS tablespace                64 MB           dbname_raw_uddisysts_64m
OLTS_ATTRSTORE tablespace            128 MB          dbname_raw_oltsattrstore_128m
OLTS_BATTRSTORE tablespace           64 MB           dbname_raw_oltsbattrstore_64m
OLTS_CT_STORE tablespace             256 MB          dbname_raw_oltsctstore_256m
OLTS_DEFAULT tablespace              128 MB          dbname_raw_oltsdefault_128m
OLTS_SVRMGSTORE tablespace           64 MB           dbname_raw_oltssvrmgstore_64m
IP_DT tablespace                     128 MB          dbname_raw_ipdt_128m
IP_RT tablespace                     128 MB          dbname_raw_iprt_128m
IP_LOB tablespace                    128 MB          dbname_raw_iplob_128m
IP_IDX tablespace                    128 MB          dbname_raw_ipidx_128m
IAS_META tablespace                  256 MB          dbname_raw_iasmeta1_256m

In dbname_raw_thread_lognumber_64m, thread specifies the thread ID of the instance, and number specifies the log number (1, 2, or 3) of the instance.

9.3.1.10 Create a Text File Listing the Raw Devices

Create a text file listing the database objects and their raw device names as name-value pairs. Place the text file on the node where you plan to run the installer.

The following example shows the contents of the text file for a two-instance OracleAS Metadata Repository. If you have more than two instances, add more lines for "undotbs" and the redo log files.

system1=/dev/vx/rdsk/ias_dg/infra_raw_system_1024m
spfile1=/dev/vx/rdsk/ias_dg/infra_raw_spfile_64m
temp1=/dev/vx/rdsk/ias_dg/infra_raw_temp_128m
undotbs1=/dev/vx/rdsk/ias_dg/infra_raw_undotbs1_256m
undotbs2=/dev/vx/rdsk/ias_dg/infra_raw_undotbs2_256m
..... Create additional lines for "undotbsN" if you have more than 2 instances.
drsys1=/dev/vx/rdsk/ias_dg/infra_raw_drsys_64m
control1=/dev/vx/rdsk/ias_dg/infra_raw_controlfile1_64m
control2=/dev/vx/rdsk/ias_dg/infra_raw_controlfile2_64m
control3=/dev/vx/rdsk/ias_dg/infra_raw_controlfile3_64m
redo1_1=/dev/vx/rdsk/ias_dg/infra_raw_1_log1_64m
redo1_2=/dev/vx/rdsk/ias_dg/infra_raw_1_log2_64m 
redo1_3=/dev/vx/rdsk/ias_dg/infra_raw_1_log3_64m
redo2_1=/dev/vx/rdsk/ias_dg/infra_raw_2_log1_64m
redo2_2=/dev/vx/rdsk/ias_dg/infra_raw_2_log2_64m 
redo2_3=/dev/vx/rdsk/ias_dg/infra_raw_2_log3_64m
..... Create additional lines for "redoN" log files if you have more
..... than 2 instances.
portal1=/dev/vx/rdsk/ias_dg/infra_raw_portal_128m
portal_doc1=/dev/vx/rdsk/ias_dg/infra_raw_portaldoc_64m
portal_idx1=/dev/vx/rdsk/ias_dg/infra_raw_portalidx_64m
portal_log1=/dev/vx/rdsk/ias_dg/infra_raw_portallog_64m
dcm1=/dev/vx/rdsk/ias_dg/infra_raw_dcm_256m
ocats1=/dev/vx/rdsk/ias_dg/infra_raw_ocats_64m
disco_ptm5_cache1=/dev/vx/rdsk/ias_dg/infra_raw_discoptm5cache_64m
disco_ptm5_meta1=/dev/vx/rdsk/ias_dg/infra_raw_discoptm5meta_64m
dsgateway_tab1=/dev/vx/rdsk/ias_dg/infra_raw_dsgatewaytab_64m
wcrsys_ts1=/dev/vx/rdsk/ias_dg/infra_raw_wcrsysts_64m
uddisys_ts1=/dev/vx/rdsk/ias_dg/infra_raw_uddisysts_64m
olts_attrstore1=/dev/vx/rdsk/ias_dg/infra_raw_oltsattrstore_128m
olts_battrstore1=/dev/vx/rdsk/ias_dg/infra_raw_oltsbattrstore_64m
olts_ct_store1=/dev/vx/rdsk/ias_dg/infra_raw_oltsctstore_256m
olts_default1=/dev/vx/rdsk/ias_dg/infra_raw_oltsdefault_128m
olts_svrmgstore1=/dev/vx/rdsk/ias_dg/infra_raw_oltssvrmgstore_64m
ip_dt1=/dev/vx/rdsk/ias_dg/infra_raw_ipdt_128m
ip_rt1=/dev/vx/rdsk/ias_dg/infra_raw_iprt_128m
ip_lob1=/dev/vx/rdsk/ias_dg/infra_raw_iplob_128m
ip_idx1=/dev/vx/rdsk/ias_dg/infra_raw_ipidx_128m
ias_meta1=/dev/vx/rdsk/ias_dg/infra_raw_iasmeta1_256m

9.3.1.11 Set the DBCA_RAW_CONFIG Environment Variable

Set the DBCA_RAW_CONFIG environment variable to point to the text file. For example, if you created the file as /opt/oracle/rawdevices.txt, you can set the variable using one of these commands:

Example (C shell):

% setenv DBCA_RAW_CONFIG /opt/oracle/rawdevices.txt

Example (Bourne or Korn shell):

$ DBCA_RAW_CONFIG=/opt/oracle/rawdevices.txt; export DBCA_RAW_CONFIG

9.3.2 Installing OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster

In an OracleAS Active Failover Cluster, you install the OracleAS Metadata Repository and the Identity Management components in one installation session by selecting the "Identity Management and OracleAS Metadata Repository" option in the Select Installation Type screen. This option creates a new database for the OracleAS Metadata Repository and a new Oracle Internet Directory.


Note:

In an OracleAS Active Failover Cluster, you must install a new OracleAS Metadata Repository and Oracle Internet Directory. You cannot use an existing database or Oracle Internet Directory.

Follow this procedure to install OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster:

Table 9-5 Steps for Installing OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster


Screen Action
1. -- Start up the installer. See Section 5.15, "Starting the Oracle Universal Installer" for details.
2. Welcome Click Next.
3. Specify Inventory Directory This screen appears only if this is the first installation of any Oracle product on this computer.

Enter the full path for the inventory directory: Enter a full path to a directory where you want the installer to store its files. The installer uses these files to keep track of all Oracle products that are installed on this computer. Enter a directory that is different from the Oracle home directory.

Example: /mnt/oracle/oraInventory

Click OK.

4. UNIX Group Name This screen appears only if this is the first installation of any Oracle product on this computer.

Enter the name of the operating system group to have write permission for the oraInventory directory.

Example: oinstall

Click Next.

5. Run orainstRoot.sh This screen appears only if this is the first installation of any Oracle product on this computer.

Run the orainstRoot.sh script in a different shell as the root user. The script is located in the oraInventory directory.

Run the script on the node where you are running the installer. The installer will prompt you to run the script on other nodes later, in step 8.

Click Continue after you have run the script.

6. Specify File Locations Name: Enter a name to identify this Oracle home.

Example: OH_INFRA_904

Destination Path: Enter the full path to the destination directory. This is the Oracle home. The installer will use this path as the Oracle home for all nodes.

Note: You must enter a new Oracle home name and directory. Do not select an existing Oracle home from the drop-down list. If you select an existing Oracle home, the installer will not display the next screen, Specify Hardware Cluster Installation Mode, which is a critical screen.

Example: /mnt/oracle/OraInfra_904

Click Next.

7. Specify Hardware Cluster Installation Mode Note: This is a critical screen when installing the infrastructure in an OracleAS Active Failover Cluster. If you do not see this screen, exit the installer and check that you are running the right version of Oracle UDLM (see Section 9.1.5, "Check That Oracle UNIX Distributed Lock Manager Is Version 3.3.4.5 or Higher").

Select Active Failover Cluster Installation, and select the nodes where you want to install OracleAS Infrastructure 10g.

Click Next.

8. Run orainstRoot.sh Run the orainstRoot.sh script as the root user on the selected nodes. The script is located in the oraInventory directory on the selected nodes.

Click Continue after you have run the script on all the selected nodes.

9. Select a Product to Install Select OracleAS Infrastructure 10g to install an infrastructure.

If you need to install additional languages, click Product Languages. See Section 5.6, "Installing Additional Languages" for details.

Click Next.

10. Select Installation Type Select Identity Management and OracleAS Metadata Repository. Click Next.
11. Preview of Steps for Infrastructure Installation This screen lists the screens that the installer will display. Click Next.
12. Confirm Pre-Installation Requirements Verify that you meet all the listed requirements. Click Next.
13. Select Configuration Options Select all the components except for OracleAS Certificate Authority.

Check that High Availability Addressing is selected. It should be greyed out and selected.

Click Next.

14. Specify Namespace in Internet Directory Select the suggested namespace, or enter a custom namespace for the location of the default Identity Management realm.

Ensure the value shown in Suggested Namespace meets your deployment needs. If not, enter the desired value in Custom Namespace. See Section 6.15, "What Do I Enter in the "Specify Namespace in Internet Directory" Screen?".

Click Next.

15. Specify High Availability Addressing Note: This is a critical screen when installing the infrastructure in an OracleAS Active Failover Cluster. If you do not see this screen, return to the Select Configuration Options screen and ensure that you selected High Availability Addressing.

Enter the fully qualified virtual server name of the load balancer. (Do not enter the physical hostname for the load balancer.) Click Next.

16. Shared Configuration File Name This screen appears if you did not set the SRVM_SHARED_CONFIG environment variable. See Section 9.3.1.8, "(optional) Set the SRVM_SHARED_CONFIG Environment Variable".

Shared Configuration File Name: Enter the path of the raw device or shared file that you created for the SRVM shared configuration device:

Example: /dev/vx/rdsk/rac/srvm256m

Click Next.

17. Database Identification Global Database Name: Enter a name for the OracleAS Metadata Repository database. Append the domain name of your computer to the database name.

Example: asdb.oracle.com

SID Prefix: Enter the system identifier for the OracleAS Metadata Repository database. Typically this is the same as the global database name, but without the domain name. The SID cannot be longer than eight characters.

Example: asdb

Click Next.

18. Set SYS and SYSTEM Passwords Set the passwords for these database users. Click Next.
19. Database Character Set Select Use the default character set. Click Next.
20. Specify Instance Name and ias_admin Password Instance Name: Enter a name for this infrastructure instance. Instance names can contain the $ and _ (underscore) characters in addition to any alphanumeric characters. If you have more than one Oracle Application Server instance on a computer, the instance names must be unique.

Example: infra_904

ias_admin Password and Confirm Password: Enter and confirm the password for the ias_admin user. This is the administrative user for this infrastructure instance.

See Section 5.8, "The ias_admin User and Restrictions on its Password" for password requirements.

Example: welcome99

Click Next.

21. Summary Verify your selections, and click Install.
22. Install Progress This screen shows the progress of the installation.
23. Run root.sh Note: Do not run the root.sh script until prompted.

When prompted, run the root.sh script in a different shell as the root user. The script is located in this instance’s Oracle home directory.

Note: You have to run this script on each node where you are installing OracleAS Infrastructure 10g.
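
For example, assuming an illustrative Oracle home of /u01/app/oracle/infra_904, you would run the following on each node:

prompt> su
Password:
# /u01/app/oracle/infra_904/root.sh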

Click OK after you have run the script on all nodes.

24. Configuration Assistants This screen shows the progress of the configuration assistants, which configure the components that you selected.
25. End of Installation Click Finish to quit the installer.

9.3.3 Post-Installation Procedure

Before you started the installer, you configured the load balancer so that it directed traffic to the node running the installer only. You can now reconfigure the load balancer so that it directs traffic to all nodes in the cluster.

9.3.4 Installing Middle Tiers Against an OracleAS Active Failover Cluster Infrastructure

Pre-installation: Configure the load balancer so that it points to only one node in the OracleAS Active Failover Cluster. The node can be any node in the cluster. After you have installed the middle tiers, you can change the load balancer back so that it points to all nodes in the cluster.

Installation: To install Oracle Application Server middle tiers against an OracleAS Infrastructure 10g running in an OracleAS Active Failover Cluster, follow the procedures documented in Chapter 7, "Installing Middle Tiers", but with this difference:

  • In the Register with Oracle Internet Directory screen, enter the load balancer’s virtual server name (not the physical hostname of the load balancer) in the Hostname field. This is the same name that you specified in the Specify High Availability Addressing screen in the OracleAS Infrastructure 10g installation.

9.4 OracleAS Disaster Recovery

Use the OracleAS Disaster Recovery environment when you want to have two physically separate sites in your environment. One site is the production site, and the other site is the standby site. The production site is active, while the standby site is passive; the standby site becomes active when the production site goes down.

Generally, the standby site mirrors the production site: each node in the standby site corresponds to a node in the production site. This includes the nodes running both OracleAS Infrastructure 10g and middle tiers. As a small variation to this environment, you can set up the OracleAS Infrastructure 10g on the production site in an OracleAS Cold Failover Cluster environment. See Section 9.4.1.4, "If You Want to Use OracleAS Cold Failover Cluster on the Production Site" for details.

Figure 9-3 shows an example OracleAS Disaster Recovery environment. Each site has two nodes running middle tiers and a node running OracleAS Infrastructure 10g.


Data Synchronization

For OracleAS Disaster Recovery to work, data between the production and standby sites must be synchronized so that failover can happen very quickly. Configuration changes done at the production site must be synchronized with the standby site.

There are two types of data, and the synchronization method depends on the type of data:

  • Data in the OracleAS Metadata Repository database: synchronize it between the sites using Oracle Data Guard.

  • Configuration files in the file system: synchronize them using the Oracle Application Server backup and recovery scripts.

See the Oracle Application Server 10g High Availability Guide for details on how to use Oracle Data Guard and the backup and recovery scripts.

Figure 9-3 OracleAS Disaster Recovery Environment


This section contains the following subsections:

  • Section 9.4.1, "Setting Up the OracleAS Disaster Recovery Environment"

  • Section 9.4.2, "Installing Oracle Application Server in an OracleAS Disaster Recovery Environment"

  • Section 9.4.3, "What to Read Next"

9.4.1 Setting Up the OracleAS Disaster Recovery Environment

Before you can install Oracle Application Server in an OracleAS Disaster Recovery environment, you have to perform the steps described in the following subsections:

  • Section 9.4.1.1, "Ensure Nodes Are Identical at the Operating System Level"

  • Section 9.4.1.2, "Set Up staticports.ini File"

  • Section 9.4.1.3, "Set Up Identical Hostnames on Both Production and Standby Sites"

  • Section 9.4.1.4, "If You Want to Use OracleAS Cold Failover Cluster on the Production Site"

9.4.1.1 Ensure Nodes Are Identical at the Operating System Level

Ensure that the nodes are identical with respect to the following items:

  • The nodes are running the same version of the operating system.

  • The nodes have the same operating system patches and packages.

  • You can install Oracle Application Server in the same directory path on all nodes.
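
A simple way to verify these items is to compare the output of the following commands across the nodes (all are standard Solaris commands):

prompt> uname -r          operating system release
prompt> showrev -p        installed patches
prompt> pkginfo           installed packages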

9.4.1.2 Set Up staticports.ini File

The same component must use the same port number on the production and standby sites. For example, if Oracle HTTP Server is using port 80 on the production site, it must also use port 80 on the standby site. To ensure this is the case, create a staticports.ini file for use during installation. This file enables you to specify port numbers for each component. See Section 4.5.2, "Using Custom Port Numbers (the "Static Ports" Feature)" for details.
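
For example, a staticports.ini file contains one directive per component port. The exact directive names are listed in Section 4.5.2; the following two lines are illustrative only:

Oracle HTTP Server port = 80
Oracle HTTP Server Listen port = 81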

9.4.1.3 Set Up Identical Hostnames on Both Production and Standby Sites

The names of the corresponding nodes on the production and standby sites must be identical, so that when you synchronize data between the sites, you do not have to edit the data to fix the hostnames.

9.4.1.3.1 For the Infrastructure Node

For the node running the infrastructure, set up a virtual name. To do this, specify an alias for the node in the /etc/hosts file.

For example, on the infrastructure node on the production site, the following line in /etc/hosts sets the alias to iasinfra:

138.1.2.111   prodinfra   iasinfra

On the standby site, the following line sets the node’s alias to iasinfra:

213.2.2.110   standbyinfra   iasinfra

When you install OracleAS Infrastructure 10g on the production and standby sites, you specify this alias (iasinfra) in the Specify High Availability Addressing screen. The configuration data will then contain this alias for the infrastructure nodes.
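
To verify that the alias resolves on a node, you can use the getent command (the output shown is illustrative). For example, on the production infrastructure node:

prompt> getent hosts iasinfra
138.1.2.111   prodinfra   iasinfra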

9.4.1.3.2 For the Middle Tier Nodes

For the nodes running the middle tiers, you cannot set up aliases as you did for the infrastructure nodes, because the installer does not display the Specify High Availability Addressing screen for middle tier installations. Instead, the installer determines the hostname automatically by calling the gethostname() function. Ensure that, for each middle tier node on the production site, the corresponding node on the standby site returns the same hostname.

To do this, set up a local, or internal, hostname, which can differ from the public, or external, hostname. You can either change the names of the nodes on the standby site to match the names of the corresponding nodes on the production site, or change the names of the nodes on both sites to a common name. Which approach you choose depends on the other applications running on the nodes and on whether changing the node name affects those applications.

  1. On the nodes whose local names you want to change, edit the /etc/nodename file to specify the new local fully qualified name. The following example sets the name to iasmid1.oracle.com. (The example in Figure 9-3 uses this name.)

    iasmid1.oracle.com
    
    
  2. Ensure that the other nodes in the OracleAS Disaster Recovery environment can resolve the node using the new local hostname. You can do this in one of two ways:

    • Method 1: Set up separate internal DNS servers for the production and standby sites. This configuration allows nodes on each site (production or standby) to resolve hostnames within the site. Above the internal DNS servers are the corporate, or external, DNS servers. The internal DNS servers forward non-authoritative requests to the external DNS servers. The external DNS servers do not know about the existence of the internal DNS servers. See Figure 9-4.

      To use this method, go to step 3.

      Figure 9-4 Method 1: Using DNS Servers


    • Method 2: Edit the /etc/hosts file on each node on both sites. This method does not involve configuring DNS servers, but you have to maintain the /etc/hosts file on each node in the OracleAS Disaster Recovery environment. For example, if an IP address changes, you have to update the files on all the nodes, and reboot the nodes.

      To use this method, go to step 4.

  3. If you are using the separate internal DNS server method (method 1), set up your DNS files as follows:

    1. Make sure the external DNS names are defined in the external DNS zone. Example:

      prodmid1.us.oracle.com      IN  A  138.1.2.333
      prodmid2.us.oracle.com      IN  A  138.1.2.444
      prodinfra.us.oracle.com     IN  A  138.1.2.111
      standbymid1.us.oracle.com   IN  A  213.2.2.330
      standbymid2.us.oracle.com   IN  A  213.2.2.331
      standbyinfra.us.oracle.com  IN  A  213.2.2.110
      
      
    2. At the production site, create a new zone using a domain name different from your external domain name. To do this, populate the zone data files with entries for each node in the OracleAS Disaster Recovery environment.

      For the infrastructure node, use the virtual name or alias.

      For the middle tier nodes, use the node name (the value in /etc/nodename).

      The following example uses "iasha" as the domain name for the new zone.

      iasmid1.iasha    IN  A  138.1.2.333
      iasmid2.iasha    IN  A  138.1.2.444
      iasinfra.iasha   IN  A  138.1.2.111
      
      

      Do the same for the standby site. Use the same domain name that you used for the production site.

      iasmid1.iasha    IN  A  213.2.2.330
      iasmid2.iasha    IN  A  213.2.2.331
      iasinfra.iasha   IN  A  213.2.2.110
      
      
    3. Configure the DNS resolver to point to the internal DNS servers instead of the external DNS server.

      In the /etc/resolv.conf file for each node on the production site, replace the existing name server IP address with the IP address of the internal DNS server for the production site.

      Do the same for the nodes on the standby site, but use the IP address of the internal DNS server for the standby site.
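
      For example, assuming the production site's internal DNS server is at 138.1.2.250 (an illustrative address), /etc/resolv.conf on each production node might look like this:

      search iasha us.oracle.com
      nameserver 138.1.2.250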

    4. Create a separate entry for Oracle Data Guard in the internal DNS servers. This entry is used by Oracle Data Guard to ship redo data to the database on the standby site.

      In the next example, the "remote_infra" entry points to the infrastructure node on the standby site. This name is used by the TNS entries on both the production and standby sites so that if a switchover occurs, the entry does not have to be changed.

      Figure 9-5 Entry for Oracle Data Guard in the Internal DNS Servers


      On the production site, the DNS entries look like this:

      iasmid1.iasha       IN  A  138.1.2.333
      iasmid2.iasha       IN  A  138.1.2.444
      iasinfra.iasha      IN  A  138.1.2.111
      remote_infra.iasha  IN  A  213.2.2.110
      
      

      On the standby site, the DNS entries look like this:

      iasmid1.iasha       IN  A  213.2.2.330
      iasmid2.iasha       IN  A  213.2.2.331
      iasinfra.iasha      IN  A  213.2.2.110
      remote_infra.iasha  IN  A  138.1.2.111
      
      
  4. If you are using the /etc/hosts method for name resolution (method 2), perform these steps:

    1. On each node on the production site, include these lines in the /etc/hosts file. The IP addresses resolve to nodes on the production site.


      Note:

      In the /etc/hosts file, be sure that the line that identifies the current node comes immediately after the localhost definition line (the line with the 127.0.0.1 address).

      127.0.0.1    localhost
      138.1.2.333  iasmid1.oracle.com   iasmid1
      138.1.2.444  iasmid2.oracle.com   iasmid2
      138.1.2.111  iasinfra.oracle.com  iasinfra
      
      
    2. On each node on the standby site, include these lines in the /etc/hosts file. The IP addresses resolve to nodes on the standby site.


      Note:

      In the /etc/hosts file, be sure that the line that identifies the current node comes immediately after the localhost definition line (the line with the 127.0.0.1 address).

      127.0.0.1    localhost
      213.2.2.330  iasmid1.oracle.com   iasmid1
      213.2.2.331  iasmid2.oracle.com   iasmid2
      213.2.2.110  iasinfra.oracle.com  iasinfra
      
      
    3. Ensure that the "hosts:" line in the /etc/nsswitch.conf file has "files" as the first item:

      hosts:   files nis dns
      
      

      This line specifies the order in which name resolution methods are tried. If another method is listed before files, the node uses that method first to resolve hostnames.


    Note:

    Reboot the nodes after editing these files.

After making the changes and rebooting the nodes, check that the hostnames are working properly by running the following commands:

  • On the middle tier nodes on both sites, run the hostname command. This should return the internal hostname. For example, the command should return "iasmid1" if you run it on prodmid1 and standbymid1.

    prompt> hostname
    iasmid1
    
    
  • On each node, ping the other nodes in the environment using both the internal and the external hostnames, and verify that each command succeeds. For example, from the first midtier node, prodmid1, you can run the following commands (the -s option displays the IP address of the node):

    prompt> ping -s prodinfra       ping the production infrastructure node
    PING prodinfra: 56 data bytes
    64 bytes from prodinfra.oracle.com (138.1.2.111): icmp_seq=0. time=0. ms
    ^C
    
    prompt> ping -s iasinfra        ping the production infrastructure node
    PING iasinfra: 56 data bytes
    64 bytes from iasinfra.oracle.com (138.1.2.111): icmp_seq=0. time=0. ms
    ^C
    
    prompt> ping -s iasmid2         ping the second production midtier node
    PING iasmid2: 56 data bytes
    64 bytes from iasmid2.oracle.com (138.1.2.444): icmp_seq=0. time=0. ms
    ^C
    
    prompt> ping -s prodmid2        ping the second production midtier node
    PING prodmid2: 56 data bytes
    64 bytes from prodmid2.oracle.com (138.1.2.444): icmp_seq=0. time=0. ms
    ^C
    
    prompt> ping -s standbymid1       ping the first standby midtier node
    PING standbymid1: 56 data bytes
    64 bytes from standbymid1.oracle.com (213.2.2.330): icmp_seq=0. time=0. ms
    ^C
    
    

9.4.1.4 If You Want to Use OracleAS Cold Failover Cluster on the Production Site

On the production site of an OracleAS Disaster Recovery system, you can set up the OracleAS Infrastructure 10g to run in an OracleAS Cold Failover Cluster configuration. In this case, you have two nodes in a hardware cluster, and you install the OracleAS Infrastructure 10g on a shared disk. See Section 9.2, "OracleAS Cold Failover Cluster" for details.

Figure 9-6 Infrastructure in an OracleAS Cold Failover Cluster Configuration


To set up OracleAS Cold Failover Cluster in this environment, use the virtual IP address (instead of the physical IP address) for iasinfra.iasha on the production site. The following example assumes 138.1.2.120 is the virtual IP address.

iasmid1.iasha          IN  A  138.1.2.333
iasmid2.iasha          IN  A  138.1.2.444
iasinfra.iasha         IN  A  138.1.2.120    ; virtual IP address
remote_infra.iasha     IN  A  213.2.2.110

On the standby site, you still use the physical IP address for iasinfra.iasha, but remote_infra.iasha uses the virtual IP address.

iasmid1.iasha          IN  A  213.2.2.330
iasmid2.iasha          IN  A  213.2.2.331
iasinfra.iasha         IN  A  213.2.2.110    ; physical IP address
remote_infra.iasha     IN  A  138.1.2.120    ; virtual IP address

9.4.2 Installing Oracle Application Server in an OracleAS Disaster Recovery Environment

Install Oracle Application Server as follows:


Note:

For all of the installations, be sure to use staticports.ini to specify port numbers for the components. See Section 9.4.1.2, "Set Up staticports.ini File". In addition, be sure to specify the correct option name for each installation type (see Table 4-5).

  1. Install OracleAS Infrastructure 10g on the production site.

  2. Install OracleAS Infrastructure 10g on the standby site.

  3. Install the middle tiers on the production site.

  4. Install the middle tiers on the standby site.

9.4.2.1 Installing the OracleAS Infrastructure 10g

As with OracleAS Cold Failover Cluster and OracleAS Active Failover Cluster, you must install the Identity Management and the OracleAS Metadata Repository components of OracleAS Infrastructure 10g on the same node. You cannot distribute the components over multiple nodes.

The installation steps are similar to those for OracleAS Cold Failover Cluster. See Section 9.2.2, "Installing OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster" for the screen sequence. Note the following points:

  • The Select Hardware Cluster Option screen might not appear; this is expected. See Table 9-2, step 7.

  • Be sure you select High Availability Addressing in the Select Configuration Options screen. See Table 9-2, step 12.

  • In the Specify High Availability Addressing screen, enter an alias as the virtual address (for example, iasinfra.oracle.com). See Table 9-2, step 14.

9.4.2.2 Installing Middle Tiers

You can install any type of middle tier that you like:

For installing J2EE and Web Cache, see Section 7.9, "Installing J2EE and Web Cache with OracleAS Database-Based Cluster and Identity Management Access".

For installing Portal and Wireless or Business Intelligence and Forms, see Section 7.13, "Installing Portal and Wireless or Business Intelligence and Forms".

Note the following points:

  • When the installer prompts you to register with Oracle Internet Directory, and asks you for the Oracle Internet Directory hostname, enter the alias of the node running OracleAS Infrastructure 10g (for example, iasinfra.oracle.com).

9.4.3 What to Read Next

For information on how to manage your OracleAS Disaster Recovery environment, such as setting up Oracle Data Guard and configuring the OracleAS Metadata Repository database, see the Oracle Application Server 10g High Availability Guide.