Oracle Solaris Cluster 3.3 5/11 Release Notes


Compatibility Issues

This section describes compatibility issues between the Oracle Solaris Cluster software and other products:

Node Panic When Calling rename(2) to Rename an Oracle ACFS Directory to its Parent Directory (11828617)

Problem Summary: This problem occurs when calling rename(2) to rename a subdirectory in an Oracle ACFS file system to its parent directory, where the parent directory is a subdirectory under the Oracle ACFS file-system mount point. An example would be an Oracle ACFS file system mounted at /xxx, with a directory called /xxx/dir1 and a child directory called /xxx/dir1/dir2. Calling rename(2) with /xxx/dir1/dir2 and /xxx/dir1 as the arguments produces the error.

Workaround: None. Do not rename an Oracle ACFS directory to the name of its parent directory.
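The failing call shape can be sketched as follows. This is an illustrative sketch using a temporary directory on a local file system, where the same rename(2) pattern fails cleanly with an error instead of panicking the node as it does on Oracle ACFS; the directory names mirror the /xxx/dir1/dir2 example above.

```python
import errno
import os
import tempfile

# Recreate the layout from the example: <root>/dir1/dir2, where dir1
# stands in for /xxx/dir1 on an Oracle ACFS mount (paths are illustrative).
root = tempfile.mkdtemp()
parent = os.path.join(root, "dir1")
child = os.path.join(parent, "dir2")
os.makedirs(child)

# rename(2) with the subdirectory as the old name and its parent as the
# new name. On Oracle ACFS this call pattern panics the node (11828617);
# on a local file system it fails because the target directory is not empty.
try:
    os.rename(child, parent)
    err = None
except OSError as exc:
    err = exc.errno
    print("rename failed:", errno.errorcode[exc.errno])
```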

Node Fails To Start Oracle Clusterware After a Panic (uadmin 5 1) Fault Injection (11828322)

Problem Summary: This problem occurs on a two-node Oracle Solaris Cluster configuration that runs a single-instance Oracle Database on clustered Oracle ASM, with DB_HOME on Oracle ACFS. After a panic fault is injected on one of the nodes, the node boots but Oracle Clusterware (CRS) fails to start.

# crsctl check crs
CRS-4638: Oracle High Availability Services is online 
CRS-4535: Cannot communicate with Cluster Ready Services 
CRS-4529: Cluster Synchronization Services is online 
CRS-4533: Event Manager is online 
# crsctl start crs
CRS-4640: Oracle High Availability Services is already active 
CRS-4000: Command Start failed, or completed with errors.

Workaround: Reboot the node a second time.

Need Support for Clusterized fcntl by Oracle ACFS (11814449)

Problem Summary: Oracle ACFS in Oracle 11g release 2 Grid Infrastructure provides node-local fcntl only. In an Oracle Solaris Cluster configuration, applications that are configured as scalable applications might be active on more than one node of the cluster and might issue write requests to the underlying file system from multiple nodes at the same time. Applications that depend on clusterized fcntl() therefore cannot be configured as scalable resources. To support scalable applications on Oracle ACFS in an Oracle Solaris Cluster configuration, Oracle ACFS must support clusterized fcntl.
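The fcntl coordination that such applications rely on can be sketched as follows. This is an illustrative sketch, not Oracle code: it takes an exclusive advisory record lock via fcntl(2). With a node-local fcntl implementation such as Oracle ACFS provides, the lock is enforced only among processes on the same node, so a writer on another cluster node is not excluded. The lock-file path is hypothetical.

```python
import fcntl
import os
import tempfile

# Hypothetical shared state file; on a cluster this would live on the
# Oracle ACFS file system that all nodes mount.
lock_path = os.path.join(tempfile.gettempdir(), "app.lock")

fd = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o644)
locked = False
try:
    # Exclusive, non-blocking advisory record lock via fcntl(2).
    # With node-local fcntl (as on Oracle ACFS), only processes on THIS
    # node observe the lock; processes on other nodes are not excluded.
    fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    locked = True
finally:
    fcntl.lockf(fd, fcntl.LOCK_UN)
    os.close(fd)
```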

Workaround: There is no workaround at this time. Do not configure scalable applications on Oracle ACFS in an Oracle Solaris Cluster configuration.

Unable to Start Oracle ACFS in Presence of Oracle ASM in a Non-Global Zone (11707611)

Problem Summary: This problem occurs when a configuration with Oracle 11g release 2 Grid Infrastructure runs in the global zone and Oracle 10g release 2 ASM runs in a non-global zone. A general-purpose Oracle ACFS file system is created in the global zone with mountpath set to a path under the zone root path of the non-global zone. The Oracle ASM admin user in the global zone is different from the Oracle ASM user in the non-global zone. The user ID of the Oracle ASM admin user in the non-global zone does not exist in the global zone.

After reboot of the global-cluster node, the attempt to start the Oracle ACFS file system fails with messages similar to the following:

phys-schost# /u01/app/11.2.0/grid/bin/srvctl start filesystem -d /dev/asm/dummy-27 -n phys-schost 
PRCR-1013 : Failed to start resource ora.dbhome.dummy.acfs 
PRCR-1064 : Failed to start resource ora.dbhome.dummy.acfs on node phys-schost 
CRS-5016: Process "/u01/app/11.2.0/grid/bin/acfssinglefsmount" spawned by agent "/u01/app/11.2.0/grid/bin/orarootagent.bin" for action "start" failed: details at "(:CLSN00010:)" in "/u01/app/11.2.0/grid/log/phys-schost/agent/crsd/orarootagent_root/orarootagent_root.log"
CRS-2674: Start of 'ora.dbhome.dummy.acfs' on 'phys-schost' failed 

The orarootagent_root.log file has messages similar to the following:

2011-02-01 16:15:53.417: [ora.dbhome.dummy.acfs][8] {2:53487:190} [start] (:CLSN00010:)su: Unknown id: 303

The user ID 303 that is identified as Unknown is the ID for the Oracle ASM admin user in the non-global zone.

Workaround: Use the same user ID for the Oracle ASM admin user in both the global zone and the non-global zone.
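The Unknown id failure is simply a numeric user ID that has no passwd entry in the zone where su runs. The lookup that fails can be sketched as follows (an illustrative sketch; the helper name is hypothetical):

```python
import pwd

def resolve_user(uid):
    """Return the login name for a numeric uid, or None when the uid has
    no passwd entry in this zone -- the condition behind "su: Unknown id"."""
    try:
        return pwd.getpwuid(uid).pw_name
    except KeyError:
        return None

# uid 0 exists in every zone; an arbitrary large uid, like the non-global
# zone's ASM admin uid seen from the global zone, typically does not.
print(resolve_user(0))
```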

Oracle Solaris Cluster Project Shares Workflow Should Return All Shares Underneath r/w Projects (7041969)

Problem Summary: Configuring a ScalMountPoint resource for a Sun ZFS Appliance file system fails if the file system is not set to inherit its NFS properties from its parent project.

Workaround: Ensure that Inherit from project is selected for the file system when you set up the ScalMountPoint resource. To check this setting, edit the file system in the ZFS Appliance GUI and navigate to the Protocols tab.

After you configure the ScalMountPoint resource, you can optionally deselect Inherit from project to turn fencing off.

SAP startsap Fails to Start the Application Instance if startsrv Is Not Running (7028069)

Problem Summary: In SAP 7.11, the startsap program fails to start the application instance if the startsrv program is not running.

Workaround: Use the following entries in the wrapper script to start the application instance, adapting them to your environment (instance number, SID, profile path, and so forth).

# Check whether sapstartsrv for this instance is already running.
# The [s] bracket pattern keeps grep from matching its own command line.
ps -e -o args | grep "[s]apstartsrv" | grep -q DVEB
if (( ${?} != 0 ))
then
        /usr/sap/FIT/DVEBMGS03/exe/sapstartsrv pf=/usr/sap/FIT/SYS/profile/FIT_DVEBMGS03_lzkosi2c -D
fi

Problem Using Sun ZFS Storage Appliance as Quorum Device Through Fibre Channel or iSCSI (6966970)

Problem Summary: When Oracle's Sun ZFS Storage Appliance (formerly Sun Storage 7000 Unified Storage Systems) is used over Fibre Channel or iSCSI as a quorum device with fencing enabled, Oracle Solaris Cluster uses it as a SCSI quorum device. In such a configuration, certain SCSI actions requested by the Oracle Solaris Cluster software might not be handled correctly. In addition, the cluster reconfiguration's default timeout of 25 seconds for the completion of quorum operations might not be adequate for such a quorum configuration.

If you see messages on the cluster nodes saying that such a Sun ZFS Storage Appliance quorum device is unreachable, or if you see failures of cluster nodes with the message CMM: Unable to acquire the quorum device, there might be a problem with the quorum device or the path to it.

Workaround: Check that both the quorum device and the path to it are functional. If the problem persists, apply Sun ZFS Storage Appliance Firmware release 2010Q3.3 to correct the problem.

If there is a reason not to install this firmware, or if you need an interim mitigation of the issue, use one of the following alternatives:

Cluster Zone Won't Boot Up After Live Upgrade on ZFS Root (6955669)

Problem Summary: For a global cluster that uses ZFS for the root file system and that has zone clusters configured, using Live Upgrade to upgrade to Solaris 10 8/10 produces an upgraded boot environment that does not boot.

Workaround: Contact your Oracle support representative to learn whether a patch or workaround is available.

Solaris Volume Manager GUI

The Enhanced Storage module of Solaris Management Console (Solaris Volume Manager) is not compatible with Oracle Solaris Cluster software. Use the command-line interface or Oracle Solaris Cluster utilities to configure Solaris Volume Manager software.