CHAPTER 3
Server Diagnostics
This chapter describes the diagnostics that are available for monitoring and troubleshooting the server. This chapter does not provide detailed troubleshooting procedures, but instead describes the server diagnostics facilities and how to use them.
This chapter is intended for technicians, service personnel, and system administrators who service and repair computer systems.
The following topics are covered:
There are a variety of diagnostic tools, commands, and indicators you can use to troubleshoot a server.
The LEDs, ALOM CMT, Solaris OS PSH, and many of the log files and console messages are integrated. For example, when the Solaris PSH software detects a fault, it displays the fault, logs it, and passes information to ALOM CMT, where it is also logged and, depending on the fault, one or more LEDs might be illuminated.
The diagnostic flow chart in FIGURE 3-1 and TABLE 3-1 describes an approach for using the server diagnostics to identify a faulty field-replaceable unit (FRU). The diagnostics you use, and the order in which you use them, depend on the nature of the problem you are troubleshooting, so you might perform some actions and not others.
The flow chart assumes that you have already performed some troubleshooting such as verification of proper installation and visual inspection of cables and power, and possibly performed a reset of the server (refer to the Sun Fire T1000 Server Installation Guide and Sun Fire T1000 Server Administration Guide for details).
FIGURE 3-1 is a flow chart of the diagnostics available to troubleshoot faulty hardware. TABLE 3-1 has more information about each diagnostic in this chapter.
FIGURE 3-1 Diagnostic Flowchart
The Power OK LED is located on the front and rear of the chassis. The AC OK LED is located on the rear of the server on each power supply. If these LEDs are not on, check the power source and power connections to the server.
The showfaults command displays the current system faults. Faulty FRUs are identified in fault messages using the FRU name. For a list of FRU names, see Appendix A.
The Solaris message buffer and log files record system events and provide information about faults.
Section 3.6, Collecting Information From Solaris OS Files and Commands
SunVTS is an application you can run to exercise and diagnose FRUs. To run SunVTS, the server must be running the Solaris OS.
POST performs basic tests of the server components and reports faulty FRUs. Note - diag_level=min is the default ALOM CMT setting, which tests devices required to boot the server. Use diag_level=max for troubleshooting and hardware replacement.
If the fault listed by the showfaults command displays a temperature or voltage fault, then the fault is an environmental fault. Environmental faults can be caused by faulty FRUs (power supply or fan tray) or by environmental conditions, such as when the computer room ambient temperature is too high or the server airflow is blocked. When the environmental condition is corrected, the fault clears automatically. You can also use the fault LEDs on the server to identify the faulty FRU (fan tray or power supply).
Section 3.3.2, Running the showfaults Command
If the fault message displays the following text, the fault was detected by the Solaris Predictive Self-Healing software:
If the fault is a PSH detected fault, identify the faulty FRU from the fault message and replace the faulty FRU. After the FRU is replaced, perform the procedure to clear PSH detected faults.
Section 3.5, Using the Solaris Predictive Self-Healing Feature
POST performs basic tests of the server components and reports faulty FRUs. When POST detects a faulty FRU, it logs the fault and, if possible, takes the FRU offline. POST detected FRUs display the following text in the fault message: FRU_name deemed faulty and disabled. In this case, replace the FRU and run the procedure to clear POST detected faults.
The majority of hardware faults are detected by the server's diagnostics. In rare cases a problem might require additional troubleshooting. If you are unable to determine the cause of the problem, contact technical support.
A variety of features play a role in how the memory subsystem is configured and how memory faults are handled. Understanding the underlying features helps you identify and repair memory problems. This section describes how the memory is configured and how the server deals with memory faults.
The server has eight slots that hold DDR-2 memory DIMMs of the following sizes:
All DIMMs installed must be the same size, and DIMMs must be added four at a time. In addition, Rank 0 memory must be fully populated for the server to function.
See Section 5.5.2, Installing DIMMs, for instructions about adding memory to the server.
The server uses advanced ECC technology, also called chipkill, that corrects up to 4 bits in error on nibble boundaries, as long as the bits are all in the same DRAM. If a DRAM fails, the DIMM continues to function.
The following server features independently manage memory faults:
When a memory fault is detected, POST displays the fault with the device name of the faulty DIMMs, logs the fault, and disables the faulty DIMMs by placing them in the ASR blacklist. For a given memory fault, POST disables half of the physical memory in the system. When this offlining process occurs in normal operation, you must replace the faulty DIMMs based on the fault message and enable the disabled DIMMs with the ALOM CMT enablecomponent command.
In other than normal operation, POST can be configured to run various levels of testing (see TABLE 3-5 and TABLE 3-6) and can thoroughly test the memory subsystem based on the purpose of the test. However, with thorough testing enabled (diag_level=max), POST finds faults and offlines memory devices with errors that could be correctable with PSH. Thus, not all memory devices detected and offlined by POST need to be replaced. See Section 3.4.5, Correctable Errors Detected by POST.
If you suspect that the server has a memory problem, follow the flow chart (see TABLE 3-1). Run the ALOM CMT showfaults command. The showfaults command lists memory faults and the specific DIMMs that are associated with the fault. Once you identify which DIMMs to replace, see Chapter 5 for DIMM removal and replacement instructions. It is important that you perform the instructions in that chapter to clear the faults and enable the replaced DIMMs.
The server provides the following groups of LEDs:
These LEDs provide a quick visual check of the state of the system.
FIGURE 3-2 LEDs on the Server Front Panel
FIGURE 3-3 LEDs on the Server Rear Panel
Two LEDs and one LED/button are located in the upper left corner of the front panel (TABLE 3-2). The LEDs are also provided on the rear panel.
The power supply LEDs (TABLE 3-3) are located on the back of the power supply.
The Sun Advanced Lights Out Management (ALOM) CMT is a system controller in the server that enables you to remotely manage and administer your server.
ALOM CMT enables you to remotely run diagnostics, such as power-on self-test (POST), that would otherwise require physical proximity to the server's serial port. You can also configure ALOM CMT to send email alerts of hardware failures, hardware warnings, and other events related to the server or to ALOM CMT.
The ALOM CMT circuitry runs independently of the server, using the server's standby power. Therefore, ALOM CMT firmware and software continue to function when the server operating system goes offline or when the server is powered off.
Faults detected by ALOM CMT, POST, and the Solaris Predictive Self-Healing (PSH) technology are forwarded to ALOM CMT for fault handling (FIGURE 3-4).
In the event of a system fault, ALOM CMT ensures that the Service Required LED is lit, FRU ID PROMs are updated, the fault is logged, and alerts are displayed. Faulty FRUs are identified in fault messages using the FRU name. For a list of FRU names, see Appendix A.
FIGURE 3-4 ALOM CMT Fault Management
ALOM CMT sends alerts to all ALOM CMT users who are logged in, sends the alert through email to a configured email address, and writes the event to the ALOM CMT event log.
ALOM CMT can detect when a fault is no longer present and clears the fault in several ways:
ALOM CMT can detect the removal of a FRU, in many cases even if the FRU is removed while ALOM CMT is powered off. This enables ALOM CMT to know that a fault, diagnosed to a specific FRU, has been repaired. The ALOM CMT clearfault command enables you to manually clear certain types of faults without a FRU replacement or if ALOM CMT was unable to automatically detect the FRU replacement.
Many environmental faults can automatically recover. A temperature that exceeded a threshold might return to normal limits. An unplugged power supply can be plugged in, and so on. Recovery of environmental faults is automatically detected. Recovery events are reported using one of two forms:
Environmental faults can be repaired through the removal of the faulty FRU. FRU removal is automatically detected by the environmental monitoring and all faults associated with the removed FRU are cleared. The message for that case, and the alert sent for all FRU removals is:
fru at location has been removed.
There is no ALOM CMT command to manually repair an environmental fault.
The Solaris Predictive Self-Healing technology does not monitor the hard drive for faults. As a result, ALOM CMT does not recognize hard drive faults, and will not light the fault LEDs on either the chassis or the hard drive itself. Use the Solaris message files to view hard drive faults. See Section 3.6, Collecting Information From Solaris OS Files and Commands.
This section describes the ALOM CMT commands that are commonly used for service-related activities.
Before you can run ALOM CMT commands, you must connect to ALOM CMT. There are several ways to connect to the system controller:
TABLE 3-4 describes the typical ALOM CMT commands for servicing the server. For descriptions of all ALOM CMT commands, issue the help command or refer to the Advanced Lights Out Management (ALOM) CMT Guide.
help [command]
Displays a list of all ALOM CMT commands with syntax and descriptions. Specifying a command name as an option displays help for that command.
break [-y] [-c]
Takes the host server from the OS to either kmdb or OpenBoot.
clearfault UUID
Manually clears host-detected faults. The UUID is the unique fault ID of the fault to be cleared.
console [-f]
Connects you to the host system. The -f option forces the console to have read and write capabilities.
consolehistory
Displays the contents of the system's console buffer. The following options enable you to specify how the output is displayed:
bootmode
Enables control of the firmware during system initialization with the following options:
powercycle [-f]
Performs a poweroff followed by a poweron. The -f option forces an immediate poweroff; otherwise the command attempts a graceful shutdown.
poweroff [-y] [-f]
Powers off the host server. The -y option enables you to skip the confirmation question. The -f option forces an immediate shutdown.
poweron [-c]
Powers on the host server. Using the -c option executes a console command after completion of the poweron command.
reset [-y] [-c]
Generates a hardware reset on the host server. The -y option enables you to skip the confirmation question. The -c option executes a console command after completion of the reset command.
resetsc [-y]
Reboots the system controller. The -y option enables you to skip the confirmation question.
setkeyswitch [-y] value
Sets the virtual keyswitch. The -y option enables you to skip the confirmation question when setting the keyswitch to stby.
showenvironment
Displays the environmental status of the host server. This information includes system temperatures and the status of the power supplies, front panel LEDs, hard drives, fans, voltage sensors, and current sensors. See Section 3.3.3, Running the showenvironment Command.
showfaults [-v]
Displays current system faults. See Section 3.3.2, Running the showfaults Command.
showfru
Displays information about the FRUs in the server (all FRUs, unless one is specified). See Section 3.3.4, Running the showfru Command.
showlocator
Displays the current state of the Locator LED as either on or off.
showlogs [-b lines | -e lines | -v] [-g lines] [-p logtype[r|p]]
Displays the history of all events logged in the ALOM CMT event buffers (in RAM or the persistent buffers).
showplatform
Displays information about the host system's hardware configuration, the system serial number, and whether the hardware is providing service.
The ALOM CMT showfaults command displays the following kinds of faults:
Use the showfaults command for the following reasons:
At the sc> prompt, type the showfaults command.
The following showfaults command examples show the different kinds of output from the showfaults command:
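For illustration, here are sample outputs composed from the showfaults listings shown later in this chapter (exact formatting varies by firmware version): no faults, a POST-detected fault, and a PSH (host-detected) fault.

```
sc> showfaults
Last POST run: THU MAR 09 16:52:44 2006
POST status: Passed all devices
No failures found in System

sc> showfaults -v
   ID Time                 FRU                 Fault
    1 APR 24 12:47:27      MB/CMP0/CH0/R1/D0   MB/CMP0/CH0/R1/D0 deemed faulty and disabled

sc> showfaults -v
   ID Time                 FRU                 Fault
    0 SEP 09 11:09:26      MB/CMP0/CH0/R1/D0   Host detected fault, MSGID: SUN4U-8000-2S
      UUID: 7ee0e46b-ea64-6565-e684-e996963f7b86
```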
The showenvironment command displays a snapshot of the server's environmental status. This command displays system temperatures, hard disk drive status, power supply and fan status, front panel LED status, and voltage and current sensor readings. The output uses a format similar to that of the Solaris OS command prtdiag(1M).
At the sc> prompt, type the showenvironment command.
The output differs according to your system's model and configuration. Example:
The showfru command displays information about the FRUs in the server. Use this command to see information about an individual FRU, or for all the FRUs.
At the sc> prompt, enter the showfru command.
Power-on self-test (POST) is a group of PROM-based tests that run when the server is powered on or reset. POST checks the basic integrity of the critical hardware components in the server (CPU, memory, and I/O buses).
If POST detects a faulty component, the component is disabled automatically, preventing faulty hardware from potentially harming any software. If the system is capable of running without the disabled component, the system will boot when POST is complete. For example, if one of the processor cores is deemed faulty by POST, the core will be disabled, and the system will boot and run using the remaining cores.
In normal operation, the default configuration of POST (diag_level=min) provides a sanity check to ensure the server will boot. Normal operation applies to any power on of the server not intended to test power-on errors, hardware upgrades, or repairs. Once the Solaris OS is running, PSH provides runtime diagnosis of faults.
For validating hardware upgrades or repairs, configure POST to run in maximum mode (diag_level=max). Note that with maximum testing enabled, POST detects and offlines memory devices with errors that could be correctable by PSH. Thus, not all memory devices detected by POST need to be replaced. See Section 3.4.5, Correctable Errors Detected by POST.
The server can be configured for normal, extensive, or no POST execution. You can also control the level of tests that run, the amount of POST output that is displayed, and which reset events trigger POST by using ALOM CMT variables.
TABLE 3-5 lists the ALOM CMT variables used to configure POST and FIGURE 3-5 shows how the variables work together.
The system can power on and run POST (based on the other parameter settings). For details see TABLE 3-6. This parameter overrides all other commands.
The system can power on and run POST, but no flash updates can be made.
Runs POST with preset values for diag_level and diag_verbosity.
If diag_mode = normal, runs all the minimum tests plus extensive CPU and memory tests.
Only runs POST for the first power on. This option is the default.
POST output displays functional tests with a banner and pinwheel.
POST displays all test, informational, and some debugging messages.
FIGURE 3-5 Flowchart of ALOM CMT Variables for POST Configuration
TABLE 3-6 shows combinations of ALOM CMT variables and associated POST modes.
Each POST mode below is determined by the combination of the setkeyswitch and diag_level settings:
This is the default POST configuration. This configuration tests the system thoroughly, and suppresses some of the detailed POST output.
POST does not run, resulting in quick system initialization, but this is not a suggested configuration. |
POST runs the full spectrum of tests with the maximum output displayed. |
POST runs the full spectrum of tests with the maximum output displayed. |
1. Access the ALOM CMT sc> prompt:
At the console, issue the #. key sequence:
2. Use the ALOM CMT sc> prompt to change the POST parameters.
Refer to TABLE 3-5 for a list of ALOM CMT POST parameters and their values.
The setkeyswitch parameter sets the virtual keyswitch, so it does not use the setsc command. For example, to change the POST parameters using the setkeyswitch command, enter the following:
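For example, a sketch of setting the virtual keyswitch to the diag position (one of the keyswitch values referenced in this chapter):

```
sc> setkeyswitch diag
```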
To change the POST parameters using the setsc command, you must first set the setkeyswitch parameter to normal, then you can change the POST parameters using the setsc command:
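A sketch of this sequence, using the parameter names that appear in this chapter (valid values are listed in TABLE 3-5):

```
sc> setkeyswitch normal
sc> setsc diag_mode service
sc> setsc diag_level max
sc> setsc diag_verbosity max
```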
You can use POST for basic hardware verification and diagnosis, and for troubleshooting as described in the following sections.
POST tests critical hardware components to verify functionality before the system boots and accesses software. If POST detects an error, the faulty component is disabled automatically, preventing faulty hardware from potentially harming software.
In normal operation (diag_level=min), POST runs in minimum mode by default to test devices required to power on the server. Replace any devices POST detects as faulty in minimum mode.
Run POST in maximum mode (diag_level=max) for all power-on or error-generated resets, and to validate hardware upgrades or repairs. With maximum testing enabled, POST finds faults and offlines memory devices with errors that could be correctable by PSH. Check the POST-generated errors with the showfaults -v command to verify if memory devices detected by POST can be corrected by PSH or need to be replaced. See Section 3.4.5, Correctable Errors Detected by POST.
You can use POST as an initial diagnostic tool for the system hardware. In this case, configure POST to run in maximum mode (diag_mode=service, setkeyswitch=diag, diag_level=max) for thorough test coverage and verbose output.
This procedure describes how to run POST when you want maximum testing, as in the case when you are troubleshooting a server or verifying a hardware upgrade or repair.
1. Switch from the system console prompt to the sc> prompt by issuing the #. escape sequence.
2. Set the virtual keyswitch to diag so that POST will run in Service mode.
3. Reset the system so that POST runs.
There are several ways to initiate a reset. The following example uses the powercycle command. For other methods, refer to the Sun Fire T1000 Server Administration Guide.
4. Switch to the system console to view the POST output:
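Steps 1 through 4 might look like the following ALOM CMT session (a sketch; confirmation prompts are omitted):

```
ok #.
sc> setkeyswitch diag
sc> powercycle
sc> console
```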
5. Perform further investigation if needed.
a. Interpret the POST messages:
POST error messages use the following syntax:
c:s > ERROR: TEST = failing-test
c:s > H/W under test = FRU
c:s > Repair Instructions: Replace items in order listed by H/W under test above
c:s > MSG = test-error-message
c:s > END_ERROR
In this syntax, c = the core number and s = the strand number.
Warning and informational messages use the following syntax:
The following example shows a POST error message.
In this example, POST is reporting a memory error at DIMM location MB/CMP0/CH0/R1/D0 (J0701).
b. Run the showfaults command to obtain additional fault information.
The fault is captured by ALOM, where the fault is logged, the Service Required LED is lit, and the faulty component is disabled.
ok #.
sc> showfaults -v
   ID Time                 FRU                 Fault
    1 APR 24 12:47:27      MB/CMP0/CH0/R1/D0   MB/CMP0/CH0/R1/D0 deemed faulty and disabled
In this example, MB/CMP0/CH0/R1/D0 is disabled. The system can boot using memory that was not disabled until the faulty component is replaced.
In maximum mode, POST detects and offlines memory devices with errors that could be correctable by PSH. Use the examples in this section to verify if the detected memory devices are correctable.
When using maximum mode, if no faults are detected, return POST to minimum mode.
If POST faults a single DIMM (CODE EXAMPLE 3-1) that was not part of a hardware upgrade or repair, it is likely that POST encountered a correctable error that can be handled by PSH.
sc> showfaults -v
   ID Time                 FRU                 Fault
    1 OCT 13 12:47:27      MB/CMP0/CH0/R0/D0   MB/CMP0/CH0/R0/D0 deemed faulty and disabled
In this case, reenable the DIMM and run POST in minimum mode as follows:
1. Use the ALOM CMT enablecomponent command to reenable the DIMM.
2. Return POST to minimum mode.
3. Reset the system so that POST runs.
There are several ways to initiate a reset. The following example uses the powercycle command. For other methods, refer to the Sun Fire T1000 Server Administration Guide.
4. Replace the DIMM if POST continues to fault the device in minimum mode.
If a detected device is part of a hardware upgrade or repair, or if POST detects multiple DIMMs (CODE EXAMPLE 3-2), replace the detected devices.
If the detected device is not a part of a hardware upgrade or repair, use the following list to examine and repair the fault:
1. If a detected device is not a DIMM, or if more than a single DIMM is detected, replace the detected devices.
2. If a detected device is a single DIMM and the same DIMM is also detected by PSH, replace the DIMM (CODE EXAMPLE 3-3).
3. If a device detected by POST is a single DIMM and the same DIMM is not detected by PSH, follow the procedure in Section 3.4.5.1, Correctable Errors for Single DIMMs.
After the detected devices are repaired or replaced, return POST to the default minimum level.
In most cases, when POST detects a faulty component, POST logs the fault and automatically takes the failed component out of operation by placing the component in the ASR blacklist (see Section 3.7, Managing Components With Automatic System Recovery Commands).
In most cases, after the faulty FRU is replaced, ALOM CMT detects the repair and extinguishes the Service Required LED. If ALOM CMT does not perform these actions, use the enablecomponent command to manually clear the fault and remove the component from the ASR blacklist. This procedure describes how to do this.
1. After replacing a faulty FRU, at the ALOM CMT prompt use the showfaults command to identify POST detected faults.
POST detected faults are distinguished from other kinds of faults by the text deemed faulty and disabled; no UUID number is reported.
sc> showfaults -v
   ID Time                 FRU                 Fault
    1 APR 24 12:47:27      MB/CMP0/CH0/R1/D0   MB/CMP0/CH0/R1/D0 deemed faulty and disabled
2. Use the enablecomponent command to clear the fault and remove the component from the ASR blacklist.
Use the FRU name that was reported in the fault in the previous step.
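For example, using the FRU name from the preceding showfaults output:

```
sc> enablecomponent MB/CMP0/CH0/R1/D0
```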
The fault is cleared and should not appear when you run the showfaults command. Additionally, if there are no other faults remaining, the Service Required LED should be extinguished.
3. Reboot the server. You must reboot the server for the enablecomponent command to take effect.
4. At the ALOM CMT prompt, use the showfaults command to verify that no faults are reported.
sc> showfaults
Last POST run: THU MAR 09 16:52:44 2006
POST status: Passed all devices
No failures found in System
The Solaris Predictive Self-Healing (PSH) technology enables the server to diagnose problems while the Solaris OS is running, and mitigate many problems before they negatively affect operations.
The Solaris OS uses the fault manager daemon, fmd(1M), which starts at boot time and runs in the background to monitor the system. If a component generates an error, the daemon handles the error by correlating the error with data from previous errors and other related information to diagnose the problem. Once diagnosed, the fault manager daemon assigns the problem a Universal Unique Identifier (UUID) that distinguishes the problem across any set of systems. When possible, the fault manager daemon initiates steps to self-heal the failed component and take the component offline. The daemon also logs the fault to the syslogd daemon and provides a fault notification with a message ID (MSGID). You can use the message ID to get additional information about the problem from Sun's knowledge article database.
The Predictive Self-Healing technology covers the following server components:
The PSH console message provides the following information:
If the Solaris PSH facility detects a faulty component, use the fmdump command to identify the fault. Faulty FRUs are identified in fault messages using the FRU name. For a list of FRU names, see Appendix A.
When a PSH fault is detected, a Solaris console message similar to the following is displayed:
The following is an example of the ALOM CMT alert for the same PSH diagnosed fault:
The fmdump command displays the list of faults detected by the Solaris PSH facility and identifies the faulty FRU for a particular EVENT_ID (UUID). Do not use fmdump to verify a FRU replacement has cleared a fault because the output of fmdump is the same after the FRU has been replaced. Use the fmadm faulty command to verify the fault has cleared.
1. Check the event log using the fmdump command with -v for verbose output:
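A sketch of the invocation (output omitted here; fmdump also accepts -u UUID to restrict the output to a single fault event, shown with the example UUID used elsewhere in this chapter):

```
# fmdump -v
# fmdump -v -u 7ee0e46b-ea64-6565-e684-e996963f7b86
```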
In this example, a fault is displayed, indicating the following details:
2. Use the Sun message ID to obtain more information about this type of fault.
a. In a browser, go to the Predictive Self-Healing Knowledge Article web site: http://www.sun.com/msg
b. Obtain the message ID from the console output or the ALOM CMT showfaults command.
c. Enter the message ID in the SUNW-MSG-ID field, and click Lookup.
In this example, the message ID SUN4V-8000-DX returns the following information for corrective action:
3. Follow the suggested actions to repair the fault.
When the Solaris PSH facility detects faults, the faults are logged and displayed on the console. After the fault condition is corrected, for example by replacing a faulty FRU, you must clear the fault.
1. After replacing a faulty FRU, power on the server.
2. At the ALOM CMT prompt, use the showfaults command to identify PSH detected faults.
PSH detected faults are distinguished from other kinds of faults by the text:
Host detected fault.
sc> showfaults -v
   ID Time                 FRU                 Fault
    0 SEP 09 11:09:26      MB/CMP0/CH0/R1/D0   Host detected fault, MSGID: SUN4U-8000-2S
      UUID: 7ee0e46b-ea64-6565-e684-e996963f7b86
3. Run the clearfault command with the UUID provided in the showfaults output:
sc> clearfault 7ee0e46b-ea64-6565-e684-e996963f7b86
Clearing fault from all indicted FRUs...
Fault cleared.
4. Clear the fault from all persistent fault records.
In some cases, even though the fault is cleared, some persistent fault information remains and results in erroneous fault messages at boot time. To ensure that these messages are not displayed, perform the following command:
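The command itself is not shown in this excerpt; on Solaris systems with PSH, persistent fault records are typically cleared with fmadm repair and the fault's UUID (an assumption based on standard Solaris fault management tooling, shown with the example UUID from the preceding step):

```
# fmadm repair 7ee0e46b-ea64-6565-e684-e996963f7b86
```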
With the Solaris OS running on the server, you have the full complement of Solaris OS files and commands available for collecting information and for troubleshooting.
If POST, ALOM CMT, or the Solaris PSH features do not indicate the source of a fault, check the message buffer and log files for fault notifications. Hard drive faults are usually captured by the Solaris message files.
Use the dmesg command to view the most recent system message. To view the system messages log file, view the contents of the /var/adm/messages file.
The dmesg command displays the most recent messages generated by the system.
The error logging daemon, syslogd, automatically records various system warnings, errors, and faults in message files. These messages can alert you to system problems such as a device that is about to fail.
The /var/adm directory contains several message files. The most recent messages are in the /var/adm/messages file. After a period of time (usually every ten days), a new messages file is automatically created. The original contents of the messages file are rotated to a file named messages.1. Over a period of time, the messages are further rotated to messages.2 and messages.3, and then deleted.
2. Issue the following command:
3. If you want to view all logged messages, issue the following command:
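The log files themselves are Solaris-specific, but the filtering pattern is portable. The following sketch fabricates a small messages-style file with hypothetical entries (the hostname, device path, and messages are invented for illustration) and scans it for warnings and errors, the same way you might scan /var/adm/messages*:

```shell
# Create a scratch copy of a Solaris-style messages file with
# hypothetical entries so the filtering below can be tried anywhere.
mkdir -p /tmp/adm-demo
cat > /tmp/adm-demo/messages <<'EOF'
Jun 10 09:42:00 t1000 scsi: WARNING: /pci@7c0/pci@0/scsi@0/sd@0,0 (sd0):
Jun 10 09:42:00 t1000 scsi: Error for Command: read(10) Error Level: Retryable
Jun 10 09:45:11 t1000 genunix: NOTICE: system temperature normal
EOF

# Show only the lines that report warnings or errors.
grep -ihE 'warning|error' /tmp/adm-demo/messages*
```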
The Automatic System Recovery (ASR) feature enables the server to automatically configure failed components out of operation until they can be replaced. In the server, the following components are managed by the ASR feature:
The database that contains the list of disabled components is called the ASR blacklist (asr-db).
In most cases, POST automatically disables a faulty component. After the cause of the fault is repaired (FRU replacement, loose connector reseated, and so on), you must remove the component from the ASR blacklist.
The ASR commands (TABLE 3-7) enable you to view the ASR blacklist and to manually add or remove components. These commands are run from the ALOM CMT sc> prompt.
showcomponent [asrkey]
Displays the system components and their status.
enablecomponent asrkey
Removes a component from the asr-db blacklist, where asrkey is the component to enable.
disablecomponent asrkey
Adds a component to the asr-db blacklist, where asrkey is the component to disable.
The showcomponent command displays the system components (asrkeys) and reports their status.
At the sc> prompt, enter the showcomponent command.
Example with no disabled components:
Example showing a disabled component:
The disablecomponent command disables a component by adding it to the ASR blacklist.
1. At the sc> prompt, enter the disablecomponent command.
2. After receiving confirmation that the disablecomponent command is complete, reset the server so that the ASR command takes effect.
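Steps 1 and 2 as a sketch, using the example DIMM name from earlier in this chapter (the reset command and its -y option are described in TABLE 3-4):

```
sc> disablecomponent MB/CMP0/CH0/R1/D0
sc> reset -y
```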
The enablecomponent command enables a disabled component by removing it from the ASR blacklist.
1. At the sc> prompt, enter the enablecomponent command.
2. After receiving confirmation that the enablecomponent command is complete, reset the server so that the ASR command takes effect.
Sometimes a server exhibits a problem that cannot be isolated definitively to a particular hardware or software component. In such cases, it might be useful to run a diagnostic tool that stresses the system by continuously running a comprehensive battery of tests. Sun provides the SunVTS software for this purpose.
This section describes the tasks necessary to use SunVTS software to exercise your server:
This procedure assumes that the Solaris OS is running on the server, and that you have access to the Solaris command line.
1. Check for the presence of SunVTS packages using the pkginfo command.
The following table lists the SunVTS packages:
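A sketch of the check (SUNWvts is the base SunVTS package name per the SunVTS documentation; companion package names vary by SunVTS release):

```
# pkginfo -l SUNWvts
```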
If SunVTS is not installed, you can obtain the installation packages from the following places:
SunVTS 6.1 software and future compatible versions are supported on the server.
SunVTS installation instructions are described in the SunVTS User's Guide.
Before you begin, the Solaris OS must be running. You also need to ensure that SunVTS validation test software is installed on your system. See Section 3.8.1, Checking Whether SunVTS Software Is Installed.
The SunVTS installation process requires that you specify one of two security schemes to use when running SunVTS. The security scheme you choose must be properly configured in the Solaris OS for you to run SunVTS. For details, refer to the SunVTS User's Guide.
SunVTS software features both character-based and graphics-based interfaces. This procedure assumes that you are using the graphical user interface (GUI) on a system running the Common Desktop Environment (CDE). For more information about the character-based SunVTS TTY interface, and specifically for instructions on accessing it by tip or telnet commands, refer to the SunVTS User's Guide.
SunVTS software can be run in several modes. This procedure assumes that you are using the default mode.
This procedure also assumes that the server is headless, that is, it is not equipped with a monitor capable of displaying bitmap graphics. In this case, you access the SunVTS GUI by logging in remotely from a machine that has a graphics display.
Finally, this procedure describes how to run SunVTS tests in general. Individual tests may presume the presence of specific hardware, or might require specific drivers, cables, or loopback connectors. For information about test options and prerequisites, refer to the following documentation:
1. Log in as superuser to a system with a graphics display.
The display system should be one with a frame buffer and monitor capable of displaying bitmap graphics such as those produced by the SunVTS GUI.
where test-system is the name of the server you plan to test.
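The command for this step is missing from this excerpt; granting the test system access to the local X display is conventionally done with xhost, for example (the /usr/openwin path is an assumption based on OpenWindows/CDE installations):

```
# /usr/openwin/bin/xhost + test-system
```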
3. Remotely log in to the server as superuser.
Use a command such as rlogin or telnet.
If you have installed SunVTS software in a location other than the default /opt directory, alter the path in the following command accordingly.
where display-system is the name of the machine through which you are remotely logged in to the server.
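A sketch of launching the SunVTS GUI from the remote login session (the -display option is described in the SunVTS User's Guide):

```
# cd /opt/SUNWvts/bin
# ./sunvts -display display-system:0
```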
The SunVTS GUI is displayed (FIGURE 3-6).
5. Expand the test lists to see the individual tests.
The test selection area lists tests in categories, such as Network, as shown in FIGURE 3-7. To expand a category, left-click the expand-category icon to the left of the category name.
FIGURE 3-7 SunVTS Test Selection Panel
6. (Optional) Select the tests you want to run.
Certain tests are enabled by default, and you can choose to accept these.
Alternatively, you can enable and disable individual tests or blocks of tests by clicking the checkbox next to the test name or test category name. Tests are enabled when checked, and disabled when not checked.
TABLE 3-8 lists tests that are especially useful to run on this server.
cmttest, cputest, fputest, iutest, l1dcachetest, dtlbtest, and l2sramtest; indirectly: mptest and systest
|
7. (Optional) Customize individual tests.
You can customize individual tests by right-clicking on the name of the test. For example, in FIGURE 3-7, right-clicking on the text string ce0(nettest) brings up a menu that enables you to configure this Ethernet test.
8. Click the Start button that is located at the top left of the SunVTS window.
Status and error messages appear in the test messages area located across the bottom of the window. You can stop testing at any time by clicking the Stop button.
During testing, SunVTS software logs all status and error messages. To view these messages, click the Log button or select Log Files from the Reports menu. This action opens a log window from which you can choose to view the following logs:
Copyright © 2007, Sun Microsystems, Inc. All Rights Reserved.