Document revision date: 15 July 2002

OpenVMS Cluster Systems



Example 8-11 Sample Interactive CLUSTER_CONFIG.COM Session to Convert a Standalone Computer to a Cluster Boot Server

$ @CLUSTER_CONFIG.COM
                 Cluster Configuration Procedure 
 
    Use CLUSTER_CONFIG.COM to set up or change an OpenVMS Cluster configuration. 
    To ensure that you have the required privileges, invoke this procedure 
    from the system manager's account. 
 
    Enter ? for help at any prompt. 
 
            1. ADD a node to the cluster. 
            2. REMOVE a node from the cluster. 
            3. CHANGE a cluster node's characteristics. 
            4. CREATE a second system disk for URANUS. 
            5. MAKE a directory structure for a new root on a system disk. 
            6. DELETE a root from a system disk. 
 
    Enter choice [1]: 3
    CHANGE Menu 
 
       1. Enable URANUS as a disk server. 
       2. Disable URANUS as a disk server. 
       3. Enable URANUS as a boot server. 
       4. Disable URANUS as a boot server. 
       5. Enable the LAN for cluster communications on URANUS. 
       6. Disable the LAN for cluster communications on URANUS. 
       7. Enable a quorum disk on URANUS. 
       8. Disable a quorum disk on URANUS. 
       9. Change a satellite's Ethernet or FDDI hardware address. 
      10. Enable URANUS as a tape server. 
      11. Disable URANUS as a tape server. 
      12. Change URANUS's ALLOCLASS value. 
      13. Change URANUS's TAPE_ALLOCLASS value. 
      14. Change URANUS's shared SCSI port allocation class value. 
      15. Enable Memory Channel for cluster communications on URANUS. 
      16. Disable Memory Channel for cluster communications on URANUS. 
 
    
    Enter choice [1]: 3
    This procedure sets up this standalone node to join an existing 
    cluster or to form a new cluster. 
 
What is the node's DECnet node name? PLUTO
What is the node's DECnet address? 2.5
Will the Ethernet be used for cluster communications (Y/N)? Y
Enter this cluster's group number: 3378
Enter this cluster's password: 
Re-enter this cluster's password for verification: 
Will PLUTO be a boot server [Y]? [Return]
Verifying circuits in network database...
Enter a value for PLUTO's ALLOCLASS parameter: 1
Does this cluster contain a quorum disk [N]? [Return]
    AUTOGEN computes the SYSGEN parameters for your configuration 
    and then reboots the system with the new parameters.

8.5 Creating a Duplicate System Disk

As you continue to add Alpha computers to an Alpha common system disk or VAX computers to a VAX common system disk, you eventually reach the disk's storage or I/O capacity. At that point, you should add one or more common system disks to handle the increased load.

Reminder: A system disk cannot be shared between VAX and Alpha computers. An Alpha system cannot be created from a VAX system disk, nor a VAX system from an Alpha system disk.

8.5.1 Preparation

You can use either CLUSTER_CONFIG_LAN.COM or CLUSTER_CONFIG.COM to set up additional system disks. After you have coordinated cluster common files as described in Chapter 5, proceed as follows:

  1. Locate an appropriate scratch disk for use as an additional system disk.
  2. Log in as system manager.
  3. Invoke either CLUSTER_CONFIG_LAN.COM or CLUSTER_CONFIG.COM and select the CREATE option.

8.5.2 Example

As shown in Example 8-12, the cluster configuration command procedure:

  1. Prompts for the device names of the current and new system disks.
  2. Backs up the current system disk to the new one.
  3. Deletes all directory roots (except SYS0) from the new disk.
  4. Mounts the new disk clusterwide.

Note: OpenVMS RMS error messages are displayed while the procedure deletes directory files. You can ignore these messages.

Example 8-12 Sample Interactive CLUSTER_CONFIG.COM CREATE Session

$ @CLUSTER_CONFIG.COM
                 Cluster Configuration Procedure 
 
    Use CLUSTER_CONFIG.COM to set up or change an OpenVMS Cluster configuration. 
    To ensure that you have the required privileges, invoke this procedure 
    from the system manager's account. 
 
    Enter ? for help at any prompt. 
 
            1. ADD a node to the cluster. 
            2. REMOVE a node from the cluster. 
            3. CHANGE a cluster node's characteristics. 
            4. CREATE a second system disk for JUPITR. 
            5. MAKE a directory structure for a new root on a system disk. 
            6. DELETE a root from a system disk. 
 
    Enter choice [1]: 4
    The CREATE function generates a duplicate system disk. 
 
            o It backs up the current system disk to the new system disk. 
 
            o It then removes from the new system disk all system roots. 
 
    WARNING - Do not proceed unless you have defined appropriate 
              logical names for cluster common files in your 
              site-specific startup procedures. For instructions, 
              see the OpenVMS Cluster Systems manual. 
 
              Do you want to continue [N]? YES
    This procedure will now ask you for the device name of JUPITR's system disk. 
    The default device name (DISK$VAXVMSRL5:) is the logical volume name of 
    SYS$SYSDEVICE:. 
 
What is the device name of the current system disk [DISK$VAXVMSRL5:]? [Return]
What is the device name for the new system disk? $1$DJA16:
%DCL-I-ALLOC, _$1$DJA16: allocated 
%MOUNT-I-MOUNTED, SCRATCH mounted on _$1$DJA16: 
What is the unique label for the new system disk [JUPITR_SYS2]? [Return]
Backing up the current system disk to the new system disk... 
 
Deleting all system roots...
              Deleting directory tree SYS1... 
 
%DELETE-I-FILDEL, $1$DJA16:<SYS1>DECNET.DIR;1 deleted (2 blocks) 
   .
   .
   .
System root SYS1 deleted. 
 
              Deleting directory tree SYS2... 
 
%DELETE-I-FILDEL, $1$DJA16:<SYS2>DECNET.DIR;1 deleted (2 blocks) 
   .
   .
   .
System root SYS2 deleted. 
 
All the roots have been deleted. 
%MOUNT-I-MOUNTED, JUPITR_SYS2  mounted on _$1$DJA16: 
 
The second system disk has been created and mounted clusterwide. 
Satellites can now be added.

8.6 Postconfiguration Tasks

Some configuration functions, such as adding or removing a voting member or enabling or disabling a quorum disk, require one or more additional operations.

These operations are listed in Table 8-10 and affect the integrity of the entire cluster. Follow the instructions in the table for the action you should take after executing either CLUSTER_CONFIG_LAN.COM or CLUSTER_CONFIG.COM to make major configuration changes.

Table 8-10 Actions Required to Reconfigure a Cluster
After running the cluster configuration procedure to... You should...
Add or remove a voting member Update the AUTOGEN parameter files and the current system parameter files for all nodes in the cluster, as described in Section 8.6.1.
Enable a quorum disk Perform the following steps:
  1. Update the AUTOGEN parameter files and the current system parameter files for all quorum watchers in the cluster, as described in Section 8.6.1.
  2. Reboot the nodes that have been enabled as quorum disk watchers (Section 2.3.8).

Reference: See also Section 8.2.4 for more information about adding a quorum disk.

Disable a quorum disk Perform the following steps:

Caution: Do not perform these steps until you are ready to reboot the entire OpenVMS Cluster system. Because you are reducing quorum for the cluster, the votes cast by the quorum disk being removed could cause cluster partitioning.

  1. Update the AUTOGEN parameter files and the current system parameter files for all quorum watchers in the cluster, as described in Section 8.6.1.
  2. Evaluate whether quorum will be lost without the quorum disk:
    IF... THEN...
    Quorum will not be lost Perform these steps:
    1. Use the DCL command SET CLUSTER/EXPECTED_VOTES to reduce the value of quorum.
    2. Reboot the nodes that have been disabled as quorum disk watchers. (Quorum disk watchers are described in Section 2.3.8.)
    Quorum will be lost Shut down and reboot the entire cluster.
    Reference: Cluster shutdown is described in Section 8.6.2.

Reference: See also Section 8.3.2 for more information about removing a quorum disk.

Add a satellite node Perform these steps:
  • Update the volatile network databases on other cluster members (Section 8.6.4).
  • Optionally, alter the satellite's local disk label (Section 8.6.5).
Enable or disable the LAN for cluster communications Update the current system parameter files and reboot the node on which you have enabled or disabled the LAN (Section 8.6.1).
Change allocation class values Refer to the appropriate section, as follows:
  • Change allocation class values on HSC subsystems (Section 6.2.2.2).
  • Change allocation class values on HSJ subsystems (Section 6.2.2.3).
  • Change allocation class values on DSSI ISE subsystems (Section 6.2.2.5).
  • Update the current system parameter files and shut down and reboot the entire cluster (Sections 8.6.1 and 8.6.2).
Change the cluster group number or password Shut down and reboot the entire cluster (Sections 8.6.2 and 8.6.7).
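When quorum will not be lost, the quorum-reduction step shown in the table can be performed with a single DCL command on a running node. The following is a sketch only; the EXPECTED_VOTES value shown is illustrative and must match your cluster's new vote total:

```
$ ! Reduce the running cluster's quorum after removing the
$ ! quorum disk's vote (the value 3 is an example only)
$ SET CLUSTER/EXPECTED_VOTES=3
```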

8.6.1 Updating Parameter Files

The cluster configuration command procedures (CLUSTER_CONFIG_LAN.COM and CLUSTER_CONFIG.COM) can be used to modify parameters in the AUTOGEN parameter file for the node on which they are run.

In some cases, such as when you add or remove a voting cluster member, or when you enable or disable a quorum disk, you must update the AUTOGEN files for all the other cluster members.

Use either of the methods described in the following table.
Method Description
Update MODPARAMS.DAT files Edit MODPARAMS.DAT in all cluster members' [SYSx.SYSEXE] directories and adjust the value for the EXPECTED_VOTES system parameter appropriately.

For example, if you add a voting member or if you enable a quorum disk, you must increment the value by the number of votes assigned to the new member (usually 1). If you add a voting member with one vote and enable a quorum disk with one vote on that computer, you must increment the value by 2.

Update AGEN$ files Update the parameter settings in the appropriate AGEN$ include files:
  • For satellites, edit SYS$MANAGER:AGEN$NEW_SATELLITE_DEFAULTS.DAT.
  • For nonsatellites, edit SYS$MANAGER:AGEN$NEW_NODE_DEFAULTS.DAT.

Reference: These files are described in Section 8.2.2.
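As a sketch of the MODPARAMS.DAT method described above, suppose the cluster originally had two voting members and you add one voting member and enable a quorum disk on it (all values illustrative). Each member's MODPARAMS.DAT would then carry an entry such as:

```
! MODPARAMS.DAT fragment -- illustrative values only
! Was EXPECTED_VOTES = 2; add 1 for the new voting member
! and 1 for the quorum disk enabled on that computer
EXPECTED_VOTES = 4
```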

You must also update the current system parameter files (VAXVMSSYS.PAR or ALPHAVMSSYS.PAR, as appropriate) so that the changes take effect on the next reboot.

Use either of the methods described in the following table.
Method Description
SYSMAN utility Perform the following steps:
  1. Log in as system manager.
  2. Run the SYSMAN utility to update the EXPECTED_VOTES system parameter on all nodes in the cluster. For example:
     $ RUN SYS$SYSTEM:SYSMAN
    
    %SYSMAN-I-ENV, current command environment:
    Clusterwide on local cluster
    Username SYSTEM will be used on nonlocal nodes

    SYSMAN> SET ENVIRONMENT/CLUSTER
    SYSMAN> PARAM USE CURRENT
    SYSMAN> PARAM SET EXPECTED_VOTES 2
    SYSMAN> PARAM WRITE CURRENT
    SYSMAN> EXIT
AUTOGEN utility Perform the following steps:
  1. Log in as system manager.
  2. Use SYSMAN to run the AUTOGEN utility on all nodes in the cluster, updating the EXPECTED_VOTES system parameter. For example:
     $ RUN SYS$SYSTEM:SYSMAN
    
    %SYSMAN-I-ENV, current command environment:
    Clusterwide on local cluster
    Username SYSTEM will be used on nonlocal nodes

    SYSMAN> SET ENVIRONMENT/CLUSTER
    SYSMAN> DO @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS
    SYSMAN> EXIT

Do not specify the SHUTDOWN or REBOOT option.

Hint: If your next action is to shut down the node, you can specify SHUTDOWN or REBOOT (in place of SETPARAMS) in the DO @SYS$UPDATE:AUTOGEN GETDATA command.

Both of these methods propagate the values to the computer's ALPHAVMSSYS.PAR file on Alpha computers or to the VAXVMSSYS.PAR file on VAX computers. For these changes to take effect, continue with the instructions in either Section 8.6.2 to shut down the cluster or in Section 8.6.3 to shut down the node.

8.6.2 Shutting Down the Cluster

Using the SYSMAN utility, you can shut down the entire cluster from a single node in the cluster. Follow these steps to perform an orderly shutdown:

  1. Log in to the system manager's account on any node in the cluster.
  2. Run the SYSMAN utility and specify the SET ENVIRONMENT/CLUSTER command. Be sure to specify the /CLUSTER_SHUTDOWN qualifier to the SHUTDOWN NODE command. For example:


$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
%SYSMAN-I-ENV, current command environment:
  Clusterwide on local cluster
  Username SYSTEM will be used on nonlocal nodes
SYSMAN> SHUTDOWN NODE/CLUSTER_SHUTDOWN/MINUTES_TO_SHUTDOWN=5 -
_SYSMAN> /AUTOMATIC_REBOOT/REASON="Cluster Reconfiguration"
%SYSMAN-I-SHUTDOWN, SHUTDOWN request sent to node 
%SYSMAN-I-SHUTDOWN, SHUTDOWN request sent to node 
SYSMAN> 
 
SHUTDOWN message on JUPITR from user SYSTEM at JUPITR Batch   11:02:10 
JUPITR will shut down in 5 minutes; back up shortly via automatic reboot. 
Please log off node JUPITR. 
Cluster Reconfiguration 
SHUTDOWN message on PLUTO from user SYSTEM at PLUTO Batch   11:02:10 
PLUTO will shut down in 5 minutes; back up shortly via automatic reboot. 
Please log off node PLUTO. 
Cluster Reconfiguration

For more information, see Section 10.7.

8.6.3 Shutting Down a Single Node

To stop a single node in an OpenVMS Cluster, you can use either the SYSMAN SHUTDOWN NODE command with the appropriate SET ENVIRONMENT command or the SHUTDOWN command procedure. These methods are described in the following table.
Method Description
SYSMAN utility Follow these steps:
  1. Log in to the system manager's account on any node in the OpenVMS Cluster.
  2. Run the SYSMAN utility to shut down the node, as follows:
     $ RUN SYS$SYSTEM:SYSMAN
    
    SYSMAN> SET ENVIRONMENT/NODE=JUPITR
    Individual nodes: JUPITR
    Username SYSTEM will be used on nonlocal nodes

    SYSMAN> SHUTDOWN NODE/REASON="Maintenance" -
    _SYSMAN> /MINUTES_TO_SHUTDOWN=5

    Hint: To shut down a subset of nodes in the cluster, you can enter several node names (separated by commas) on the SET ENVIRONMENT/NODE command. The following command shuts down nodes JUPITR and SATURN:

    SYSMAN> SET ENVIRONMENT/NODE=(JUPITR,SATURN)
    
SHUTDOWN command procedure Follow these steps:
  1. Log in to the system manager's account on the node to be shut down.
  2. Invoke the SHUTDOWN command procedure as follows:
     $ @SYS$SYSTEM:SHUTDOWN
    

For more information, see Section 10.7.

8.6.4 Updating Network Data

Whenever you add a satellite, the cluster configuration command procedure you use (CLUSTER_CONFIG_LAN.COM or CLUSTER_CONFIG.COM) updates both the permanent and volatile remote node network databases (NETNODE_REMOTE.DAT) on the boot server. However, the volatile databases on other cluster members are not automatically updated.

To share the new data throughout the cluster, you must update the volatile databases on all other cluster members. Log in as system manager, invoke the SYSMAN utility, and enter the following commands at the SYSMAN> prompt:


$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
%SYSMAN-I-ENV, current command environment: 
        Clusterwide on local cluster 
        Username SYSTEM        will be used on nonlocal nodes
SYSMAN> SET PROFILE/PRIVILEGES=(OPER,SYSPRV)
SYSMAN> DO MCR NCP SET KNOWN NODES ALL
%SYSMAN-I-OUTPUT, command execution on node X...
   .
   .
   .
SYSMAN> EXIT
$ 

The file NETNODE_REMOTE.DAT must be located in the directory SYS$COMMON:[SYSEXE].
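To confirm that the database file is in the required location, you can list it with a DCL command such as:

```
$ DIRECTORY/DATE=MODIFIED SYS$COMMON:[SYSEXE]NETNODE_REMOTE.DAT
```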

8.6.5 Altering Satellite Local Disk Labels

If you want to alter the volume label on a satellite node's local page and swap disk, follow these steps after the satellite has been added to the cluster:
Step Action
1 Log in as system manager and enter a DCL command in the following format: SET VOLUME/LABEL=volume-label device-spec[:]

Note: The SET VOLUME command requires write access (W) to the index file on the volume. If you are not the volume's owner, you must have either a system user identification code (UIC) or the SYSPRV privilege.

2 Update the [SYSn.SYSEXE]SATELLITE_PAGE.COM procedure on the boot server's system disk to reflect the new label.
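For example, assuming a satellite whose local page and swap disk is $1$DIA7: (a hypothetical device) is to be relabeled PLUTO_PAGE, step 1 would be:

```
$ ! Hypothetical device name and volume label
$ SET VOLUME/LABEL=PLUTO_PAGE $1$DIA7:
```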



8.6.6 Changing Allocation Class Values

If you must change allocation class values on any HSC, HSJ, or DSSI ISE subsystem, you must do so while the entire cluster is shut down.

Reference: To change allocation class values, see Section 6.2.2.2 (HSC subsystems), Section 6.2.2.3 (HSJ subsystems), or Section 6.2.2.5 (DSSI ISE subsystems).

8.6.7 Rebooting

The following table describes booting actions for satellite and storage subsystems:
For configurations with... You must...
HSC and HSJ subsystems Reboot each computer after all HSC and HSJ subsystems have been set and rebooted.
Satellite nodes Reboot boot servers before rebooting satellites. (See Section 9.3 for more information about booting satellites.)
DSSI ISE subsystems Reboot the system after all the DSSI ISE subsystems have been set.

Note that several new messages might appear after rebooting. For example, if you have used the CLUSTER_CONFIG.COM CHANGE function to enable cluster communications over the LAN, one message reports that the LAN OpenVMS Cluster security database is being loaded. On every disk-serving computer, a message reports that the MSCP server is being loaded.

To verify that all disks are being served in the manner in which you designed the configuration, at the system prompt ($) of the node serving the disks, enter the SHOW DEVICE/SERVED command. For example, the following display represents a DSSI configuration:


$ SHOW DEVICE/SERVED


Device:  Status  Total Size  Current  Max  Hosts 
$1$DIA0   Avail     1954050        0    0      0 
$1$DIA2   Avail     1800020        0    0      0 

Caution: If you boot a node into an existing OpenVMS Cluster using minimum startup (the system parameter STARTUP_P1 is set to MIN), a number of processes (for example, CACHE_SERVER, CLUSTER_SERVER, and CONFIGURE) are not started. If you intend to run the node in an OpenVMS Cluster system, Compaq recommends that you start these processes manually; running the node without them prevents the cluster from functioning properly.
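One way to check whether a node booted with minimum startup is to display STARTUP_P1 with the SYSGEN utility (output omitted here; a value of MIN indicates minimum startup):

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW STARTUP_P1
SYSGEN> EXIT
```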

Reference: Refer to the OpenVMS System Manager's Manual for more information about starting these processes manually.

