HP OpenVMS Version 8.3 Release Notes



3.11 Failure of AUDIT_SERVER to Initiate During Boot

V8.3

System managers should be aware that, during boot, if the AUDIT_SERVER fails to initiate for any reason, the startup process enters a retry loop that attempts to restart the AUDIT_SERVER until the condition preventing initiation is cleared and the AUDIT_SERVER initiates correctly. This behavior is deliberate and is designed to prevent the system from running in a compromised security state.

Conditions that can prevent complete initiation include, for example, corrupt audit server database files.

Clearing the condition might require manual intervention. The action required depends on the source of the fault. Corrective action can include restarting the AUDIT_SERVER processes on other cluster nodes or rebooting the affected node in MINIMUM state and manually correcting the fault. Corrupt database files can be isolated by renaming them and then restarting the AUDIT_SERVER; the server re-creates the absent files and populates them with system default entries.

For more information about booting options, see Chapter 4 of HP OpenVMS System Manager's Manual, Volume 1: Essentials.


Chapter 4
System Management Release Notes

This chapter contains information that applies to system maintenance and management, performance management, and networking.

For information about new features included in this version of the software, refer to the HP OpenVMS Version 8.3 New Features and Documentation Overview.

4.1 Monitor Utility Changes

V8.3

The Monitor utility (MONITOR) has undergone several changes since OpenVMS Version 7.3-2. Most of these changes provide improved formatting of the recording file and add data for additional classes. These changes have introduced some compatibility issues when data collected by one version of MONITOR is subsequently processed by another version. This section discusses these issues.

4.1.1 Version-to-Version Compatibility of MONITOR Data

Because the body of data MONITOR collects can change at each release, it is not always possible to view MONITOR data collected in one version on a different version.

The level of compatibility between releases depends on whether you examine recorded binary data from a file (that is, playback) or live data from another cluster node. In general, playing back recorded data provides more compatibility than monitoring live remote data.

4.1.2 Playing Back Data from a Recording File

Each file of recorded MONITOR binary data is identified by a MONITOR recording file-structure level ID. You can see this ID by entering the DCL command DUMP /HEADER /PAGE on the file. The following table lists some recent MONITOR versions and their associated structure level IDs:
Operating System Version                            MONITOR Recording File Structure Level ID
OpenVMS Version 7.3-2 with remedial kit (1)         MON32050
OpenVMS Versions 8.2, 8.2-1 with remedial kit (1)   MON01060


(1) These remedial kits are proposed kits that might be issued for the sole purpose of providing improved compatibility.
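
For example, you can display the structure level ID of a recording file with a command like the following (the file name is illustrative):


  $ DUMP /HEADER /PAGE MONITOR_REC.DAT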

Usually, for you to be able to play back a single MONITOR recording file, the last two digits of the structure level ID must match those of the running MONITOR version. For example, if you are running OpenVMS Version 7.3-2, you can play back a file from Version 7.3-2 but not one from Version 8.2.

However, MONITOR Versions 8.2 and higher are specially coded to read recording files with structure level IDs ending in "50." In addition, a utility in SYS$EXAMPLES, called MONITOR_CONVERT.C, converts a MONxx060 file to a MON31050 file. This allows the resulting file to be read by versions prior to Version 8.2. See MONITOR_CONVERT.C for instructions for building and running the program.
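
The build and run instructions are documented in the source file itself. Assuming a C compiler is installed, the DCL sequence might look like the following sketch; see the comments in MONITOR_CONVERT.C for the authoritative steps:


  $ CC SYS$EXAMPLES:MONITOR_CONVERT.C   ! compile the conversion program
  $ LINK MONITOR_CONVERT                ! link the resulting object file
  $ RUN MONITOR_CONVERT                 ! run it; the source comments describe its input and output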

Note that, even though you are allowed to play back a file, certain MONITOR data classes within the file might not be available. This can happen if you are using an older MONITOR version to play back a file created by a newer MONITOR version.

Finally, note that, when you produce a multifile summary from several recording files, all 8 characters of the structure level ID from all the files must match.
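
For example, a multifile summary might be produced with a playback command such as the following (file names are illustrative):


  $ MONITOR ALL_CLASSES /INPUT=(NODEA.DAT,NODEB.DAT) -
    /SUMMARY=CLUSTER.SUM /NODISPLAY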

4.1.3 Monitoring Live Remote Data across a VMScluster

V8.3

In addition to the recording file structure level ID, each MONITOR version also has an associated "server version number." The server version number identifies the version of MONITOR data to make it possible to serve live data from one node to another in an OpenVMS Cluster. For you to monitor data from another cluster node, both the monitoring node and the monitored node must have the same server version number. If the versions are not the same, the following error message is displayed:


  %MONITOR-E-SRVMISMATCH, MONITOR server on remote node is an 
                          incompatible version 
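
For example, a live remote monitoring request such as the following is subject to this version check (node names are illustrative):


  $ MONITOR MODES /NODE=(NODEA,NODEB)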

Some recent MONITOR versions and their associated server version numbers are in the following table:
Operating System Version                            MONITOR Server Version Number
OpenVMS Version 7.3-2                               5
OpenVMS Version 7.3-2 with remedial kit (1)         7
OpenVMS Versions 8.2, 8.2-1 with remedial kit (1)   8


(1) These remedial kits are proposed kits that might be issued for the sole purpose of providing improved compatibility.

If cross-node live monitoring is not possible because of version incompatibility, you might be able to view the statistics you want through a combination of recording (to a file) and playback.
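
A minimal sketch of this workaround, with illustrative class, node, and file names, is to record on the remote node and then play the copied file back locally, subject to the structure level ID rules described in Section 4.1.2:


  $ ! On the remote node: record the desired classes to a file
  $ MONITOR MODES /RECORD=NODEA_MODES.DAT /NODISPLAY
  $ ! On the local node, after copying the file: play it back
  $ MONITOR MODES /INPUT=NODEA_MODES.DAT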

4.2 Updated Recommended File Security Attributes

V8.3

The following table lists updated recommended protection profiles for files listed in the HP OpenVMS Guide to System Security. (The file VMS$PASSWORD_HISTORY.DATA is omitted from the current version of the manual but will be included in the next revision.)
File                          Protection
RIGHTSLIST.DAT                S:RWE,O:RWE,G,W
SYSUAF.DAT                    S:RWE,O:RWE,G,W
VMS$OBJECTS.DAT               S:RWE,O:RWE,G:RE,W
VMS$PASSWORD_HISTORY.DATA     S:RWE,O:RWE,G,W

The file owner should be a UIC with a group number within the system range (that is, less than the MAXSYSGROUP system parameter). Values of [1,1] or [SYSTEM] ([1,4]) are recommended.
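
One way to apply the recommended profile is with the SET SECURITY command, as in the following sketch (shown for SYSUAF.DAT):


  $ SET SECURITY /PROTECTION=(S:RWE,O:RWE,G,W) /OWNER=[SYSTEM] -
        SYS$SYSTEM:SYSUAF.DAT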

4.3 System Management Notes

The following sections describe updates to system management and maintenance.

4.3.1 CIMSERVER Process Recommendation (I64 Only)

V8.3

For optimal performance, HP recommends that the account running the CIMSERVER process (usually the SYSTEM account) have a PGFLQUOTA of at least 1 GB (PGFLQUOTA = 2000000 pagelets).
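
A minimal sketch of raising the quota with the AUTHORIZE utility follows; the new value takes effect when the process is next created:


  $ SET DEFAULT SYS$SYSTEM
  $ RUN SYS$SYSTEM:AUTHORIZE
  UAF> MODIFY SYSTEM /PGFLQUOTA=2000000
  UAF> EXIT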

This restriction will be lifted with the next release of WBEM Services for OpenVMS.

4.4 Recovering from System Hangs or Crashes (I64 Only)

V8.2

If your system hangs and you want to force a crash, press Ctrl/P from the console. The method of forcing a crash dump varies depending on whether XDELTA is loaded.

If XDELTA is loaded, pressing Ctrl/P causes the system to enter XDELTA. The system displays the instruction pointer and the current instruction. You can force a crash from XDELTA by entering ;C, as shown in the following example:


$ 
 
Console Brk at 8068AD40 
 
8068AD40!       add      r16 = r24, r16 ;;  (New IPL = 3) 
 
;C 

If XDELTA is not loaded, pressing Ctrl/P causes the system to respond with the prompt "Crash? (Y/N)". Entering Y causes the system to crash. Entering any other character has no effect on the system.

4.5 DECdtm/XA with Oracle 8i and 9i (Alpha Only)

V7.3-2

When you are using DECdtm/XA to coordinate transactions with the Oracle® 8i/9i XA Compliant Resource Manager (RM), do not use the dynamic registration XA switch (xaoswd). Version 9.0.1.0.0 of the Oracle shareable library that supports dynamic registration does not work. Always use the static registration XA switch (xaosw) to bind the Oracle RM to the DECdtm/XA Veneer.

The DECdtm/XA V2.1 Gateway now has clusterwide transaction recovery support. Transactions from applications that use a clusterwide DECdtm Gateway Domain Log can now be recovered from any single-node failure. Gateway servers running on the remaining cluster nodes can initiate the transaction recovery process on behalf of the failed node.

4.6 Device Unit Number Maximum Increased

V8.2

In the past, OpenVMS would never create more than 10,000 cloned device units, and unit numbers would wrap after 9999. This had become a limitation for some devices, such as mailboxes or TCP/IP sockets.

Starting with OpenVMS Version 7.3-2, OpenVMS will create up to 32,767 devices if the DEV$V_NNM bit is clear in UCB$L_DEVCHAR2 and if bit 2 is clear in the DEVICE_NAMING system parameter. This does not require any device driver change.

However, programs and command procedures that are coded to assume a maximum unit number of 9999 might need to be modified.
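
You can examine the current DEVICE_NAMING setting with SYSGEN, as in the following sketch (bit 2 corresponds to the value 4 in the parameter):


  $ RUN SYS$SYSTEM:SYSGEN
  SYSGEN> SHOW DEVICE_NAMING
  SYSGEN> EXIT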

4.7 ECP Data Collector and Performance Analyzer V5.5 (Alpha Only)

V8.2

Version 5.5 is the recommended version of the Enterprise Capacity and Performance (ECP) Analyzer for OpenVMS Alpha Version 8.2 and higher. Version 5.5 is backward compatible with OpenVMS Version 6.2 and higher.

Starting with OpenVMS Version 8.2, the Performance Data Collector (TDC) Version 2.1 replaces the ECP Collector. ECP Analyzer can analyze collection files created by TDC Version 2.1 and later.

The ECP Analyzer currently is not supported on OpenVMS I64.

4.8 EDIT/FDL: Fixing Recommended Bucket Size

V7.3

Prior to OpenVMS Version 7.3, when running EDIT/FDL, the calculated bucket sizes were always rounded up to the closest disk-cluster boundary, with a maximum bucket size of 63. This could cause problems when the disk-cluster size was large, but the "natural" bucket size for the file was small, because the bucket size was rounded up to a much larger value than required. Larger bucket sizes increase record and bucket lock contention, and can seriously impact performance.

OpenVMS Version 7.3 and higher modify the algorithm for calculating the recommended bucket size to suggest a more reasonable size when the disk cluster is large.
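
For example, one common way to obtain the recommended bucket size is to analyze an existing file and pass the analysis to EDIT/FDL in noninteractive mode, as in this sketch (file names are illustrative):


  $ ANALYZE /RMS_FILE /FDL /OUTPUT=CUSTOMER.FDL CUSTOMER.IDX
  $ EDIT /FDL /ANALYSIS=CUSTOMER.FDL /NOINTERACTIVE CUSTOMER.FDL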

4.9 EFI$CP Utility: Use Not Recommended

V8.2

The OpenVMS EFI$CP utility is presently considered undocumented and unsupported. HP recommends against using this utility. Certain privileged operations within this utility could render OpenVMS I64 unbootable.

4.10 EFI Shell Precautions on Shared or Shadowed System Disks

V8.2-1

On each Integrity system disk, there can be up to two FAT partitions that contain OpenVMS boot loaders, EFI applications, and hardware diagnostics. The OpenVMS bootstrap partition and, when present, the diagnostics partition are mapped, respectively, to the following container files on the OpenVMS system disk:

SYS$LOADABLE_IMAGES:SYS$EFI.SYS
SYS$MAINTENANCE:SYS$DIAGNOSTICS.SYS

The contents of these FAT partitions appear as fsn: devices at the console EFI Shell> prompt. These fsn: devices can be modified directly by commands entered at the EFI Shell> prompt and by EFI console or EFI diagnostic applications. Neither OpenVMS nor any EFI console environment that might share the system disk is notified of such partition modifications. Accordingly, you must ensure proper coordination and synchronization of these changes with OpenVMS and with any other EFI consoles that might be in use.

You must take precautions when modifying the console in configurations using either or both of the following:

You must preemptively reduce these OpenVMS system disk environments to a single-member host-based volume shadow set or to a non-shadowed system disk, and you must externally coordinate access to avoid parallel access to the Shell> prompt whenever making shell-level modifications to the fsn: devices.

If you do not take these precautions, any modifications made within the fsn: device associated with the boot partition or with the diagnostic partition can be overwritten and lost, either immediately or after the next OpenVMS host-based volume shadowing full-merge operation.

For example, when the system disk is shadowed and changes are made by the EFI console shell to the contents of these container files on one of the physical members, the volume shadowing software has no knowledge that a write was done to a physical device. If the system disk is a multiple member shadow set, you must make the same changes to all of the other physical devices that are the current shadow set members. If this is not done, when a full merge operation is next performed on that system disk, the contents of these files might regress. The merge operation might occur many days or weeks after any EFI changes are done.

Furthermore, if a full merge is active on the shadowed system disk, you should not make changes to either file using the console EFI shell.

To suspend a full merge that is in progress or to determine the membership of a shadow set, see the HBMM chapter of the HP OpenVMS Version 8.2 New Features and Documentation Overview.
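
For example, you can check shadow set membership with commands like the following (DSA0: is an illustrative shadow set name):


  $ SHOW DEVICE DSA0:          ! lists the shadow set and its members
  $ SHOW DEVICE /FULL DSA0:    ! also shows merge and copy status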

These precautions apply only to Integrity system disks that are configured for host-based volume shadowing, or that are configured and shared across multiple OpenVMS I64 systems. Configurations that are using controller-based RAID, that are not using host-based shadowing with the system disk, or that are not shared with other OpenVMS I64 systems, are not affected.

4.11 Error Log Viewer (ELV) Utility: TRANSLATE/PAGE Command

V7.3-2

If a message is signaled while you are viewing a report using the /PAGE qualifier with the TRANSLATE command, the display might become corrupted. The workaround for this problem is to refresh the display using Ctrl/W.

If you press Ctrl/Z immediately after a message is signaled, the program abruptly terminates. The workaround for this problem is to scroll past the signaled message before pressing Ctrl/Z.

4.12 External Authentication

This section contains release notes pertaining to external authentication. External authentication is an optional feature introduced in OpenVMS Version 7.1 that enables OpenVMS systems to authenticate designated users with their external user IDs and passwords. For detailed information about using external authentication, see the HP OpenVMS Guide to System Security. Also see Section 2.13.1 for a release note related to external authentication.

4.12.1 I64 External Authentication Support

V8.2

The Advanced Server for OpenVMS V7.3A ECO4 (and later) product kit contains standalone external authentication software for I64 systems in an OpenVMS cluster.

If you want to enable NT LAN Manager external authentication on OpenVMS Cluster member nodes running I64, you must copy the I64 standalone external authentication images from an Alpha system on which the Advanced Server is installed to the I64 member node, and complete the setup as described in the Advanced Server kit release notes.

4.12.2 SET PASSWORD Behavior Within a DECterm Terminal Session

V7.2

A DECterm terminal session does not have access to the external user name used for login and must prompt for one during SET PASSWORD operations. The external user name defaults to the process's OpenVMS user name. If the default is not appropriate (that is, if the external user name and mapped OpenVMS user name are different), you must enter the correct external user name.

The following example shows a SET PASSWORD operation initiated by a user with the external user name JOHN_DOE. The mapped OpenVMS user name is JOHNDOE and is the default used by the SET PASSWORD operation. In this case, the default is incorrect, so the user specified the actual external user name.


$ set password 
External user name not known; Specify one (Y/N)[Y]? Y 
External user name [JOHNDOE]: JOHN_DOE 
Old password: 
New password: 
Verification: 
%SET-I-SNDEXTAUTH, Sending password request to external authenticator 
%SET-I-TRYPWDSYNCH, Attempting password synchronization 
$ 

4.12.3 No Password Expiration Notification on Workstations

V7.1

In the LAN Manager domain, a user cannot log in once a password expires.

PC users receive notification of impending external user password expiration and can change passwords before they expire. However, when a user logs in from an OpenVMS workstation using external authentication, the login process cannot determine whether the external password is about to expire. Therefore, sites that enforce password expiration and whose users do not primarily use PCs can choose not to use external authentication for workstation users.

4.13 OpenVMS Cluster Systems

The release notes in this section pertain to OpenVMS Cluster systems.

4.13.1 OpenVMS I64 Cluster Support

V8.2

With few exceptions, OpenVMS Cluster software provides the same features on OpenVMS I64 systems as it offers on OpenVMS Alpha and OpenVMS VAX systems.

4.13.2 Temporary Exceptions

V8.2

The following exceptions are temporary:

4.13.3 Satellite Booting and LAN Failover

V8.3

For Alpha and Integrity cluster satellites, the network boot device cannot be a prospective member of a LAN Failover Set. For example, if you create a LAN Failover Set LLA, consisting of EWA and EWB, to be active when the system boots, you cannot boot the system as a satellite over LAN device EWA or EWB.

4.13.4 Creation of Error Log Dump File for Integrity Server Satellite Systems

V8.3

Integrity server satellite systems require DOSD (Dump Off System Disk) for both the system crash dump file and the system error log dump file. AUTOGEN creates the system dump file on an appropriate disk once DOSD is enabled, but it does not attempt to create the error log dump file (SYS$ERRLOG.DMP). To preserve error-log entries across a system failure, you must create the error log dump file manually.

See the HP OpenVMS System Manager's Manual, Volume 2: Tuning, Monitoring, and Complex Systems for information about enabling DOSD. After running AUTOGEN to create the dump file on the appropriate device, create the error log dump file as follows (a worked example appears after the steps):

  1. Use SYSGEN to obtain the values of the system parameters ERRORLOGBUF_S2 and ERLBUFFERPAG_S2.
  2. Calculate (ERRORLOGBUF_S2 * ERLBUFFERPAG_S2) + 10.
  3. Use the SYSGEN CREATE command to create the file as follows:


    SYSGEN> CREATE dev:[SYSn.SYSEXE]SYS$ERRLOG.DMP/SIZE=filesize 
    

    where


    dev = a device in the DOSD list 
    n = the system root for the satellite 
    filesize = the value calculated in step 2. 
    

    HP anticipates that AUTOGEN will be enhanced in the future to perform this operation.
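
As a worked example with hypothetical values, if SYSGEN reports ERRORLOGBUF_S2 = 32 and ERLBUFFERPAG_S2 = 2, the file size is (32 * 2) + 10 = 74 blocks, and the dialogue might look as follows (the device and system root are illustrative):


  $ RUN SYS$SYSTEM:SYSGEN
  SYSGEN> SHOW ERRORLOGBUF_S2
  SYSGEN> SHOW ERLBUFFERPAG_S2
  SYSGEN> CREATE DKA100:[SYS10.SYSEXE]SYS$ERRLOG.DMP /SIZE=74
  SYSGEN> EXIT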

