	OpenVMS Alpha NUMA Programming Whitepaper


Author:  Karen L. Noel
Date:    1-Sep-2000
Version: 1.0

1 Introduction

The C source files rad_routines.c, rad_crmpsc.c, and rad_creprc.c are examples 
that demonstrate how to program to the OpenVMS Alpha NUMA (Non-Uniform Memory 
Access) system service interface. 

This whitepaper explains how the sample code in these files can be used 
to build other NUMA APIs, so that programmers can code to a simpler API than 
the system services themselves.

These example programs will be available in a future release of OpenVMS Alpha.

2 Background

A RAD (Resource Affinity Domain) is a software grouping of physical resources
that have similar access characteristics. On AlphaServer GS80/160/320 systems,
a RAD is the same as a Quad Building Block (QBB). One RAD can contain CPUs, 
memory, and/or I/O devices. RADs are numbered from 0 to the maximum number of 
RADs minus 1.

OpenVMS application support for RADs was first introduced in OpenVMS Alpha
Version 7.2-1H1, which was also the first release that supported the 
AlphaServer GS80/160/320 systems. 

Operating system versions prior to Version 7.2-1H1 did not contain RAD 
support and did not support any NUMA platforms. To simplify an 
application model, a program can assume that a system running an 
OpenVMS Alpha version prior to 7.2-1H1, like any non-NUMA system, 
contains only 1 RAD.

3 C Programming notes

Each source module is programmed using Compaq C for OpenVMS Alpha. The 
symbol __NEW_STARLET is defined for each module so that stronger C 
typing can be used than if __NEW_STARLET were not defined. 

Each source module is compiled with the /pointer=64 switch. This switch 
causes all pointer types to become 64 bits wide. C run-time functions, 
such as malloc(), also become 64-bit routines. When malloc() is called, 
memory is allocated from OpenVMS 64-bit process space, P2 space.

The example source modules use 64-bit item list types and 64-bit string 
descriptor types. This is not strictly necessary when the pointers 
within the structures refer to 32-bit address space. However, 32-bit 
pointers can be stored in the larger 64-bit fields and are interpreted 
correctly by the system services.

The example source modules use the event flag EFN$C_ENF, which specifies 
no event flag. This is a recommended optimization used to avoid event flag 
processing where it is not necessary.

4 RAD_ROUTINES

The source file rad_routines.c contains 5 basic information functions:

1. get_max_rads - return maximum number of RADs on system
2. get_home_rad - return current process's home RAD
3. get_rad_mem  - return amount of operating system private memory in each RAD  
4. get_max_cpus - return maximum number of CPUs on system
5. get_rad_cpus - return number of active CPUs in each RAD

4.1 get_max_rads()

The get_max_rads() function returns the maximum number of RADs on the 
system. The value returned is constant for the life of the system. The 
caller can obtain this number once and be assured that the value will 
not change while the program is executing.

	static int max_rads=0;

The global variable max_rads is initially 0 to indicate we have not 
yet obtained this value from the system. 

        /* Max RADs is a system constant, don't ask more than once */
        if (max_rads != 0) return (max_rads);

If get_max_rads() has already been called, max_rads is non-zero and we 
simply return the number stored in the global variable. 

        /* Local variables */
        ILEB_64 item_list[2];
        unsigned __int64 return_length;

The local array, item_list, contains 64-bit item list elements. ILEB_64 
is defined in iledef.h. With a 64-bit item list, the return length is 
64-bits wide.

        /* Set up RAD_MAX_RADS item list */
        item_list[0].ileb_64$w_mbo      = 1;
        item_list[0].ileb_64$l_mbmo     = -1;
        item_list[0].ileb_64$q_length   = 4;
        item_list[0].ileb_64$w_code     = SYI$_RAD_MAX_RADS;
        item_list[0].ileb_64$pq_bufaddr = &max_rads;
        item_list[0].ileb_64$pq_retlen_addr = &return_length;
        item_list[1].ileb_64$w_mbo      = 0;
        item_list[1].ileb_64$l_mbmo     = 0;
        item_list[1].ileb_64$q_length   = 0;
        item_list[1].ileb_64$w_code     = 0;
        item_list[1].ileb_64$pq_bufaddr = 0;
        item_list[1].ileb_64$pq_retlen_addr = 0;

        /* Call sys$getsyiw to get the maximum number of RADs */
        status = sys$getsyiw (
                        EFN$C_ENF,      /* efn                  */
                        0,              /* csiadr               */
                        0,              /* nodename             */
                        item_list,      /* itmlst               */
                        0,              /* I/O status block     */
                        0,              /* AST address          */
                        0);             /* AST parameter        */

The first time get_max_rads() is called, we call sys$getsyiw specifying 
no event flag, our current node, the item list array we set up before, 
no I/O status block, and no AST. We do not expect this system service 
to require any waiting.

        /* If RAD_MAX_RADS not supported, assume 1 RAD */
        if (status == SS$_BADPARAM)
        {
            max_rads = 1;
            status = SS$_NORMAL;
        }

If the item code SYI$_RAD_MAX_RADS is not supported by sys$getsyiw, the 
status value SS$_BADPARAM is returned. This indicates that the system 
version is earlier than OpenVMS Alpha V7.2-1H1. In this case, we 
indicate that the system only contains 1 RAD. 

4.2 get_home_rad()

The get_home_rad() function returns the current process's home RAD. 
Each process on a NUMA system is assigned one RAD as its home. OpenVMS 
Alpha memory management code reads the process's home RAD to determine 
from which RAD to allocate memory for the process. OpenVMS process 
scheduling code attempts to schedule processes on a CPU within the 
process's home RAD. 

An application may wish to obtain its home RAD for purposes of creating 
other related processes on the same RAD or for creating global section 
memory on the same RAD. 

The function get_home_rad() does not store the process's home RAD in a 
global variable because the home RAD may differ each time the function 
is called. A process's home RAD can be changed with the system service 
sys$set_process_properties and with the DCL command SET PROCESS.

        /* If only one RAD, our home RAD must be 0 */
        if (get_max_rads() == 1)
            return (0);

All processes on a system with one RAD (or a non-NUMA system) are 
assigned a home RAD of 0. Returning 0 when the system only has 1 RAD 
serves 2 purposes. It avoids the expense of setting up for the call to 
sys$getjpiw and calling sys$getjpiw. It also prevents sys$getjpiw from 
being called on OpenVMS Alpha versions that do not support the item 
code JPI$_HOME_RAD. 

        /* Set up HOME_RAD item list */
        item_list[0].ileb_64$q_length   = 4;
        item_list[0].ileb_64$w_code     = JPI$_HOME_RAD;
        item_list[0].ileb_64$pq_bufaddr = &home_rad;
	...
        /* Call sys$getjpiw to get this process's home RAD */
        status = sys$getjpiw (
                        EFN$C_ENF,      /* efn                  */
                        0,              /* pidadr               */
                        0,              /* prcnam               */
                        item_list,      /* itmlst               */
                        0,              /* I/O status block     */
                        0,              /* AST address          */
                        0);             /* AST parameter        */

We call sys$getjpiw specifying no event flag, our current process, the 
item list array we set up before, no I/O status block, and no AST. We 
do not expect this system service to require any waiting.

4.3 get_rad_mem()

The get_rad_mem() function returns the number of Alpha pages within 
each RAD. The values returned may not be constant for the life of the 
system. In future releases of OpenVMS Alpha, memory may be out-swapped 
or reassigned to other operating system instances in a Galaxy 
environment.

        /* Check the length of the user's buffer */
        if (buffer_length < get_max_rads()*sizeof(int))
            return (SS$_BUFFEROVF);

The caller must supply a buffer large enough to hold an array of 
integers indexed by RAD. Otherwise, the error status SS$_BUFFEROVF is 
returned.

        /* If only one RAD, just get system memory size */
        if (get_max_rads() == 1)
        {
	    ...
            item_list[0].ileb_64$q_length        = 4;
            item_list[0].ileb_64$w_code          = SYI$_MEMSIZE;
            item_list[0].ileb_64$pq_bufaddr      = &memsize;
            ...
            /* Call sys$getsyiw to get memsize */
            status = sys$getsyiw (
	    ...

If the system only has one RAD, we know that all memory is in RAD 0. 
The system may also not support the SYI$_RAD_MEMSIZE parameter to 
sys$getsyiw. In this case we use the SYI$_MEMSIZE parameter to get the 
total system memory size.

            /* On success, return page count in the user's buffer */
            if (status&1)
                buffer[0] = memsize;

If the system only has one RAD, we return the page count in buffer[0].

        /* Local type definition */
        typedef struct _rad_mem_pair {
            int rad_id;
            int page_count;
        } RAD_MEM_PAIR;

        /* Local variables */
	...
        RAD_MEM_PAIR * rad_mem_buffer;

If the system has more than one RAD, we call sys$getsyiw() to obtain 
the number of pages within each RAD. The format of the return buffer 
for the SYI$_RAD_MEMSIZE item is an array of RAD id and page count 
pairs. The RAD_MEM_PAIR type is defined locally because it is not 
supplied by sys$starlet_c.tlb and it is only used by this one function.

        /* Allocate RAD/MEM array */
        rad_mem_buffer = malloc (get_max_rads()*sizeof(RAD_MEM_PAIR));
        if (rad_mem_buffer == 0) return (SS$_INSFMEM);

We allocate an array with malloc() to hold the return information from
sys$getsyiw. On failure from malloc(), SS$_INSFMEM is returned.

        /* Set up RAD_MEMSIZE item list */
        ...
        item_list[0].ileb_64$q_length  = max_rads*sizeof(RAD_MEM_PAIR);
        item_list[0].ileb_64$w_code     = SYI$_RAD_MEMSIZE;
        item_list[0].ileb_64$pq_bufaddr = rad_mem_buffer;
        ...
        /* Call sys$getsyiw to get RAD/MEM array */
        status = sys$getsyiw (
	...

We call the system service sys$getsyiw with the item code 
SYI$_RAD_MEMSIZE. The information is returned in the rad_mem_buffer 
array.

	/* For each RAD, add up the page count */
        for (rad=0; rad<get_max_rads(); rad++)
        {
            buffer[rad] = 0;
            for (i=0; i<get_max_rads(); i++)
           	if (rad_mem_buffer[i].rad_id == rad)
                    buffer[rad] += rad_mem_buffer[i].page_count;
        }
        
We loop through the RADs, adding up the number of pages reported for 
each RAD.

Each array element in the caller's return buffer represents the number 
of Alpha pages in the RAD. If the caller then wants to store the result 
in another unit, such as bytes, the Alpha system's page size can be 
obtained. The page size multiplied by the page count provides the byte 
count. Note that bytes must be stored in a 64-bit integer type to 
support systems with more than 4 GB of memory.

Another function, get_page_size() can be included that returns the page 
size obtained from sys$getsyiw(). The page size is constant for the 
system, so it can be obtained once and stored in a global variable. 

4.4 get_max_cpus()

The get_max_cpus() function returns the maximum number of CPUs on the 
system. It is a local function because it is only used by the 
get_rad_cpus() routine. It can be changed into a global function if 
required by other modules in your application.

The value returned is constant for the system. The caller can obtain 
this number once and be assured that the value will not change while 
the program is executing.

	static int max_cpus=0;

The global variable max_cpus is initially 0 to indicate we have not 
yet obtained this value from the system. 

        /* Max CPUs is a system constant, don't ask more than once */
        if (max_cpus != 0) return (max_cpus);

If get_max_cpus() has already been called, max_cpus is non-zero and we 
simply return the number stored in the global variable. 

        /* Set up MAX_CPUS item list */
        ...
        item_list[0].ileb_64$q_length   = 4;
        item_list[0].ileb_64$w_code     = SYI$_MAX_CPUS;
        item_list[0].ileb_64$pq_bufaddr = &max_cpus;
	...
        /* Call sys$getsyiw to get the maximum number of CPUs */
        status = sys$getsyiw (
	...

The first time get_max_cpus() is called, we call sys$getsyiw specifying 
the item code SYI$_MAX_CPUS. 

Unlike the item code SYI$_RAD_MAX_RADS, the item code SYI$_MAX_CPUS has 
been supported by OpenVMS Alpha since Version 7.0. Therefore, the error 
status SS$_BADPARAM is never expected unless the program can be run 
on versions of OpenVMS Alpha prior to Version 7.0. 

4.5 get_rad_cpus()

The get_rad_cpus() function returns the number of CPUs within each RAD. 
The values returned may not be constant for the life of the system. 
CPUs may be stopped or reassigned to other operating system instances 
in a Galaxy environment.

        /* Check the length of the user's buffer */
        if (buffer_length < get_max_rads()*sizeof(int))
            return (SS$_BUFFEROVF);

The caller must supply a buffer large enough to hold an array of 
integers indexed by RAD. Otherwise, the error status SS$_BUFFEROVF is 
returned.

        /* Set up ACTIVE_CPU_MASK item list */
        ...
        item_list[0].ileb_64$q_length   = 8;
        item_list[0].ileb_64$w_code     = SYI$_ACTIVE_CPU_MASK;
        item_list[0].ileb_64$pq_bufaddr = &active_cpu_mask;
        ...
        /* Call sys$getsyiw to get active cpu mask */
        status = sys$getsyiw (
	...

The system service sys$getsyiw is called to obtain the mask of active 
CPUs on the system. The result is stored in the local variable 
active_cpu_mask. Each bit set in the mask represents a CPU that is 
currently active. A CPU may be cleared from this mask if it is stopped 
or reassigned to another Galaxy instance.

        /* If only one RAD, all active CPUs are in RAD 0 */
        if (get_max_rads() == 1)
        {
            /* Count the number of CPUs in the active CPU mask */
            buffer[0] = 0;
            for (cpu=0; cpu<get_max_cpus(); cpu++)
            {
                if ((active_cpu_mask>>cpu)&1)
                    buffer[0]++;
            }
            return (SS$_NORMAL);
        }

If the system only has one RAD, we know that all CPUs are in RAD 0. The 
system may not support the SYI$_RAD_CPUS parameter to sys$getsyiw and 
we avoid using this item code on systems that do not support it. We 
return the number of active CPUs in buffer[0].

	typedef struct _rad_cpu_id_pair {
   	    int rad_id;
   	    int cpu_id;
	} RAD_CPU_ID_PAIR;
	static RAD_CPU_ID_PAIR * rad_cpu_id_buffer=0;

The format of the return buffer for the SYI$_RAD_CPUS item is an array 
of RAD ID and CPU ID pairs. The CPUs in this array represent potential 
CPUs, not active CPUs. A potential CPU is one that can become active in 
the system without shutting down the operating system. A CPU can be 
reassigned to this operating system instance in a Galaxy environment 
and become active. Or, a CPU that was stopped can be restarted.

The RAD_CPU_ID_PAIR type is defined in the rad_routines.c module 
because it is not supplied by sys$starlet_c.tlb. 

The global variable, rad_cpu_id_buffer, is initially 0 to indicate that 
the buffer has not yet been allocated and filled in with information 
obtained from the system. The potential CPU IDs and RAD IDs returned in 
this array never change. We can obtain this array once and be assured 
that the values will not change while the program is executing.

            /* Allocate RAD/CPU buffer */
            rad_cpu_id_buffer =
                malloc ((get_max_cpus()+1)*sizeof(RAD_CPU_ID_PAIR));
            if (rad_cpu_id_buffer == 0) return (SS$_INSFMEM);

An array is allocated with malloc() to hold the return information from
sys$getsyiw. On failure from malloc(), SS$_INSFMEM is returned.

           /* Set up RAD_CPUS item list */
           ...
           item_list[0].ileb_64$q_length       = 
		(max_cpus+1)*sizeof(RAD_CPU_ID_PAIR);
           item_list[0].ileb_64$w_code         = SYI$_RAD_CPUS;
           item_list[0].ileb_64$pq_bufaddr     = rad_cpu_id_buffer;
	   ...
           /* Call sys$getsyiw to get the RAD/CPU info */
           status = sys$getsyiw (
	   ...

We call the system service sys$getsyiw to obtain the array of RAD IDs 
and potential CPU IDs. The result is stored in the buffer pointed to by 
rad_cpu_id_buffer. 

            /* On error, free memory and return */
            if (!(status&1))
            {
                free (rad_cpu_id_buffer);
                rad_cpu_id_buffer = 0;
                return (status);
            }

No errors are expected from this call. If an error occurs, we free the 
buffer allocated, clear the global pointer, and return the error 
status.

        /* Loop through array counting active CPUs in each RAD */
        i = 0;
        while (rad_cpu_id_buffer[i].cpu_id != -1)
        {
           /* Get a RAD/CPU pair */
           rad = rad_cpu_id_buffer[i].rad_id;
           cpu = rad_cpu_id_buffer[i].cpu_id;

           /* Count this CPU if it is in the active cpu mask */
           if ((active_cpu_mask>>cpu)&1)
                buffer[rad]++;
           i++;
        }

For a CPU to be counted in a RAD, it must be an active CPU. Each active 
CPU is also a potential CPU and has an entry in the rad_cpu_id_buffer 
array. All active CPUs are counted in the caller's return buffer for 
one of the RADs.

5 RAD_CRMPSC

The source file rad_crmpsc.c contains one support function, 
create_mres(), and a main routine. It calls two functions from 
rad_routines.c, get_home_rad() and get_max_rads().

5.1 create_mres()

The routine create_mres() creates and maps a memory-resident global 
section within the RAD specified. In OpenVMS Alpha Version 7.2-1H1, a 
memory-resident global section is the only type of global section that 
accepts a RAD_HINT argument. Future versions of OpenVMS Alpha may allow 
specifying a RAD for other types of global sections such as pagefile 
backed and file backed global sections.

        /* Declare global section name descriptor */
        char secnam_text[] =  "rad_crmpsc_0";
        struct dsc64$descriptor_s secnam;

        /* Initialize global section name descriptor */
        secnam.dsc64$w_mbo = 1;
        secnam.dsc64$l_mbmo = -1;
        secnam.dsc64$q_length = sizeof(secnam_text)-1;
        secnam.dsc64$b_dtype = DSC64$K_DTYPE_T;
        secnam.dsc64$b_class = DSC64$K_CLASS_S;
        secnam.dsc64$pq_pointer = secnam_text;

        /* Include RAD number in the global section name */
        secnam_text[sizeof(secnam_text)-2] += rad;

The global section name is initially set to "rad_crmpsc_0". The global 
section name descriptor is declared as a 64-bit string descriptor. The 
global section name is modified to include the RAD number by adding the 
RAD input parameter to the '0' character in the string. Mappers of the 
global section can map to the same global section by specifying the 
same name with this RAD based naming scheme.

        /*
        ** Create huge region where we can share page tables with other
        ** processes that map to this same global section.
        */
        region_length = 64;
        region_length *= 1024*1024*1024;
        status = sys$create_region_64 (
            region_length,       	/* 64gb        */
            0,                   	/* Region prot */
            VA$M_SHARED_PTS,     	/* Flags       */
            &region_id,          	/* Region ID   */
            &region_va,          	/* Region VA   */
            &region_length       	/* Region length */
        );

We create a huge virtual region in 64-bit program address space, P2 
space. The flag VA$M_SHARED_PTS is used so that all processes that map 
to the global section can use shared page tables. We calculate the size 
of the region using compiled constants. You may want to calculate the 
region size by obtaining the system's page size and the number of pages 
mapped by a page table page to ensure that the region size is a proper 
length for sharing page tables.

        /* Create the global section */
        if (get_max_rads() == 1)
	
            /* If only one RAD, don't supply RAD hint */
            status = sys$crmpsc_gdzro_64 (
		&secnam,                /* Section name */
               	0,                      /* Ident        */
               	0,                      /* Protection   */
               	mres_length,            /* Length       */
               	&region_id,             /* Region ID    */
               	0,                      /* Offset       */
               	0,                      /* Access mode  */
               	SEC$M_SYSGBL|SEC$M_EXPREG,
               	&start_va,              /* Return VA    */
               	&section_length         /* Return Length */
            );

If the system only has one RAD, we call sys$crmpsc_gdzro_64 without the
RAD_HINT flag and RAD_MASK argument. This allows the program to run on 
systems prior to OpenVMS Alpha Version 7.2-1H1 that do not support the 
RAD_HINT flag. 

            /* If more than one RAD, supply RAD hint */
            status = sys$crmpsc_gdzro_64 (
                &secnam,                /* Section name */
                0,                      /* Ident        */
                0,                      /* Protection   */
                mres_length,            /* Length       */
                &region_id,             /* Region ID    */
                0,                      /* Offset       */
                0,                      /* Access mode  */
                SEC$M_SYSGBL|SEC$M_EXPREG|SEC$M_RAD_HINT,
                &start_va,              /* Return VA    */
                &section_length,        /* Return Length */
                0,                      /* Start VA     */
                0,                      /* Map length   */
                0,                      /* Reserved length */
                1<<rad                  /* RAD mask     */
            );

If the system has more than one RAD, we call sys$crmpsc_gdzro_64 
with the RAD_HINT flag and the RAD_MASK argument. The RAD_MASK argument 
is a mask of RADs from which memory should be allocated. Each bit 
represents a RAD ID. 

If more than one RAD is specified, an error is returned. Future 
versions of OpenVMS may support specifying more than one RAD in the 
RAD_MASK argument.

	$ run rad_crmpsc
	%SYSTEM-E-NOMEMRESID, requires rights identifier VMS$MEM_RESIDENT_USER
	...
	UAF> grant/id vms$mem_resident_user 'user'

One common reason for sys$crmpsc_gdzro_64 to return an error is that the 
process does not have the VMS$MEM_RESIDENT_USER rights identifier. You 
can grant this identifier using the AUTHORIZE utility.

	$ set proc/rad=home=3
	$ run rad_crmpsc
	%SYSTEM-E-BADRAD, bad RAD specified

If the system does not contain memory in the RAD specified, the error 
status SS$_BADRAD is returned. 

Pages are allocated for the global section when each page is first 
accessed. If the specified RAD does not contain any free memory, a page 
is allocated from another RAD. Specifying RAD_HINT and RAD_MASK is just 
a hint. No guarantee is made that all memory for the global section will 
be allocated from the specified RAD.

5.2 rad_crmpsc main() 

The routine main() of the rad_crmpsc program creates a memory resident 
global section on the process's home RAD and loops writing to the 
global section.

        /* Get our process's home RAD */
        home_rad = get_home_rad();

The routine main() calls the rad_routines function get_home_rad() to 
get the home RAD of the process.

        /* Create an 8MB global section */
        mres_length = 8*1024*1024;
        status = create_mres (home_rad, mres_length, &mres_va);
        if (!(status&1)) return (status);

The local variable mres_length is the size of the global section to 
create. The value 8MB is calculated using compiled constants. This 
value must be a multiple of the Alpha page size. We call the routine 
create_mres() to create the 8MB global section in the process's home 
RAD.  If an error is returned, the main program returns the error 
status.

        /* Loop writing to the global section periodically */
        ptr = mres_va;
        while (1)
        {
	    /* Read and write global section */
            *ptr = *ptr+1;

	    /* Update pointer. If we're above VA range, start at beginning. */
            ptr = ptr+64;
            if ((unsigned __int64)ptr >= 
	       ((unsigned __int64)mres_va + mres_length))
        	ptr = mres_va;

	    /* Wait for one second */
            sleep (1);
        }

The routine main() loops forever reading and writing the global section and 
waiting. This allows the process to remain active so you can observe it. 

	$ analyze/system
	SDA> show summary/image/user='you'
	SDA> set process/id='n'

You can log into another window on the system and run SDA. You can set 
your process context to the process running rad_crmpsc.

	SDA> show process/rde

The 'show process/rde' command shows you the addresses used for the virtual 
region in P2 space.

	SDA> show gsd

The 'show gsd' command shows you the global sections created on the system.

6 RAD_CREPRC

The source file rad_creprc.c contains one support function, create_process(), 
and a main routine. It calls functions from rad_routines.c, including 
get_max_rads(), get_rad_mem(), and get_rad_cpus().

6.1 create_process()

The routine create_process() creates a detached process on the 
specified RAD. The detached process runs the rad_crmpsc.exe program.

        /* Image to run is loginout */
        $DESCRIPTOR (image,"sys$system:loginout.exe");

A 32-bit string descriptor is set up using the $DESCRIPTOR macro 
included in descrip.h. The local variable, image, contains the image 
name for the detached process to run. Since we want the process to log 
in, we specify loginout.exe.

        /* Created process invokes the rad_crmpsc command procedure */
        $DESCRIPTOR (input,"rad_crmpsc.com");

The local variable, input, contains the command procedure to use for 
the new process's sys$input logical name. In the file rad_crmpsc.com is 
one line: 

	$ run rad_crmpsc.exe

The new process just runs the rad_crmpsc.exe program described above. 
You can use a more complex command procedure with more command lines.

	/* Declare process name descriptor */
        char prcnam_text[] =  "rad_crmpsc_0";
        struct dsc$descriptor_s prcnam;

        /* Initialize process name descriptor */
        prcnam.dsc$w_length = sizeof(prcnam_text)-1;
        prcnam.dsc$b_dtype = DSC$K_DTYPE_T;
        prcnam.dsc$b_class = DSC$K_CLASS_S;
        prcnam.dsc$a_pointer = prcnam_text;

        /* Include RAD number in the process name */
        prcnam_text[sizeof(prcnam_text)-2] += rad;

The process name is initially set to "rad_crmpsc_0". The process name 
descriptor is declared as a 32-bit string descriptor. The process
name is modified to include the RAD number by adding the RAD input 
parameter to the '0' character in the string. When you look for this 
process executing in the system, you will notice this name. Because the 
RAD number is embedded in the name, you will know which RAD its home 
is supposed to be. 

        /* Create process on specified RAD */
        if (get_max_rads() == 1)

            /* If only one RAD, don't supply home RAD argument */
            status = sys$creprc (&pid,
                        &image,         /* image  */
                        &input,         /* input  */
                        &output,        /* output */
                        &error,         /* error  */
                        0,              /* prvadr */
                        0,              /* quota  */
                        &prcnam,        /* prcnam */
                        4,              /* baspri */
                        0,              /* uic    */
                        0,              /* mbxunt */
                        PRC$M_DETACH    /* stsflg */
            );

If the system only has one RAD, we call sys$creprc without the HOME_RAD 
flag and HOME_RAD argument. This allows the program to run on systems 
prior to OpenVMS Alpha Version 7.2-1H1 that do not support the 
HOME_RAD flag. 

            /* If more than one RAD, specify home RAD */
            status = sys$creprc (&pid,
                        &image,         /* image  */
                        &input,         /* input  */
                        &output,        /* output */
                        &error,         /* error  */
                        0,              /* prvadr */
                        0,              /* quota  */
                        &prcnam,        /* prcnam */
                        4,              /* baspri */
                        0,              /* uic    */
                        0,              /* mbxunt */
                        PRC$M_DETACH|PRC$M_HOME_RAD,
                        0,              /* itmlst */
                        0,              /* node   */
                        rad             /* home rad */
            );

If the system has more than one RAD, we call sys$creprc with 
the HOME_RAD flag and the HOME_RAD argument. 

If the specified RAD does not contain memory or a potential CPU, the 
error status SS$_BADRAD is returned. Only one of these resources is 
required: some memory or a potential CPU. A potential CPU does not have 
to be active in the system. Therefore, it is wise to check for active 
CPUs within the RAD before creating a process on that RAD.

6.2 rad_creprc main() 

The routine main() of the rad_creprc program creates a detached process 
on every RAD that contains both memory and at least one active CPU.

        /* Determine the maximum number of RADs on this system */
        max_rads = get_max_rads();

The routine main() calls the rad_routines function get_max_rads() to 
get the maximum number of RADs on the system. The return value is 
stored in a local variable and is used throughout the program.

        /* Get RAD/MEM info */
        mem_array = malloc (max_rads*sizeof(int));
	...
        status = get_rad_mem (mem_array,max_rads*sizeof(int));

The routine main() allocates some memory to set up for the call to
get_rad_mem(). The rad_routines function get_rad_mem() fills in the 
memory array with the number of pages of process private memory in each 
RAD.

        /* Get RAD/CPU info */
        cpu_array = malloc (max_rads*sizeof(int));
	...
        status = get_rad_cpus (cpu_array,max_rads*sizeof(int));

The routine main() allocates some memory to set up for the call to 
get_rad_cpus(). The rad_routines function get_rad_cpus() fills in the 
CPU array with the number of active CPUs in each RAD.

        /* Create a process on each RAD with CPUs and memory */
        for (rad=0; rad<max_rads; rad++)
        {
            if (mem_array[rad] && cpu_array[rad])
            {
                printf ("Creating process on RAD %d\n",rad);
                status = create_process (rad);
                if (!(status&1)) return (status);
            }
        }
 
The routine main() calls create_process() for each RAD that contains 
both memory and active CPUs. We want to ensure that the process has at 
least one active CPU to be scheduled on. We also want to ensure that 
the process has some memory in which to create the global section. We 
could have checked to ensure a minimum amount of memory in the RAD 
before creating the process.

Once rad_creprc finishes execution, you can look at the running system 
and see the processes executing. 

	$ show system/process=rad*
	...
	00000439 rad_crmpsc_0    HIB      6  47   0 00:00:01.98       475 
	0000043A rad_crmpsc_1    HIB      6  48   0 00:00:01.21       475
	0000043B rad_crmpsc_2    HIB      6  48   0 00:00:02.04       475 
	0000043C rad_crmpsc_3    HIB      5  48   0 00:00:01.76       475 

The SHOW SYSTEM DCL command shows you that the processes successfully 
started.

	$ radcheck :== $sys$test:radcheck
	$ radcheck -nosystem -noglobal -process 43c

	Process pages for process 43c:   (472 pages in 4 RADs)
	 RAD       Total      Private      Galaxy Shared
	  0        14 (  3%)       14              0
	  1        15 (  3%)       15              0
	  2        18 (  4%)       18              0
	  3       425 ( 90%)      425              0

	Home RAD for this process is 3

You can run the radcheck program in sys$test to see how much memory is 
mapped in each RAD by a process running rad_crmpsc. You can observe 
that 90% of the process's memory is from the process's home RAD, 3. The 
process maps some memory in the other 3 RADs. This memory is global 
section memory for installed images. Global sections created without 
specifying RAD_HINT use memory from all RADs.

	$ accounting/user='you'/since='time'

If a detached process encounters an error, the error status value is 
recorded in the accounting file. You can run the ACCOUNTING utility to 
see the final status returned from the detached processes. 
