This area contains a good many tools which allow implementation of VMS
functions under Unix. Everything comes with sources, though due to the
many variations of Unix, some work is likely to be needed to use any of
them on a particular Unixoid OS.

See the README.* files for some further top level info.
 
Dear UNIX world,

	this article will give you a short introduction to the products in this
	fileset. These programs are part of BOSS (batch on- and offline system
	for SAPHIR), which is one component of the SAPHIR (spectrometer
	arrangement for photon induced reactions) experiment at the
	Physikalisches Institut of the University of Bonn, Germany. Perhaps
	some of you can make use of it. If you use and/or improve it, please
	tell me so that we can inform you of changes:
		saphadm@boss1.physik.uni-bonn.de		InterNet
	This is distribution version 1.0 (watch out for the zero: not so
	distributable yet).

Good luck

	Jochen Manns

--
-----------------------------------------------------------------
| Jochen Manns            | 0228/733608                         |        
| Universitaet Bonn       |                                     |
| Physikalisches Institut | manns@boss1.physik.uni-bonn.de      |
| Nussallee 12            | pib1::manns                         |
| 5300 Bonn 1             |                                     |
| Deutschland             |                                     |
-----------------------------------------------------------------
--

1. State of BOSS

   Everything placed here is really in use, i.e. functional. Some things,
   such as the Motif/X11 interface to the parallel dispatcher, are still
   under development. I hope that all primary work - that is, everything
   besides support - will be done by the end of September 1991. The problem
   for you is the internal state: up to now we have only had time to do
   things on a basic level. Some things have great potential for extension
   (especially DCL, where all the basics such as symbols and expressions
   are done but many commands such as IF, CC etc. are missing) but are only
   frameworks with the capabilities we really need. There is only little
   documentation, and what exists is in German. Installation procedures
   exist only for our machines (Data General AViiON 300, DG/UX 4.32).

   One of the major problems for you is that UNIX is not one standard but
   many standards (as many as there are manufacturers), and that BOSS
   merges C with FORTRAN. There are standards for those, too...

2. Target people

2.1 Users
  
    Some of them will have worked with VMS for some time and would like to
    keep some utilities, especially the command language interpreter. But
    most of them - as in our case - will use the queue capabilities of
    BOSS from UNIX and/or VMS to submit DCL command files to do their work.
    Development is done under VMS (LSE, PCA, CMS, SCA and other tools) and
    production jobs are submitted to UNIX (RISC machines give more power
    for each dollar you pay). There is not much to learn, since DCL is
    valid under BOSS. In a homogeneous UNIX environment (we have four
    AViiONs with 17 MIPS each) you can make simple (!) use of the full
    power of ALL machines with the parallel facilities of BOSS. These
    support loosely coupled parallel jobs (the ratio of processing time to
    transfer time per event is on the order of 0.1 s to 4 ms, i.e. 25) in
    a homogeneous environment.

2.2 Programmers

    So, as you have seen from section 1, there is some work to do to make
    BOSS distributable. This version 1.0 is our local version, and I would
    like some people to help me make it usable for all. For specific
    problems please see the README.* files. In general, porting should be
    possible, but you will need UNIX System V.3, STREAMS (adaptation to
    V.4?), RPC and some way to access process information (parent PID, CPU
    time etc.) from root processes. NFS will help in a multiprocessing
    environment. If you want to make full use of the connection from VMS
    to the UNIX queues, you will need at least a socket-based TCP/IP under
    VMS - we use the VMS/ULTRIX connection software UCX 1.3 and RPC
    (contact me if you have sockets and want to use RPC under VMS).

    In this sense, I hope that there will be a fully distributable BOSS 2.0
    some time.

3. About BOSS

3.1 Basic products
   
    The baseline of all the DCL stuff is a LIB$TPARSE (VMS) surrogate.
    This parser works somewhat like LEX, with predefined tokens (HEX,
    SYMBOL). Besides that, an RPC server for VAX-style logical names
    (tables and access restrictions are supported) allows UNIX to make use
    of VMS file syntax. On top of these two products sits the command
    language definition CLD, which is a large (!) subset of the VMS
    facility. Even expressions are supported. A set of library routines
    helps to connect those C programs to users and to FORTRAN.

3.2 DCL shell
    
    The shell supports symbols, logical names, command files, expressions
    and full image activation via command tables. A special tool allows
    commands to be scripts of other shells, which makes administration of
    those files easy. Some lexical functions (F$EXTRACT) and a user
    interface to symbols and logical names are provided.

3.3 QMan queue management system

    Provides VAX-like queues and a CLD user interface to start (SUBMIT)
    and control (SET ENTRY) jobs and queues. Many (!) things are adapted
    from VAX/VMS. In a multiprocessing environment, static load balancing
    is supported - static meaning the load is checked once, not
    periodically as e.g. in Condor - so that ALL enclosed systems can be
    fully used without specific user interaction (so-called generic
    queues). All control interactions are done using RPC, so the user
    interface QMan could be coded to run under both UNIX and VAX/VMS. Job
    control with log files etc. is supported as under VMS (see the GERMAN
    documentation in queue/doc).

3.4 Parallel jobs

    The queue management is able to run loosely coupled parallel jobs in
    the following sense: you construct a metafile describing your data
    streams. In the simplest case there is some input source (a tape), an
    analysis program (which can be run for each "event/record" separately,
    i.e. needs no inter-event information) and an output file (e.g.
    histograms). The input stream can be split, and as many analysers can
    be run as there are free system resources (e.g. CPU time) on ONE of
    MANY machines. If more than one is running, the output has to be
    collected in a final stage. So a user can submit ONE job which makes
    simultaneous use of the CPU power of MANY machines (in our case
    currently 68 MIPS, easily scalable by processor power). The data
    streams may be as complicated as you need them (splits, cycles (beware
    of deadlocks), etc.). The only requirements are the HOMOGENEOUS
    machine environment and the LOOSELY coupled channels (the transfer
    time for a 2 KByte event is on the order of 2 milliseconds).

