From: CSBVAX::MRGATE!@KL.SRI.Com:info-vax-request@kl.sri.com@SMTP 21-SEP-1987 18:48
To: EVERHART
Subj: Re: interprocess communication without privs

Received: from ucbvax.Berkeley.EDU by KL.SRI.COM with TCP; Mon 21 Sep 87 07:47:41-PDT
Received: by ucbvax.Berkeley.EDU (5.58/1.27)
	id AA07145; Mon, 21 Sep 87 07:35:17 PDT
Received: from USENET by ucbvax.Berkeley.EDU with netnews
	for info-vax@kl.sri.com (info-vax@kl.sri.com)
	(contact usenet@ucbvax.Berkeley.EDU if you have questions)
Date: 21 Sep 87 11:19:10 GMT
From: ucsdhub!jack!man!crash!jeh@sdcsvax.ucsd.edu (Jamie Hanrahan)
Organization: Simpact Associates, San Diego CA
Subject: Re: interprocess communication without privs
Message-Id: <1746@crash.CTS.COM>
References: <12336250283.21.AWALKER@RED.RUTGERS.EDU>
Sender: info-vax-request@kl.sri.com
To: info-vax@kl.sri.com

A previously-posted summary of the standard interprocess communication techniques (I can't seem to find the article, or I'd run this as a followup to it) was quite good. I can only add two things.

First, global sections alone are not enough -- you also need an interprocess synchronization technique to control access to the section. This can be common event flags (limited to processes within a UIC group), hiber/wake, or the lock manager. Which is most suitable depends on the function you want.

Second, references to "shared memory" apply only to multiple 780s with MA780 multiport memory. These are 780s running individual copies of VMS, each with its own private memory, but with a shared memory area for fast sharing of data between systems. Global sections, common event flag clusters, and mailboxes can all be created in the shared memory. I mention this because many people think that "shared memory" is synonymous with "global section" -- it isn't. 780s with MA780s are rare beasts these days, and I wouldn't spend a minute writing code to accommodate them. BUT, all is not lost.
If you can ask that the users of the program have NETMBX, you can do generalized (multi-UIC-group) interprocess communication with no other privs, by using DECnet task-to-task communication. This will naturally be a bit slow (I've measured it at 10 msec per $QIO call on a 1-MIP VAX; this is for reader and writer processes on the same node -- naturally it gets worse when there's a real internode link involved), but it will work, and it's also very general -- i.e., processes running on remote nodes use exactly the same code to talk to the "master" node as those running on the "master".

The best way to use DECnet this way is to run a "server" process that keeps track of a database that's private to itself. All other processes connect to the server and send it messages to request info from, or to write into, the database. This neatly sidesteps all synchronization issues, since the requests to the server process will be single-threaded. Some cooperation from the system manager on the node that will run the server is required, but the server process (and the account under which it runs) needs only normal privs.

If you can get PRMMBX and SYSNAM privs, you can speed things up on the node that runs the server by letting local processes talk to the server through mailboxes. Only the server need have these privs -- once it starts up, it creates the mailbox from which it will read commands; other processes on the same node create a temporary mailbox for reading responses from the server, and send the temp mailbox's physical device name (or something that can be mapped thereto) in all requests to the server. ("Here's a request; send the reply *here*.")

Note to system analysts: This is a good model for any application that needs a common database accessed by multiple "clients". The clients need know nothing about how the database is set up; they only need to know where and how to send messages. Transaction logging is simple -- just copy all the incoming request messages to a file.
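The server model, including the transaction log, can be sketched in portable terms. Again this is a Python analogy, NOT VMS code -- a TCP socket stands in for the DECnet logical link or mailbox, an in-memory list stands in for the log file, and every name is made up for illustration:

```python
# Sketch: a single-threaded server owns the database; clients send it
# request messages. Serializing requests at the server sidesteps locking,
# and appending each request to a log makes recovery a simple replay.
import json, socket, threading

def serve(listener, db, log):
    while True:                          # one request at a time
        conn, _ = listener.accept()
        with conn:
            req = json.loads(conn.makefile().readline())
            if req["op"] == "quit":
                conn.sendall(b"{}\n")
                return
            log.append(req)              # transaction log: every request
            if req["op"] == "put":
                db[req["key"]] = req["value"]
                reply = {}
            else:                        # "get"
                reply = {"value": db.get(req["key"])}
            conn.sendall((json.dumps(reply) + "\n").encode())

def request(port, **req):
    # Clients know only where and how to send messages -- nothing about
    # how the database behind the server is implemented.
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall((json.dumps(req) + "\n").encode())
        return json.loads(s.makefile().readline())

def replay(log):
    # Recovery: feed the logged requests back through the same update logic.
    db = {}
    for req in log:
        if req["op"] == "put":
            db[req["key"]] = req["value"]
    return db

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
db, log = {}, []
t = threading.Thread(target=serve, args=(listener, db, log))
t.start()
request(port, op="put", key="balance", value=100)
print(request(port, op="get", key="balance")["value"])   # 100
request(port, op="quit")
t.join()
print(replay(log) == db)                                 # True: log rebuilds db
```

The `replay` function is the whole recovery story: restore nothing but the log and you can reconstruct the database, which is the point of copying every incoming request to a file.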
If the database is later munged, you can just restore it from the most recent backup and play all the request messages that came in since that backup into the server's mailbox again. You can change the database implementation without changing the clients, too. Students of VAX/VMS will recognize this model in the job controller, among other places.

Good luck!

--- Jamie Hanrahan, Simpact Associates, San Diego, CA
pnet01!jeh@crash.CTS.COM or jeh@pnet01.CTS.COM
...sdcsvax!crash!pnet01!jeh