From:	CSBVAX::MRGATE!@KL.SRI.Com:info-vax-request@kl.sri.com@SMTP  4-NOV-1987 00:01
To:	EVERHART
Subj:	Re: Code needed to obtain system metrics

Received: from ucbvax.Berkeley.EDU by KL.SRI.COM with TCP; Mon 2 Nov 87 19:52:16-PST
Received: by ucbvax.Berkeley.EDU (5.58/1.27)
	id AA02609; Mon, 2 Nov 87 19:36:48 PST
Received: from USENET by ucbvax.Berkeley.EDU with netnews
	for info-vax@kl.sri.com (info-vax@kl.sri.com)
	(contact usenet@ucbvax.Berkeley.EDU if you have questions)
Date: 2 Nov 87 22:02:23 GMT
From: ucsdhub!jack!man!crash!jeh@sdcsvax.ucsd.edu (Jamie Hanrahan)
Organization: Simpact Associates, San Diego, CA
Subject: Re: Code needed to obtain system metrics
Message-Id: <1946@crash.CTS.COM>
References: <8710302027.AA03313@ucbvax.Berkeley.EDU>
Sender: info-vax-request@kl.sri.com
To: info-vax@kl.sri.com

In article <8710302027.AA03313@ucbvax.Berkeley.EDU> RAND@merrimack.EDU
("Rand P. Hall") writes [paraphrased]:
> How do I get [various page and swap rate data] from VMS?

VMS keeps almost all of its performance data as longword counts of events.
You sample a count and note the time; sometime later you sample the count
and note the time again.  Subtract the old count from the new count and
divide by the elapsed time to get the rate.  The time is in VMS 64-bit
binary time format (100-nanosecond ticks), so quadword subtraction (SUBL
followed by SBWC) and the EDIV instruction are recommended for computing
the elapsed time.

Here's some code excerpted from a rather fancy ReGIS color performance
monitor I wrote some time ago.  (No, I don't want to ship it over the net
unless there are a lot of requests for it; it's about 110K bytes of source.
I only mention it to establish that this code actually works!  The program
may appear on the next Symposium tape.)  It must be executed in kernel
mode because of the DSBINT/ENBINT calls.  Note that I have changed the
target operands for the MOVs to metavariables.  All of them should be
longwords except the one for the data collection time, which is a quadword.

	DSBINT	#IPL$_SYNCH			; ensure no changes during
						;  data collection

	MOVQ	G^EXE$GQ_SYSTIME, sample_time	; get current time

	MOVL	G^PMS$GL_FAULTS, total_faults

	MOVL	G^PMS$GL_PREADIO, hard_faults	; this is actually "the number
						;  of page faults resulting in
						;  page reads".  This is approx-
						;  imately equal to the number
						;  of reads from disk due to
						;  page faults -- the exceptions
						;  will be if the page fault
						;  cluster on disk is split
						;  across extents.  Note that
						;  the number of pages read in
						;  will be greater than one per
						;  fault due to page fault
						;  clustering.

; System page faults are tricky.  There is a process header for something
; called the "system process".  The system process is not really a
; process (in that it's never scheduled), but its data structures are
; a convenient place to put things.  For instance, the system and
; global page tables are in fact the P0 page table of the system
; process.  And, just as page faults for real processes are counted in
; their respective PHDs, page faults to system space are counted in
; the "system process's" PHD!

; get number of system page faults from the system "process" header
	MOVAL	G^MMG$AL_SYSPCB, R4		; R4 -> system PCB
	MOVL	PCB$L_PHD(R4), R4		; R4 -> system PHD
	MOVL	PHD$L_PAGEFLTS(R4), faults_to_system_space ; get pf count

	MOVL	G^SWP$GL_ISWPCNT, inswaps_performed

	ENBINT

There you go.  Reduction of the data is left as an exercise for the reader
(half :-); can anyone recommend a good short-term averaging method for this
sort of thing?).

Let me know if you want any others.

	--- Jamie Hanrahan
	jeh@crash.cts.com  ;  sdcsvax!crash!jeh
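
[For readers who want to work through the reduction step, here is a minimal
sketch of the arithmetic described above -- quadword subtraction with SUBL
followed by SBWC, then EDIV to get seconds.  It is not taken from the
program mentioned in the posting, and all of the storage names (OLD_TIME,
NEW_TIME, OLD_FAULTS, NEW_FAULTS, and so on) are hypothetical.]

; illustrative storage only -- these names are made up for the example
OLD_TIME:	.QUAD	0		; earlier EXE$GQ_SYSTIME sample
NEW_TIME:	.QUAD	0		; later EXE$GQ_SYSTIME sample
OLD_FAULTS:	.LONG	0		; earlier PMS$GL_FAULTS sample
NEW_FAULTS:	.LONG	0		; later PMS$GL_FAULTS sample
DELTA_TIME:	.QUAD	0		; elapsed time in 100-ns ticks
SECONDS:	.LONG	0		; elapsed whole seconds
TICKS_LEFT:	.LONG	0		; leftover ticks (ignored here)
FAULT_RATE:	.LONG	0		; faults per second

; DELTA_TIME = NEW_TIME - OLD_TIME (quadword subtraction)
	SUBL3	OLD_TIME, NEW_TIME, DELTA_TIME	; low longwords; borrow lands in C
	MOVL	NEW_TIME+4, DELTA_TIME+4	; copy high longword (MOVL leaves C alone)
	SBWC	OLD_TIME+4, DELTA_TIME+4	; high longwords, minus the borrow

; convert 100-ns ticks to whole seconds (10,000,000 ticks per second)
	EDIV	#10000000, DELTA_TIME, SECONDS, TICKS_LEFT

; rate = (new count - old count) / elapsed seconds
	SUBL3	OLD_FAULTS, NEW_FAULTS, R0	; events counted during the interval
	DIVL3	SECONDS, R0, FAULT_RATE		; faults per second

[If the sampling interval is shorter than one second, SECONDS comes out
zero and the final divide is useless; such intervals would have to be
handled with the remainder in TICKS_LEFT, or with fixed-point scaling,
instead of whole seconds.]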