From: SMTP%"DECUS-Info-owner@DECUS.Org"  1-APR-1994 17:58:29.30
To: EVERHART
CC:
Subj: DECUS '94 JOURNAL

Date: Fri, 01 Apr 1994 11:35:34 -0500 (EST)
From: DECUS_NEWS@DECUS.Org
Subject: DECUS '94 JOURNAL
To: decus-info-list@decus.org
Errors-to: DECUS-Info-owner@DECUS.Org
Warnings-to: DECUS-Info-owner@DECUS.Org
Resent-message-id: <01HANQX3S7LE8WVYN9@DECUS.Org>
Message-id: <01HANQVXKB5E8WW09V@DECUS.Org>
Organization: Digital Equipment Computer Users Society
X-VMS-To: IN%"decus-info-list@decus.org"
X-VMS-Cc: DECUS_NEWS
MIME-version: 1.0
Content-type: TEXT/PLAIN; CHARSET=US-ASCII
Content-transfer-encoding: 7BIT
Comments: Send DECUS-Info-list subscribe/unsubscribe requests to mailserv@DECUS.Org

                           DECUS '94
            The Journal of the DECUS U.S. Chapter
--------------------------------------------------------
Spring 1994                            Volume 2 Number 1

Table of Contents:

  Mobile Data Networking: The Need for a Systems Perspective
      Tom Ermolovich
  Virtual Networks
      Jon Blunt
  The Evolution of the Microsoft Desktop
  The 5-Minute Interview on DECUServe
      Laurie Maytrott
  Distributed System Design and Object-Oriented Thinking
      David Frydenlund
  Is Digital Killing VMS?
      Chris Summerfield, Phil Auberg, Brian Breton
  OpenVMS AXP Leaps to Version 6.1
      Tim Ellison, Jody Little, Mary Jane Vazquez
  Setting DECnet-VAX Executor Pipeline Quota
      Clyde Smith
  Accessing Digital Information on the Internet
  The 5-Minute Interview on GroupWare
      Dennis Roberson
  "I'll Mail It to You"
      Robert Tinkelman
  President's Column
      Marg Knox & Tom McIntyre
  Statement of Direction
--------------------------------------------------------
Articles may be reprinted from the DECUS '94 Journal with the
following statement attached - "This article is a reprint of an
article which originally appeared in the Spring 1994 DECUS '94
Journal, and is reprinted with permission.  For more information
about the DECUS Journal and the Digital Equipment Computer Users
Society (DECUS), please call Customer Service at 1-800-DECUS55."
--------------------------------------------------------
Geri Goeransson          Content Editor

Article Editors:
  Crista Allen           Mary Margaret McCormick
  Deb Driesman           Larry E. Snyder
  Marian Iannuzzi        Tom Williams
  Phil Loftis
  Mike Metzler           Digital Counterpart

Published by the membership of the DECUS US Chapter
--------------------------------------------------------
Digital Equipment Computer Users Society (DECUS) Office
334 South Street  SHR3-1/T25
Shrewsbury, MA 01545-4195
1-800-DECUS55
1-508-841-3357 (FAX)
information@decus.org (Internet)
--------------------------------------------------------

Electronic Version of the DECUS Journal

With each issue, it gets more and more difficult to include each
article in its entirety and to print all the articles submitted.  To
overcome these problems, an electronic copy of the DECUS '94 Journal
is being generated.  The DECUS '94 Journal is available via Gopher at
Gopher.DECUS.Org.  It is also available via anonymous FTP at
FTP.DECUS.Org and from MAILSERV@DECUS.ORG as the file
[PUBS]DECUS94-JOURNAL.TXT.

Each issue of the Journal will also be sent to the DECUS-INFO-LIST
mailing list at the time it is mailed in hardcopy form.  To subscribe
to the DECUS-INFO-LIST, send an E-mail message with the words
"SUBSCRIBE DECUS-INFO-LIST" to MAILSERV@DECUS.ORG.  Plans are in the
works to make copies of the Journal available in both PostScript and
ASCII text formats.  The PostScript versions should be available by
mid-April.


Mobile Data Networking: The Need for a Systems Perspective
Tom Ermolovich

"Mobile computing is changing the whole computer industry.  It's a
revolution that's likely to be viewed as more important than the
personal computer in the '80s," says Judy Hamilton, President and
CEO, Dataquest.  One of the key technologies enabling this revolution
is mobile data networking.
While much work has been done in the industry on mobile data
networking solutions such as wireless LANs and packet radio data
networks, the work of integrating these solutions into the end user's
Application Network is just beginning.  For this integration to be
complete, a systems perspective needs to be taken.  The output of this
perspective is a systems definition and a set of components to form
the basis of a complete solution for the end user.  For the mobile
computing revolution to happen, end users will need a complete
solution.

Definition of Mobile and Wireless Data Networking

A mobile data networking system will consist of both mobile and
wireless components.  In developing a systems perspective of mobile
data networks, it is necessary to understand the difference between
mobile data networking and wireless data networking.  These concepts
are often confused; worse yet, at times the terms are used
interchangeably.  The terms 'mobile' and 'wireless' are not
synonymous.

Wireless is a transmission method.  It can provide network connections
to:

  o permanent computer installations
  o temporary computer installations
  o mobile computing devices.

Mobile data networking provides connections to portable computing
devices that can be used in multiple places.  The connections to these
portable devices can be made through either a wired port or a wireless
link.  When these computers move away from their home location, they
pop up at random places and are connected back to the home site by a
variety of technologies.  The most obvious example of a portable
computing device is today's notebook computer with an integral
telephone modem.  Users carry these devices wherever they go and
connect them back to their home systems by way of wired modem
technology.

While mobile and wireless are truly different concepts, it shouldn't
be forgotten that wireless is a significant enabler of the mobile
computing industry.
When combined, they can provide untethered networking connectivity to
mobile devices.

Evolution of Mobile Computing

Some of the mobile data networking systems issues are rooted in the
evolution of computing paradigms that has been occurring since the
1960s.  Since then, there have been three major paradigm shifts:

  o 1960s - Computer Room
  o 1970s - Terminal Room
  o 1980s - Desktop
  o 1990s - Mobile

These shifts can be characterized by the location of the computing
devices, the interconnect to the user, and the computing input device.
This characterization is summarized in Figure 1.0.

================================================================
Figure 1.0  Evolution of Computing Styles

  Time     Computing        Interconnect    Input
  Frame    Location         to User         Device
  -----    -------------    ------------    ---------------
  60s      Computer Room    Sneaker         Peripherals
  70s      Back Room        Wired           Terminal
  80s      Desktop          Wired           PC
  90s      Front Line       Wireless        Personal Device
================================================================

The main capability distinguishing mobile computing from its
predecessors is the ability to deliver computing anywhere and anytime
the user wants it.  Mobile computing adds an element of convenience.

Mobile computing can also change the way an enterprise does its
business.  This concept has been described as Front Line Information
Technology (IT) because computing can be delivered to the front line,
where the enterprise meets its customers.  One example of Front Line
IT is taxi cab dispatch by packet radio.  Computerized dispatch done
over packet radio results in an optimum dispatch of taxis not possible
with CB radio.  For the taxi cab company, getting taxis to its
customers in the most timely way is the 'front line' of that business.

The mobile paradigm differs from its predecessors in two significant
ways.  First, computing devices now freely roam throughout the
network.  Second, the interconnect to these devices may be wireless.
The issues raised by these differences have to be addressed by
networking hardware and software.
This is discussed in more detail later in the article.

Mobile and Wireless Spatial Maps

Mobile data networking encompasses a wide array of technologies and
applications.  The aim of the systems perspective is to have these
addressed by the Application Network.  One way to characterize these
technologies and applications is through a series of spatial maps.
The purpose of such a map is to provide a focus for a systems
definition.  A basic spatial map is shown in Figure 2.0.

================================================================
Figure 2.0  Mobile and Wireless Spatial Map

              Worldwide    |
              Continental  |
              Regional     |
Geographic    Metropolitan |
Coverage      Campus       |
              Site         |
              Building     |
              Floor        |
              Room         |
                           ----------------------------------->
                             Fixed     Stationary     Moving

                                  Degree of Mobility
================================================================

The map is derived by plotting geographic coverage on the vertical
axis versus degree of mobility on the horizontal axis.  Geographic
coverage is defined in terms of the area covered by the transmission
technology.  It ranges from a single room to worldwide.  The degree of
mobility is defined by how often and how fast the device moves.
Devices that are fixed never move at all once they are installed; an
example would be a process control device.  Devices classified as
moving may be moving while in use; an example here could be a packet
radio dispatched taxi cab.  Falling in the middle, between fixed and
moving, is stationary.  These devices stay in one place while in use
but move often from location to location between uses.  An example of
a stationary device is an EKG machine in a hospital.

A map of Communication Services and Styles is shown in Figure 3.0.
================================================================
Figure 3.0  Communication Services and Styles

  Worldwide    |
  Continental  |            Wired POTS      Iridium
  Regional     |
  Metropolitan |
  Campus       |            Cellular,       Mobitex, ARDIS
               |            GSM, CDPD
  Site         |
  Building     |
  Floor        |       Wireless LANs
  Room         |
               ----------------------------------------------->
                 Fixed      Stationary      Moving
================================================================

A map of Applications and Styles is shown in Figure 4.0.

================================================================
Figure 4.0  Applications and Styles

  Worldwide    |                            Traveling Executive
  Continental  |
  Regional     |                            Insurance Adjuster
  Metropolitan |                            Taxicabs
               |  Vending Machine
  Campus       |
  Site         |            Student
  Building     |                            Hospital Nurse
  Floor        |            EKG Machine     Forklift
  Room         |
               ----------------------------------------------->
                 Fixed      Stationary      Moving
================================================================

These maps show there is a wide array of technologies and applications
to be supported by a mobile data network.

Elements of a Mobile Data Network

The final step in developing a systems perspective is to develop a
model showing the elements of a mobile data network.  The model will
be used to develop the system design issues that need to be addressed.

The two major components of the model are the Provider Network and the
Application Network.  Typically, these two are separate, but they
could be a single network.  Some examples of Provider Networks are
Mobitex, ARDIS, CDPD, Telco networks, X.25 networks, and cellular
phone networks.  These networks are usually owned and operated by
service providers, which usually charge end users both a subscription
fee and service fees based on usage by time and by volume of data.

On the other hand, the Application Network is usually owned by the end
user.  It consists of a network infrastructure and hosts running user
applications.
The network infrastructure is built from LANs, bridges, and switching
devices, usually either routers or switches.  The Application Network
runs networking protocols such as TCP/IP, OSI, DECnet, or IPX.

Further examination of the model reveals a great deal of commonality
between the two networks.  Both networks contain:

  o switches interconnected by cable, fiber, or wireless links
  o mobile device servers used to track the movement of devices in
    the network
  o access points providing wireless connectivity to mobile devices
  o Network Adapters (NAs) making connections to the wireless air
    interface.

The two networks are connected by some sort of gateway device such as
a terminal server, an X.25 gateway, a router, etc.

The functional aspects of the model can be shown by a couple of
examples:

o Taxi Cab - In this example, a taxi cab is dispatched through a
Mobitex packet radio network.  The Provider Network in this example is
the Mobitex network.  The taxi cab contains a special terminal with an
integral Network Adapter to the Mobitex air interface.  The terminal
in the taxi cab communicates with an access point to the Mobitex
network; in the case of Mobitex, this is a Base Station.  The Mobitex
network contains both switches and a Mobile Device Server to provide
tracking of mobile devices throughout the Mobitex network.  Connection
to the Application Network is made by a gateway; in the case of
Mobitex, this is a device on the end user's premises providing a
gateway function between Mobitex protocols and those used in the
Application Network.  Lastly, the Application Network contains a host
running the taxi cab dispatch application.

o Remote Dial-In - In this example, a traveling executive is on the
road and wishes to dial in to the home office.  The executive has a
notebook PC and is using a serial networking protocol such as Serial
Line IP (SLIP) to communicate back to the home office.  This
communication takes place using the Network Adapter, the notebook's
integral modem.
The Provider Network in this case is made up of long distance and
local exchange carriers.  These carriers provide transparent
connectivity.  The gateway device in this example consists of a modem
and a Terminal Server running SLIP.  The Terminal Server provides the
TCP/IP connection to the host for the executive.

Both of these examples demonstrate how the model provides a general
tool for describing the network.  This tool is used later to show how
system components fit into the network.

Systems Design Issues

The next step in a system design is to identify design issues that
must be overcome with a systems solution.  This section defines the
issues.  The next section, Systems Definition, provides a framework to
address them.

o Roaming

Roaming is the transparent movement of networking devices despite
changes in the point of access.  Roaming is one of the key attributes
of a mobile data network.  Support for roaming needs to occur in both
the Provider Network and the Application Network.

Provider Networks must supply seamless roaming as part of their basic
service offering to their customers.  To an Application Network, this
is a straightforward case: since the Provider Network will most likely
appear as a single subnet, roaming is transparent to the Application
Network.

Roaming in the Application Network is a bit more complicated, since
the early development of these networks did not address mobility.
Roaming in a LAN is accommodated by the LAN's addressing structure.
Roaming can also occur in a LAN extended by data link layer bridges
running the spanning tree algorithm.  Since this algorithm provides
for adaptive learning, roaming across bridge boundaries is
automatically supported.  Difficulties arise when roaming occurs
across LANs connected by network layer routers.  Most network layer
protocols rely on a location-dependent addressing structure.
Therefore, to roam across a router boundary, the address of the mobile
device must be changed manually by the network administrator.
Thus, roaming across a router boundary is not transparent.

o Wireless Access as Opposed to Wired Access

The properties of wired access solutions have been understood for a
long time, and this technology is growing in an evolutionary manner:
bandwidths have increased and error rates have decreased.  Wireless
access poses a new set of issues to be addressed.  These issues
include:

  o reduced bandwidth compared to wired access
  o high error rates in deep fades
  o hidden nodes - A can talk to B, B can talk to C, but A can't
    talk to C
  o unique security and authentication problems

Wireless by its very nature is less secure than wired transmission;
wireless LAN radio transmissions may leak outside the LAN.  There is a
similar issue with authentication: the need for it increases when
unauthorized access may occur from outside the building.

o World-Wide Solutions

Since wireless transmission is regulated by local governments around
the world, worldwide spectrum allocations don't exist.  Even where
there are frequency overlaps, the rules for using a given frequency
vary.

o Disconnected Operation

Portable devices will be used in the locations the user deems most
convenient.  In these locations, the user may not be connected to the
network, either by personal choice or for lack of wireless coverage.
When this happens, the user needs to operate in a disconnected mode by
deferring usage of electronic mail, file and database accesses, and
printing.

Systems Definition

A systems perspective of mobile data networks can be developed by
describing the hardware and software components of the system and
showing how these components address the issues developed in the
examination of the Elements of a Mobile Data Network.  This
perspective, developed from an Application Provider's viewpoint,
addresses the essential components to be provided by the Application
Network.  The systems definition can be broken down into component
categories.
The entire system can be defined with just four categories: Wireless
LANs, Public Service Access, Transparent Mobile Networking, and
Disconnected Operation.  While these components are logically
separate, some of them may be combined in actual systems.
Additionally, end user solutions may range from using a single
component to using components from all four categories.  A description
of each of the four categories follows.

o Wireless LANs

This category addresses in-building wireless connectivity.  The
individual components are access points and network adapters.

An access point is essentially a wireless-to-wired data link layer
bridge.  The access point also provides data link roaming and end user
authentication.

Network adapters provide connectivity between the computer system bus,
most likely either PCMCIA or ISA, and the wireless air interface.
Network adapters contain a radio, radio controller, MAC controller,
and computer bus interface logic.  Network adapters will be plugged
into portable devices, and most will also be plugged into access
points.  Separate network adapters will have to be developed for
different spectrum allocations around the world.  The 2.4 GHz band
holds the promise of reducing the number of adapters required for a
worldwide solution; however, there is no worldwide standard for using
the 2.4 GHz band, so a number of variations may be required.

Wireless LANs can also be used to connect buildings in a campus or
company.  This is especially useful when the buildings are separated
by a right-of-way.  This inter-building connection can be made by
simply replacing the wireless LAN's omni-directional antenna with a
directional antenna and using an access point as a remote bridge.
Note: this mode of operation is permitted by regulation in North
America; it is more difficult to get approved elsewhere in the world.

o Public Service Access

These components provide connectivity between the Application Network,
i.e., TCP/IP or OSI, and the Provider Network, i.e.,
wide area packet radio service, cellular, satellite, etc.  Two
software components are needed: one for the mobile client and one for
the fixed host.  A standard application interface such as NETBIOS or
sockets is used to minimize changes to the application program.
Support for multiple services such as Mobitex, ARDIS, and CDPD is put
beneath these interfaces to allow the user a choice of service, to
support systems using heterogeneous services, and to provide a means
by which a gateway between these services can be constructed.
Additionally, Public Service Access components need to provide
security and data compression.

Although the use of standard interfaces minimizes the need for changes
to the application program, these programs will require some
modification to make smart use of the packet radio network.  Many
application programs today have been developed for client/server
operation over a LAN.  When building an application over a LAN,
frequent communication between the client and the server can simplify
the software design.  However, if this design is simply moved to a
packet radio environment, the user may experience unusually high usage
charges.

o Transparent Mobile Networking

These components provide for roaming across router boundaries by
creating a virtual mobile subnet and providing connectivity to and
from it.  The need for these components comes from the
location-dependent addressing used by Application Network protocols.

These components also consist of a client part and a server part.  The
server is known as the Mobile Device Server (MDS).  The MDS is the
router between the virtual mobile subnet and the fixed network.  MDSs
keep track of the location of the mobile hosts in the mobile data
network.  This is accomplished by the mobile client registering with
an MDS when the mobile client is in the area assigned to that MDS.
MDSs must not only keep track of the arrival of mobile clients, they
must also keep track of mobile clients leaving.
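As a rough illustration of the MDS bookkeeping just described, the
server's core job is a location table: clients register on arrival in
the MDS's area, are forgotten on departure, and can be looked up when
traffic must be forwarded.  The sketch below is not from the article;
all class and method names are invented for the example.

```python
# Hypothetical sketch of Mobile Device Server (MDS) bookkeeping.
# A mobile client registers when it enters this MDS's area and
# deregisters when it leaves; locate() answers "where is host X?"
# so traffic can be forwarded to the right point of attachment.

class MobileDeviceServer:
    def __init__(self, area):
        self.area = area      # the portion of the network this MDS serves
        self.clients = {}     # mobile host id -> current access point

    def register(self, host, access_point):
        """Record a mobile client arriving in this MDS's area."""
        self.clients[host] = access_point

    def deregister(self, host):
        """Forget a mobile client that has left the area."""
        self.clients.pop(host, None)

    def locate(self, host):
        """Return the client's access point, or None if it is not here."""
        return self.clients.get(host)
```

A client roaming from one area to another would thus deregister from
one MDS and register with the next, which is how the arrivals and
departures the article describes get recorded.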
MDSs inform each other of the arrival or departure of mobile clients.
To provide connectivity throughout the computer network, MDSs must
also have knowledge of the location of the fixed routers in the
network.  A client/server pair of components is needed for each
Application Network protocol supported, i.e., TCP/IP, IPX, etc.  This
capability is already built into the OSI network protocol.

o Disconnected Operation

These components address end user application problems occurring when
a mobile device is operated disconnected from its network server.
Generally, the user needs a service or wants to send data to or from
the server.  Separate components have to be developed for electronic
mail, database access, file access, and printing.  By sophisticated
use of caching techniques, these components try to make users feel as
if they are indeed connected to the network.

There are three distinct phases of operation for these components.
First, while connected to the network, data the user may need if
disconnected is cached.  Second, when disconnected, these components
attempt to deliver the best possible service to the user.  Third, when
reconnected to the network, a reconciliation occurs of databases,
files, and any deferred services such as printing.

To an Application Network provider, the stationary section of the
system is the one of most interest, since this is where the highest
volume of user applications will exist.  Application Network providers
will have to supply components in all four categories.

Conclusion

Mobile data networking promises to offer users connectivity to mobile
devices anywhere at anytime.  To deliver on this promise, users need a
complete system developed using a systems perspective.
This complete system incorporates the following components:

  o Wireless LANs to address in-building wireless connectivity
  o Public Service Access to address connectivity to Provider Networks
  o Transparent Mobile Networking to address roaming throughout the
    Application Network
  o Disconnected Operation for mobile device use when separated from
    the network.


Virtual Networks
Jon Blunt

There has been a railway between Barcelona, Spain and Marseilles,
France for over a century.  At the border, the wheels of each carriage
have to be swapped, as the French and Spanish railway systems have
different gauges.  For the railway there are plans to build a new
track to Barcelona and link it to the French high-speed train system
to Paris and, through the Channel Tunnel, to London.  The network
manager often doesn't have the option of standardizing on one network.

Large organizations are going through the same proliferation of LANs
as they did with PCs a decade ago.  Very high costs are incurred one
small step at a time, as each group networks itself and links into the
backbone.  The capital cost of the hardware and software is dwarfed by
the ongoing cost of network administration.  As organizations become
more dependent upon their networks, this bottom-up model will give way
to a more structured one.

The rate of innovation in networks remains very high.  Each year
brings advances at least on the order of changes in PC technology.
Most organizations will be unable to control this by standardizing on
a single vendor or product.  Rather, they are going to adopt a policy
of live and let live, where most users are isolated from changes in
the network environment.  This is going to be essential as
internetworking between enterprises becomes more important.
Enterprises will want to create end-to-end services wherever the user
is.  For example, companies collaborating on an aerospace project need
a secure network to share design drawings, simulation data, and
programs.
Or an account manager wants access to the latest product data and
pricing models while at a customer site.  These services will not be
provided by a single technology, but by a carefully managed patchwork
of cooperating systems: the virtual network.

A virtual network provides uniform access to all resources independent
of where they reside in the infrastructure.  From the application
interface, it is irrelevant whether a process runs on the client, a
local server, or at a remote data center.  Similarly, an individual
communicates with his peers and shares data and documents with them
without having to think about what network or segment they use.
Lastly, a virtual network is managed as a single network, with central
administration of all resources.

Uniform access doesn't mean completely interchangeable services.  Most
remote and mobile users don't have the same bandwidth available as
those connected through LANs.  There are probably security and other
concerns limiting access based upon location and privileges.  Uniform
access means a single interface for application programs and tools.
Whether an application or data is accessible, and how best to run it,
is decided dynamically.  For example, in the office a geophysicist may
run the analysis of seismographs on her workstation, downloading data
from a remote server.  However, when at home with a more lowly PC, the
application software is run on a compute server, and real-time
simulation is not available.  How these choices are made depends upon
the design of the network and the application.

Current networks are an uneasy compromise between expediency and
architecture.  The origins of PC LANs are very far from the typical
corporate network environment.  As Dr. Johnson, a renowned Englishman
of letters, remarked about a dog walking on its hind legs, 'It is not
amazing that it does it well but that it does it at all.'  The same
should be said of first generation PC networks.  Yes, they had no
system management, security, or administration.
True, nodes would drop out for no apparent reason.  Yet for the
burgeoning users of PCs, these networks have come to define what
networking is, and the latest versions of NetWare and LAN Manager are
now rivaling their more staid cousins for the leading role in the
network infrastructure.  But unless the organization has grown up with
PCs or downsized to them, there is also the need to integrate
mainframes and servers of all shapes and sizes.  This is even more
complicated where there is a large base of UNIX workstations as well.

As standalone technologies, work group LANs provide a complete set of
services for the work group.  Internetworking has been managed by
bolting other services onto this infrastructure.  The result is a
hodgepodge of standards and protocols, and an environment that is
highly asymmetrical with regard to access to resources outside the
local services.

First, let's look at what the virtual network has to provide.

Single logical address space independent of physical layout
Though many existing networks predate it, this is now achievable
through OSI and TCP/IP standards.

Enterprise directory services
Again, more common than in the past.  It is now possible to centralize
the creation of account names and distribute these to directory
servers, but many organizations have either local creation or
product-level directory management.  Often the cc:Mail directory is
managed by a separate group from the Microsoft Mail community.

Definable reliability characteristics
Are the same postmaster services available to all users?  Is archiving
a subscription service?  What data format translations will the
network provide?  These are services the infrastructure can provide
for all systems and users.

Centralized management
To provide a guaranteed service level, it is necessary that the status
of the network can be monitored, problems isolated, and appropriate
problem resolution undertaken.
As important is update management and data synchronization of
applications running on the network.  Overall this is a major step
beyond the typical network management services available today.

Until recently, organizations using networks as business tools have
relied upon proprietary networks they designed and managed as a single
resource.  Now major corporations are able to run mission-critical
applications over networks based upon open and de facto industry
standards.  These standards are going to determine the future of
networking.  The most important of these are:

X.500 Directory
The ultimate industrial-strength directory service.  X.500 is held
back by the overhead it entails.  Less demanding organizations look at
the equivalent services in the Internet and proprietary offerings from
E-mail vendors and ask if it is worth the cost.

Simple Network Management Protocol (SNMP)
Not the last word in network management protocols, but it is the most
universally available.  It is the minimum standard all network
equipment should support, and it is probably all most companies need.
SNMP makes the status of network-attached equipment visible to
applications such as NetView.  It does not encompass system or
application management.

Distributed Management Environment (DME) / Desktop Management
Interface (DMI)
These are the tools for remote management of the workstation
environment, allowing enterprise administration of distributed
resources.  DME has been delayed but should be available from major
vendors this year.  DMI is an initiative from the PC community for
enabling management of the myriad components and applications that
find themselves attached to or running on client systems.  DME suffers
as a universal tool from its UNIX origins.  PCs do not have
DME-compliant environments, so DMI, the foundation of Hermes from
Microsoft, may become the de facto standard for managing the client
environment.
DMI services include configuration control, advanced installation
support, version control, and software licensing schemes.
Implementation of DMI depends upon the vendors of each attached
device, but industry groups are working to create standard control
functions for devices such as printers and modems.  The use of DMI
protocols for application management depends upon designing the
functionality into the application.  Initially, this will be done by
the programmer, but expect to see DMI control also being added to
client/server development environments.  DMI only defines how devices
and applications can make data available and receive commands; Hermes
and similar products provide the mechanism for making this happen for
specific sets of data.

Open Database Connectivity (ODBC)
Developed by Microsoft, this is likely to become the standard for
clients to interact with database servers.  ODBC provides a standard
mechanism for generating SQL commands to interact with a remote
database.  This leaves open where to locate the ODBC gateway: on the
client, or on the local server.

Remote Procedure Call (RPC)
RPC allows one application to dynamically invoke another application
to perform a task as if it were a linked module.  How RPC is
implemented depends upon the Application Programming Interface (API)
used.  The network may take in hand resolving the call using a
standard such as CORBA.

Common Object Request Broker Architecture (CORBA)
This is a standard from the Object Management Group, and it has gained
endorsement from the major vendors.  It is the standard embedded in
ObjectBroker from Digital.  CORBA enables dynamic access to remote
objects.  This breaks the need to know on which server or cluster an
application or database is mounted.  What CORBA adds to simpler RPC
mechanisms is the degree to which the service can be customized
depending upon the request.  For example, the requester may be
expecting the response in an old format.
The object broker detects this and either resolves the problem or reports the error in an intelligent manner. Object brokers will be needed as complex compound documents are shipped across networks. The problems are familiar to anyone who regularly uses the Internet for anything other than text E-mail and finds their files unreadable. It is possible to get it right, but it shouldn't be so hard.

Before being seduced by the possibilities, it is important to look at the scale of the task ahead. The history of the Internet shows that if bandwidth is priced very low, growth in demand is exponential as new users join and existing users make use of more services. Corporate networks do not come free, but it is reasonable to expect order-of-magnitude increases in bandwidth with little if any increase in price. This in turn will drive the development of more services in the networks and is going to lead to demand for support for voice and video, both broadcast and point-to-point. This points to one of the fundamental decisions facing organizations: how quickly do they need to replace the basic physical infrastructure? Future demand is one driver, but so is the unmanageability of most wiring. Getting maintenance costs down requires technologies such as star wiring and collapsed backbones, bringing the active parts of the system to a single accessible cabinet. A virtual network hides the underlying structure, but it cannot operate if the physical network is down. As companies become more dependent upon the network, it will be engineered for reliability and service, not lowest up-front cost.

There are also aspects of this change beyond the technical domain. For many organizations, centralized management requires a culture change. LANs have provided work groups with both technical and logistical autonomy from central Information Services. The argument for local administration has been flexibility and local support.
This is unpredictable, dependent upon dedicated individuals who are overloaded. It is convenient to be able to walk over to the network administrator's desk, but frustrating if he is busy doing his real job. Local autonomy is in conflict with centralization of routine tasks such as managing software licenses, updating account lists, passwords, and software upgrades. To guarantee service and to reduce cost, these administrative groups will be absorbed into the central management function. These changes will be neither smooth nor painless. IS often doesn't have the best reputation for service. The larger transformation is the development of a true service culture in successful IS groups. Technology is enabling the integration of heterogeneous environments, but IS has to rise to the opportunity to provide business as well as technical flexibility.

Jon Blunt is a consultant who works with the IS function on architecture and organizational change. He is the founder of TIAC's roundtable group for corporate information architects and has taught courses on information architecture for Digital.

The Evolution of the Microsoft Desktop

Win32 -- An API (Application Programming Interface). Based on the Win16 API, but extended to support a 32-bit flat address space, threads, Unicode, and many other advanced features.

Windows NT -- The first operating system that supported the Win32 API.

Pegasus -- Jointly developed by Microsoft and NCR, the next release of LAN Manager for UNIX. It lets UNIX servers work just like Windows NT servers. (Expected in mid-year 1994.)

Daytona -- An imminent maintenance release of Windows NT. (Expected mid-94.) Smaller, faster, several new features. Includes OLE 2.0, OpenGL (3-D graphics), and a Novell NetWare client. Multiple Win16 applications can be run in separate Windows-on-Windows sessions, improving performance.

Chicago -- An imminent major release of Microsoft Windows, replacing Windows 3.1.
Unlike Windows 3.1, and like Windows NT, Chicago completely replaces MS-DOS after bootstrap. It will provide almost the entire Win32 API, missing only the security features, so that 32-bit Windows applications will run under Chicago as well as under Windows NT.

Indy -- A rumored release of Windows NT, providing the Chicago user interface for Windows NT.

Cairo -- A future major release of Windows NT. (Sometime in 1995.) Object-oriented file system, many other major changes.

DECUS Library has Essential Tools!

Get the essential set of popular, useful, can't-do-without programs for OpenVMS AXP and OpenVMS VAX -- all on one CD-ROM from the DECUS Library! The Essential Tools Collection for OpenVMS is easy to install and contains over 100 programs, tools and utilities. The CD-ROM contains both the VAX and Alpha AXP executable images of programs, documentation files and complete sources -- each organized in its own directory. You'll find system management tools; general tools; editors; file archivers, compressors, and encoders; file transfer utilities; GNU utilities; printer symbionts; and more. The Essential Tools CD-ROM (DECUS #VS0174) is available now from the DECUS Library for $100. Order your copy today by calling 1-800-DECUS55.

The 5-Minute Interview .... DECUServe
Laurie Maytrott

Q. What is DECUServe?

A. DECUServe is the electronic information exchange and conferencing system operated by the DECUS U.S. Chapter. We like to say, it's "Where DECUS Meets Daily." There is a growing knowledge base of technical information and invaluable user problems and solutions provided by the members. The knowledge base currently boasts 36 technical conferences with almost 14,000 topics. Some of our most popular conferences are those discussing DEC_NETWORKING, HARDWARE_HELP, BUSINESS_PRACTICES, INTERNETWORKING, PATHWORKS, and VMS.
DECUServe also offers access to many of the technical Internet-based newsgroups, provides public forums for discussion of the business practices of Digital and other vendors in the Digital marketplace, and serves as a meeting place for peer-to-peer networking with other computer professionals. Efforts are underway to expand online Internet-related information services... a Gopher server is one of our most recent additions.

Q. Who subscribes to DECUServe?

A. We're pleased to have a wide variety of subscribers, including system managers, programmers, consultants, some of Digital's product managers and developers, and some third-party vendor technical support folks. It's great to see active participation by many of those we consider "gurus" in Digital-centered computing. Our subscribers are DECUS members, primarily from the U.S. Chapter, but increasingly from foreign chapters as well due to our Internet accessibility.

Q. How does DECUServe differ from the Newsgroups?

A. We've gotten a lot of input from our subscribers on this topic. They report a much lower noise-to-information ratio in the DECUServe knowledge base. They find it easier to pose questions and problems due to the more supportive DECUServe environment, with its higher standards of professional and personal courtesy. Our knowledge base is topic-streamed and indexed, which helps search and retrieval. Centralizing the information provides better synchronization of the discussions. All information is archived, so solutions to previous problems are immediately available. Dialup is available, so DECUServe can also be accessed by members who don't have Internet access from their local systems.

Q. How does DECUServe help subscribers in their jobs?

A. I like to describe the information we have online as a "knowledge base" rather than a "database." On DECUServe, we help each other put it all together. Technical solutions often come from peers who have "been there" and "done it"...
solutions span many different manuals, product lines, or platforms. Members and their employers have also seen good benefits in the areas of advocacy and business practices. Member suggestions from online discussions have been used by Digital to make improvements in some customer services.

To receive more information on becoming a DECUServe subscriber: DECUServe applications and additional information are available by calling 1-800-DECUS55, or by sending mail to application@eisner.decus.org (no mail text required; include PS in the mail subject for a PostScript file, LN03 for an LN03 print file, or leave it blank to receive a text file). A one-year membership is $75 for U.S. Chapter members. We hope to be able to meet more and more of our DECUS members online!

Laurie Maytrott is the systems manager at the Florida Solar Energy Center in Cape Canaveral, a research facility of the University of Central Florida. Laurie is also the Vice Chair of DECUServe and has been a long-time participant in its conferences. Laurie Maytrott can be reached on DECUServe (MAYTROTT) or at maytrott@eisner.decus.org (Internet).

Distributed System Design and Object-Oriented Thinking
David Frydenlund

Distributed computing can be either a powerful paradigm or a route to chaos. The result largely depends on two fundamental factors. The first is the problem space to which the technique is being applied. The second, and most important, is the architecture, or underlying philosophy, that the designers and builders bring to the process. A closer examination of these factors illustrates the power of object-oriented thinking in designing distributed processes.

Distributed Computing Parameters

Two key parameters control whether a process can be distributed. The first is separability: Can the process be pulled apart into separate modules without losing the design functionality?
The second is synchronization: How much time lag between modules can be tolerated before the process fails to perform the desired function? The ideal case for distributed computing is a process that is both modular and asynchronous. That is, the computation can be broken down into independent entities, and the entities are not dependent on time-critical sharing of data. One example of this process type is the aggregation and rectification of an organization's accounting or budgeting systems. This function was being executed as a distributed process long before the use of computers.

The worst case for distributed computing is a process that is both monolithic and synchronous: The computation is difficult to break into independent entities. Further, if it can be broken up, the entities need immediate access to the same data and intermediate results. Exact modeling of physical and engineering processes falls into this category. Certain scheduling and allocation problems, such as assigning seats for airlines, fit here as well.

Most practical computation falls somewhere between these two extremes. The history of computing, coupled with the inertia inherent in most computer training programs, fosters the assumption that most problems lie closer to the monolithic/synchronous model. By habit, systems designers and applications developers tend to apply traditional solution sets. Unfortunately, these approaches have more often been based on the capabilities of the technology then available, rather than on a careful analysis of the requirements of the problem space. Traditional assumptions and habits are frequently wrong.

Modularity

The "structured revolution" of the late 70s, led by Yourdon, DeMarco, and others, showed that most problems were not monolithic: They could -- indeed, should -- be viewed as modular. At the same time, many monolithic problems that had defied exact solution were being successfully attacked through linked, modular approximations.
Two examples of modular approximation with broad application are cellular automata and finite element analysis. As a result of these breakthroughs, hierarchies of small subroutines were "in"; huge code blocks were "out." Today, modular coding and design are predominant.

Synchronization

When this revolution began, many organizations operated with a single computer. For those enterprises with more than one system, inter-computer communications were slow and costly. The issue of synchronous versus asynchronous process interaction was rarely considered. Economical intra-computer communications were measured in fractions of seconds. Economical inter-computer communications were often measured in hours or days. To be successful, distributed processes had to be very asynchronous: that is, able to tolerate very long communication and execution time delays between modules.

Faster, cheaper computers and external communications have effected dramatic changes. Today, most organizations have more than one computer. Economical inter-computer communications are available locally in fractions of seconds and globally in seconds or minutes. The required level of process synchronization should be a major design issue. Most organizations now view processes as modular. Unfortunately, they have not taken the next step. Few designers consider how synchronous a process must be when they are building a system. Experience suggests that when an application is correctly analyzed, the answer to "How synchronous is it?" is often "Not very." Rarely does an application have synchronization requirements that must be measured in seconds; synchronization requirements of less than a second are even more rare. This does not mean that applications built with split-second exchanges and elaborate control of information are rare. It does mean that many systems could be built without these features. This would not be an issue if elaborate controls and close synchronization were free.
Unfortunately, increased controls and synchronization lead to decreased flexibility in bandwidth use and increased complexity of system code. The result is increased cost of both development and operation.

Implications

What does this mean to the system architect? Many PCs and workstations literally throw away most of their CPU cycles. Even the average desktop machine is more capable than the mainframe of just a few years ago. So why is any central computer capability required at all? Can't you "farm out" the computations and data storage to a network of these small machines, treat them as peers, and simply use some scheduling system to allocate jobs? For systems with separable processes, as well as for those for which the interprocess communication speeds are faster than the synchronization requirements, the answer is, with some reservations, "You can." The key problem is that it is not a trivial task to "simply use some scheduling system."

Architectures

There are two fundamental approaches to the scheduling problem. The most frequently used relies heavily on the familiar monolithic/synchronous model of the past. Machines on the network are specialized to be either mainly computational or mainly data storage. The data stores are known as servers. The computational machines are clients, since they are served by the servers. Data control and process synchronization in the client/server model look remarkably like the forms used in mainframe/terminal computing. To the degree that individual machines handle both computation and data storage in a shared fashion, the system becomes a peer-to-peer network.

Object-Oriented Approach

The other approach to the scheduling problem is Object-oriented. Two concepts of Object orientation are most relevant to distributed processing: encapsulation and structured messaging. Simplistically, the concept of encapsulation states that information is a combination of data and method, and that an Object treats the two as one.
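A minimal sketch of that encapsulation idea in Python (the class and message names are invented for illustration): the Object holds its data and its methods together and answers only agreed-upon queries, never exposing either directly.

```python
# Minimal sketch of encapsulation: data and method live inside the
# Object; outsiders see only structured queries and structured answers.
# Class and message names are invented for illustration.

class Account:
    def __init__(self, entries):
        self._entries = list(entries)   # internal data: never exposed

    def _total(self):                   # internal method: never exposed
        return sum(self._entries)

    def ask(self, query):
        # The only doorway in: a structured query yields a structured
        # answer in a known format, whatever the internal representation.
        if query == "balance?":
            return {"object": "Account", "answer": self._total()}
        return {"object": "Account", "error": "unknown query"}


acct = Account([100, -25, 40])
print(acct.ask("balance?"))    # {'object': 'Account', 'answer': 115}
```

Because callers depend only on the query and answer formats, the internal list could be replaced by a file or a database without any cooperating code changing.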
If a User wants to know something about the state of an Object, the User asks a question that the Object understands. Internally, the Object executes some appropriate method or process on its data and prepares an answer in a known format. In a "pure" Object system, at runtime neither the internal data nor the internal processes are available to the User. Structured messaging is the set of agreed-upon formats for queries to and answers from Objects. In a "pure" Object system, all exchanges between Objects, and therefore all program and scheduling control, exist as a series of structured messages. Addressing information is also embedded in the messages. Additionally, there may be authentication information that tells the Object that the request is legitimate and should be answered. A timestamp may be included as well.

Object Independence

The Object does not care how it received the query, nor how the answer is routed. So long as there is a fail-safe routing system for structured messages, it does not matter whether the other Object is on the same machine, in the same network, or even on the same planet. By the same token, so long as the structured message is in the agreed-upon format at both ends, or can be translated en route, it does not matter whether the other machine is of the same class, operating system, or type. (Sensors and humans can, and do, generate and receive Object-formatted information.) Physical and logical juxtapositions become a problem only when a messaging delay exceeds the synchronization requirement of the process. Thus, the key features for distributed computing are:

o Structured messaging enables multi-platform, heterogeneous computing.
o Encapsulation ensures that a local (that is, internal to the Object) change in process or data structure, so long as it does not change the semantic content, does not propagate through the system.

Distributed Allocation Example

A portion of the airline seat-scheduling process provides an illustration.
Such an application is not normally considered a candidate for distributed processing. It is usually asserted that a centralized system is required, because:

o The system has a high data access rate: Many "hits" are made on the data in a short time.
o Schedule and price information are updated frequently.
o The software must include appropriate data locks and releases to avoid unintentional double-booking of assets.

However, if the process is examined from the Object-oriented approach, a case for distribution can be made. For example, consider the following object and object interaction descriptions:

1. An Object for each "City Pair" resides on some network node. The City Pair Object would "know" the flight and fare information within applicable date windows for all useful flights by all airlines between the cities.

2. The system also contains, on appropriate nodes, a set of "Flight" Objects. When a flight number/day becomes available for scheduling, a Flight Object is created, along with appropriate seating and fare type assignments for the probable equipment. The Flight Object periodically queries the City Pair Object for current fare information and applies that information to all unbooked seats.

3. An agent wishing to book a flight activates the "Air Travel" Object in his computer. That Object queries the City Pair Object for a list of possible flights. The City Pair Object prepares and sends the list. After making a selection, the agent directs the Air Travel Object to query the Flight Object for seat availability and pricing. The Flight Object sends the information and notes an "open" communication with the agent (perhaps one of many open communications). If the Flight Object has an internal state change, such as a change of price or a seat booking, it sends a message to the Air Travel Objects tagged as "open communications." The change of state updates the Air Travel Objects, which then update their displays.

4. When an agent books a flight, the following steps are executed:

o The appropriate seats are flagged.
o Permanent prices are assigned to the booked seats.
o Status messages are sent to all open communicating Objects.
o The agent's Object closes communications with the Flight Object.

State-change messages are used instead of data locks to reduce the chance of duplicate requests for assets. So long as the syntax and semantics of the exchanged messages are maintained, no Object knows or cares how any other Object actually does its job. An internal change does not require cooperating Objects to change. And, so long as the response time is adequate, the level of distribution is irrelevant to the functionality.

Parting Thoughts

It is reasonable to ask, "If it is so easy, why don't people do it that way?" The glib answer is that people have been building loosely-coupled, asynchronous systems on this model for over twenty years. Yet few people build systems in this manner, for two principal reasons. The first is that the traditional monolithic/synchronous training and habits get in the way. System planners are not generally taught to think in an object-oriented fashion. The second is the problem of routing the structured messages. Until recently, there was no standard infrastructure for message routing: Each system's routing mechanism had to be custom-built. While for some types of systems this was simple and straightforward, for others it was complex and costly. However, the current Object-oriented "revolution" is now bringing such standards for the infrastructure to life. Common Object Request Broker Architecture (CORBA) is an important first step. Many companies, including Digital, are basing groupware and open systems products on this standard. The CORBA standard, and others like it, will be key components of distributed systems in the future. Look for them when you analyze products.
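The seat-scheduling interactions described in the example, structured messages sent to a Flight Object, with state-change notifications to "open" communications taking the place of data locks, can be condensed into a sketch. This is a simplified illustration in Python, not any particular CORBA product, and all names are invented.

```python
# Condensed sketch of the seat-scheduling example: a Flight Object
# answers structured messages and, instead of locking data, broadcasts
# state-change messages to agents with open communications.
# All names are invented for illustration.

class FlightObject:
    def __init__(self, flight, seats):
        self.flight = flight
        self._seats = {s: None for s in seats}   # seat -> booking agent
        self._open = []                          # agents in open communication

    def handle(self, message):
        kind = message["request"]
        if kind == "availability?":
            self._open.append(message["reply_to"])   # note the open channel
            return [s for s, who in self._seats.items() if who is None]
        if kind == "book":
            seat, agent = message["seat"], message["reply_to"]
            if self._seats[seat] is not None:
                return {"status": "refused", "seat": seat}  # already taken
            self._seats[seat] = agent
            # A state-change message replaces a data lock: every open
            # channel learns the seat is gone and updates its display.
            for other in self._open:
                other.notify({"flight": self.flight, "seat": seat,
                              "state": "booked"})
            return {"status": "booked", "seat": seat}


class AirTravelObject:
    """The agent-side Object; here it just records notifications."""

    def __init__(self, name):
        self.name = name
        self.seen = []            # state-change messages received

    def notify(self, message):
        self.seen.append(message)


flight = FlightObject("DE100", ["1A", "1B"])
alice, bob = AirTravelObject("alice"), AirTravelObject("bob")

flight.handle({"request": "availability?", "reply_to": alice})
flight.handle({"request": "availability?", "reply_to": bob})
result = flight.handle({"request": "book", "seat": "1A", "reply_to": alice})
print(result)               # {'status': 'booked', 'seat': '1A'}
print(bob.seen[0]["seat"])  # bob's display updates without any lock held
```

Note that a late duplicate request is simply refused when it arrives; neither agent ever holds a lock on the Flight Object's data.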
It is almost certain that, by the year 2000, most computing will be distributed, peer-to-peer, and Object-based. Without knowledge and understanding of this technology, it will be impossible for the system designer/developer to be effective in Information Systems.

David Frydenlund is a Principal Partner in Terman Frydenlund Applied Technologies in Belmont, California. He has over twenty years of experience in systems analysis, design and construction. The partnership focuses on the leading edge of Information Technology through training, consulting, and research and development. Mr. Frydenlund can be reached on the Internet at FRYDENLUND@DECUS.ORG or tfat@netcom.com. He can be reached by phone at 415.594.9487.

Is Digital Killing VMS?
Chris Summerfield, Phil Auberg, Brian Breton

On January 11, 1994, a meeting of the Greater Boston DECUS LUG examined the question, "Is Digital Killing VMS?" In the meeting, held on the campus of M.I.T. in Cambridge, Chris Summerfield, a Product Manager from a third-party software company, argued that Digital was consciously or unconsciously killing VMS. Phil Auberg, Systems Marketing Manager for Digital, argued that Digital was not killing VMS and certainly has no intention to do so in the foreseeable future. A transcript of the meeting, including Q&A, follows. Additional statements by Chris Summerfield and Brian Breton, OpenVMS Product Manager, are also included. These statements were not part of the meeting.

Comments by Chris Summerfield

I recently returned from a whirlwind tour of customer sites, customers of my own company in the United States and in Europe who are, because of our mutual business interests, also customers of Digital and heavily invested in VMS. I was dismayed to find so many of them making plans to abandon VMS, or at least to protect their business interests when VMS "went away," as they all seemed to believe that it would. Why, I asked, has this belief spread like wildfire in the last year?
Why has the marketplace suddenly concluded that the demise of VMS is inevitable (if not imminent)? A little reflection on the opinions expressed by our customers provides an answer that now seems pretty clear. A variety of different developments have shown up like signposts on the Digital computing landscape. I would like to briefly describe some of them.

Middleware and Enabling Technologies are Abandoning VMS

Over the years, we have observed that within Digital, products and the groups and organizations that exist to support them have had to compete, to some extent, for resources. But the greatest and possibly least recognized responsibility of product managers and their organizations is to ensure their products receive the corporate commitment and support necessary to provide for their long-term viability in the marketplace. If you are a product manager responsible for a world-class operating system, in other words, it isn't enough that you have a world-class operating system and that you make it better and better every year. You also have to ensure your operating system is supported on the hardware platforms, and that it has the support software -- now known as "middleware" -- necessary to make applications viable in the marketplace. We will discuss some of these middleware products in more detail in a moment. The point I want to make right now is that applications are not going to be built for VMS unless these facilitating software products are in place and supported on VMS. And these middleware products are now falling away from VMS like leaves on an oak tree after the first winter frost. Take a lesson from Microsoft. Take a lesson, even, from the Windows NT group at Digital. Wake up, for God's sake. The ship is sinking, and you think that if you provide some performance and security improvements and implement a new, better and faster file system, you will have done something to ensure a long and prosperous future for VMS?!?!
That's all very wonderful, and if you offer to do it, I certainly won't refuse to accept it. But it's a little like painting the deck and buying new furniture for the lounge of the Titanic. If you don't save the ship from sinking, it isn't going to matter much.

VMS is Failing to Keep up with Advances in Interactive Technology

In the past 10 years, many companies have invested many millions of dollars in VMS to provide them with a secure, reliable, extensible, interactive, multi-user environment for their corporate data processing needs. Support for the latest and best graphics and desktop interactive technologies has always been a part of this future. Now that the desktop interactive environment has grown to encompass full-motion video and even some occasional audio, we are suddenly calling this interactive environment by a new name, "multimedia." More importantly, VMS has suddenly forgotten about its commitment to support the interactive user environment. VMS has suddenly become "stuck" in the technology of the 1980s and has forgotten about its commitment to keep up with the times. Digital's corporate strategy for multimedia has left VMS completely out in the cold, and VMS product management doesn't seem to care. One example (there are others, but I'll review one of the simplest): the people in APSG Light & Sound (finally, after years of nagging, and because they suddenly found that they needed to really do something if they were going to keep a job at Digital) have done a fine job of implementing the DECtalk algorithms in software and making a commercial product out of it. Wonderful! All those corporate users who have been customers and supporters of DECtalk for the last decade will appreciate this! Most of them, of course, have also been (and still are) customers and supporters of VMS. So... guess what? The new DECtalk software implementation, known as DECtalk V4.1, runs and is supported under OSF/1 and Windows NT, but not under VMS!! Why not? You want my guess?
Maybe you don't, but my guess is that it wasn't "convenient" for the people in APSG Light & Sound at the time, possibly because no one in the group happened to know how to write a VMS driver, and because VMS product management was asleep at the wheel, eyes closed and mind out of gear, much too concerned about some new internal performance improvement in the operating system to pay attention to what other enabling software products were being produced by other groups within the company. To conclude my remarks on this point, let me simply say to VMS product management and the corporate management at Digital: As multimedia enabling software and "middleware" products are introduced, if you have not taken every possible step to ensure those products are supported from "Day 1" under OpenVMS, then you are killing VMS. You who have failed to see to it that these products run on VMS, you are the culpable parties. When the score is tallied on judgment day, you will be held responsible for the death of VMS.

Digital is Failing to Provide Support for VMS on Portable Systems

Another way that the face of computing continues to change, of course, is as a consequence of the incredible shrinking machine. Since the days of ENIAC, computers have continually become smaller and smaller. The terminals and devices people use to interact with computers have moved from environmentally-controlled laboratories to office environments. Systems that once weighed in at several metric tons now sit on a desktop, or even in your lap. VMS, we have always been told, was the ultimate scalable operating system. So we believed, as we were led to do, that VMS would follow us and be with us and support us across the full spectrum of corporate computing systems. Now, some of the companies with whom I have recently talked and visited have over a thousand VMS systems today. They have each invested between $10 million and $100 million in hardware alone.
Add to that their investment in software and expertise developed for the VMS environment, and you will quickly see that they have a serious interest in the future of VMS. And they have all been asking the same question: "Where is the Digital laptop product that runs VMS?" We know, of course, that Digital had a prototype laptop computer that ran VMS years ago. It was one of Ken Olsen's personal decisions to kill this project. Without knowing (because we weren't privy to Ken's decision at the time), we assume that it might have been in part because the product was not based on Alpha (which had not at that time been announced). No matter the reason at that time, the question of significance now is, "What are we doing about it today?" What is happening today is that Digital is on the verge of formally introducing as a product a laptop that incorporates an Alpha AXP processor. And unless I miss my guess, Digital corporate management and OpenVMS product management have done nothing to ensure OpenVMS will run on and be supported on that laptop from Day 1. Unless I miss my guess (which would delight me), VMS product management and the corporate management of Digital will once again be proven to be the culpable parties in this assassination. To conclude my remarks on this point, let me simply say to VMS product management and the corporate management at Digital: If you are not taking every possible step to ensure that OpenVMS will be running and supported on the AXP laptop when it is first released as a product, then you are killing VMS.

Digital is Failing to Provide a Full Implementation of DCE on VMS

Another way the face of computing continues to change is in the move to open systems -- the playing field of interoperability between heterogeneous systems from different vendors. One of the foundation technologies for the support of interoperation in an open systems environment is the Distributed Computing Environment (DCE) from OSF.
Digital DCE is Digital's implementation of the OSF DCE, the enabling software technology for the development of distributed applications. DCE provides support to multiple nodes for a variety of common services, including name services, time services, and remote procedure call. [Those of you who are already familiar with DCE, please forgive me while I interject a brief tutorial...] A group of nodes that share such services is known as a cell. The existence of a cell requires at least one DCE server. The nodes that share access to the common services are known as clients. So Digital DCE (the software product) comes in two flavors: server and client. Implementation of Digital DCE requires at least one server and one or more clients. "Digital DCE for OSF/1" provides support for both servers and clients. "Digital DCE for OpenVMS" (VAX and AXP) provides support for clients only. Therefore, it is not possible to implement a DCE configuration using OpenVMS systems alone, or using OpenVMS systems in combination with other clients (such as Windows NT or Sun workstations). It is, of course, possible to implement a DCE cell using OSF/1 systems alone, or OSF/1 systems in combination with Windows NT, Windows 3.1, and Sun workstations. But not with OpenVMS. Digital has not given the market any indication that it EVER intends to provide DCE server support on OpenVMS. It may do so... someday... but it won't matter, because the message has already been received loud and clear by the marketplace: "Digital doesn't care enough about VMS to provide DCE server support from Day 1 on this platform. VMS is Digital's 'also-ran' operating system, a poor step-child that may or may not get hand-me-downs from the other operating systems groups someday. But it is not good enough to get first-line support. Whether accidentally or on purpose (it doesn't matter because the result is the same), Digital is killing VMS."

Conclusions

I don't think it really matters how long this list is.
It wouldn't matter if I gave you 5 examples or 50 examples of the ways that Digital is killing VMS. What matters is that the message is coming through loud and clear in the marketplace. The marketplace can read the writing on the wall, but Digital management apparently cannot. There is no escape from the conclusion that Digital is killing VMS. What remains is only the singular question, "Why is Digital killing VMS?" The only answers I can come up with are either a) because it doesn't know any better, or b) because it wants to. If the answer is a), it is because VMS product management has been operating with its mind out of gear, unaware that its operating system, for the sake of its own survival, must continue to run on new system platforms, including laptops and palmtops, and must continue to support the enabling technology products that applications do and will require today and in the years to come. If anyone at Digital thinks there is a remote possibility that VMS can survive without doing these things, someone needs to gently wake them up and let them know we're not in Kansas anymore. The days when there was a place in the world for a "data center operating system" that could remain isolated in its ivory tower and never have to interact with users... those days are long gone. If Digital thinks that it is "positioning" VMS as a server for the data center while it "positions" Windows NT as a client for the desktop, I'm afraid someone needs to break the news to them that they are "positioning" VMS for the scrap heap and "positioning" themselves for extinction. Not a "position" to be envied. Not a destination to be desired. Not a road down which many customers (or third-party vendors) will ultimately choose to travel.
If the answer is b), it is because Digital or VMS product management has consciously decided that VMS "doesn't belong" on the desktop any more, that it doesn't belong on the laptop, that it isn't going to make any effort to keep up with the times, with advances in technology in the rest of the industry. If the answer is b), I am even more sorry and embarrassed by Digital and its corporate irresponsibility to its customer base than I ever imagined I would be. It's one thing (a thing that can be forgiven) to be asleep at the wheel. But it's an entirely different thing to consciously and purposely drive an operating system (or a company) off a cliff.

Comments by Phil Auberg

I welcome the chance to speak to you this afternoon, especially those of you whose paycheck, like mine, depends on the future of Digital and the future of OpenVMS. What I'd like to do is give you a view of what the software platform strategy at Digital is designed to do over the next several years, and how I hope this will alleviate any concerns you might have about what we are or are not going to do with OpenVMS. My most recent assignment at Digital has been to put together a concerted strategy for the different software platforms and to try to find a way to articulate how those things work together, to pull together the hundreds, perhaps thousands of different generic efforts we've got inside of Digital, and to discuss what products show up on what platforms and when and how they're going to be working together in the future. So I'm going to talk about two things here: the software platform strategy, and OpenVMS itself, what kind of technologies are going into that platform, and where we think it's headed. So I'll start by positioning this, and telling you a little bit about where we think the industry is now and how it got there.
You can think about this as being either the collaboration or the collision of two different types of computing:

o one is the world of traditional systems that grew out of servers and mainframes and timesharing and traditional interactive computing, and

o the other is the world of PCs and PC LANs and that sort of thing.

The two worlds are different in that one is coming from the server side of the game, and the other is coming from the PC or the client side of the game. In the past, these two worlds have been very separate. Also, in these two different worlds, the standards that grew up came from different directions. The standards in server computing and traditional systems were arrived at by vendor consensus or by some sort of organization like X/Open or OSF. The people who were using the desktop machines based on DOS and a variety of other things couldn't care less. The standards that have grown up in that particular marketplace are based on volume and on user consensus. So where value was added, and what we were able to charge for, were very different also. The products on traditional systems were dependent upon vendor-added value: we increased performance every year, we tried to make applications portable, we tried to make applications interoperable between systems, and the boxes for the most part were servers and workstations. So on OpenVMS, we've got servers that are VAXes and Alpha machines, and we've got workstations that are VAXes and Alpha machines. The PCs and the PC LANs were very system-vendor independent. They were driven by what used to be small software companies, and are now enormous software companies that are making incredible amounts of profit by designing software that will run across a wide variety of platforms.
That market is driven by shrink-wrapped applications that will run everywhere you can possibly put them, and the number of those applications is counted in the millions, so the volume itself is incredibly different. And that market right now is changing drastically, especially with the advent of Windows NT. That market is changing in that the operating system that used to drive only single-user desktop systems and portable computers is now going to be driving mid-range and high-end servers: very, very large systems doing multi-user interactive computing, as the servers running OpenVMS and a variety of other things have been doing. So the two worlds now are colliding, and we, as a systems vendor, have got to try to figure out a way to make some sort of sense of that and provide a coherent product offering which you can use profitably. The business strategy that Digital needs to pursue to continue to make money and continue to grow in the future is to find a way to bring those two worlds together, and offer a product set that is focused not on OpenVMS and not on UNIX and not on NT or DOS or anything else, but focused primarily on how we build the entire thing together into one big computing environment. It doesn't mean that we abandon any technologies. What it does mean is that we put our strategic focus, and in some cases the bulk of our dollars in terms of engineering investment, into a different place. The product strategy this implies is the following: a product strategy based on open systems - which is either the most noble or most foolish concept in computing, depending on who you talk to - and client/server computing based on the open systems concept, and all of it based on and running on Alpha-based systems for the future, from laptop and portable-sized systems up to very large servers. All those software things running on all the hardware. The open systems view that we have today is based in three key technology areas.
The first is providing the standards that you've got to have on the servers. The problem here is that the standards the servers can run are not the standards the clients care about. Secondly, we've got to build enabling software that runs across whatever platforms we're going to be using in the future, and then take that enabling software and things that are readily available to every vendor in the industry and turn it into some sort of cohesive product offering through a thing we call frameworks: how do we do messaging, how do we do production systems computing, and a variety of other things. The client/server piece is based upon using that open systems technology (the standards-based stuff) to build enabling software and framework technology that lets clients talk to servers talk to clients talk to servers, and so forth. We illustrate this with a very structured diagram, the way that engineers, perhaps, would like the client/server model to work, where my client talks to your server and your client talks to the next person's server. Probably a more realistic diagram would be to put a thousand systems on a piece of paper and scatter them around as best we can, because really every system is now a server and every system is now a client. Desktop systems are now running at performance levels that we only dreamed about in our VAXes a couple of years ago. So everything can pretty much perform all of the functions. That technology, or that need, gets translated into the building of hardware and software platforms. This is where Digital's strategy is focused. This is what we think people will care about over the next several years. Because if we do our job correctly with the framework software, the enabling technologies, with implementing, for example, DCE across all of our platforms, then what's going on down on the boxes at the day-to-day level will be of far less concern. Some of those technologies are available now.
We have LinkWorks that will be providing the links between the server technologies and the client technologies in the future. SQL and eXcursion for X Windows and a variety of other things are all available now. LinkWorks is not available on OpenVMS now, but it will be next year. Tomorrow some new technology shows up: OLE and ObjectBroker, the unification of Microsoft and Digital technologies to try to provide consistent object-oriented programming and use across the Digital computing environment. We are also bringing, of course, other technologies from other vendors, as we have done with Microsoft, into the Digital product offering, to provide the level of unification necessary to make the servers and the clients work together as seamlessly as we can in the future. That gets us to, "What are we going to do with the platforms?" I think of OpenVMS on VAX and Alpha as a platform. OSF/1 on Alpha as a platform. And Windows NT on Alpha and Intel and maybe a lot of other things in the future (who knows?) as a platform. So, we'll address the direction we are going to take with those three key things. First, the strategy is that we not get swamped by this change in computing technology from timesharing servers over to client/server. So we've got to find a way as a vendor to combine those and have a coherent and a successful product offering. We would be absolute idiots, not only from a technology point of view, but from a financial point of view, to even think about trying to kill off or de-fund OpenVMS. Because we are getting strong user demand for all of these operating system environments, and most often when we hear about one of our third-party software partners who is considering dropping their OpenVMS product, the scream from their user base drives them back into the fold again. We have a very heavy engineering investment in both OSF/1 and OpenVMS. We've been doing that for about 12 years now, ever since I entered Digital.
We did it with ULTRIX and VMS, and we do it now with OSF/1 and OpenVMS. We've been carrying on that investment for years, and it's about equal between the two groups. We have a "partnership investment" in Windows NT. That's my marketing term for "cheaper", because we don't have to develop the operating system. All we have to do is port the operating system from one piece of hardware to another piece of hardware. So we're effectively getting that one almost for free. And we spend our time on the marketing of Windows NT on Alpha. We're then providing integration through that enabling software and the frameworks that I was talking about, through common engineering. One of the things we're trying to do at Spit Brook is to find a way to do common software across the different platforms. So you'll see clusters, for example, across platforms in the future. And then, since prices speak louder than words, we price OpenVMS and OSF/1 and Windows NT and everything else competitively on Alpha AXP platforms. When we illustrate a price comparison between OpenVMS on an Alpha workstation and an HP workstation running HP-UX, OpenVMS is today priced at or below the HP pricing points for most of our systems. That's not the sort of thing you do when you're trying to kill off a product, or you're trying to retire it. The individual platforms: OSF/1 will probably be used in the short term, at least, most commonly in workstations and servers and workstation farms, for high-performance technical computing, and also as commercial servers, of course. The strategy here is to provide an industry-leading, modern UNIX offering. OSF/1 from Digital is OSF/1, System V, Berkeley and a variety of other things all wrapped into one package. There are about 2,000 applications that are ported over to OSF/1 right now on Alpha. The first versions of SMP and clusters on Alpha come out in the spring.
The key technology work and the key positioning for OpenVMS for the next several years is to try to place it as a business-critical (mission-critical) server. It doesn't mean we stop doing desktop systems, it doesn't mean that we stop doing anything. But that's where the brain cycles go for the next couple of years: into making that the best server for highly available server environments that we possibly can. Strategy #1 for OpenVMS is to take the people who have been running on VAX for years, and to make it as painless as we possibly can for them to get from VAX to Alpha. And that's happening a lot more rapidly than we thought, and I think Version 6.1 is going to help that a bit more. The second strategy for OpenVMS is, tactically, in the next couple of years, to win new business for OpenVMS through the downsizing of either the old proprietary environments that came from Unisys or Data General and the like, or off of IBM mainframes, because that's money in our pockets, and it can provide funds for development in the years to come. That development is focused on advanced clusters (which means not only clusters for OpenVMS, but spreading clusters across the whole Digital computing environment), 64 bits, and a new file system. Because if you're going to be a server for enormous client/server networks, you'd better have a file system that's up to the performance levels necessary to serve those clients and can provide transparent access to VAX and DOS and NT and UNIX and everything else, and that's what our new file system is designed to be. One point I should not overlook is that we are now shipping run-time DCE on OpenVMS, and we will be implementing the entire DCE across the entire OpenVMS line. It's impossible to position Windows NT, because Bill Gates says it will take over the world. It has at least taken over Seattle, I know that much. The market now is high-performance personal systems and servers. That's not the positioning for Windows NT that came out a couple of years ago.
Our goal here is to be the premier supplier of Windows NT solutions. Because we're not engineering the operating system, we're putting it on the Alpha platform. But we can, because we're a large systems vendor, negotiate the right deals with application developers and middleware developers to get the right things on the platform. We're shipping today Windows NT and the Advanced Server on our Alpha boxes. And we're also shipping Windows NT on Intel PCs. And we are moving Windows NT to PCI-based systems in the future. That pretty much wraps up the view of the three software platforms. We have to continue, and want to continue, to do OpenVMS and OSF/1 and Windows NT, because it's financially prudent, and if we don't have those key server platforms in place we can't implement any part of this client strategy. We've got to have control over the server platforms which, for the next countable number of years, are certainly OpenVMS and OSF/1. All the middleware that we're developing, now and in the future, is going onto OpenVMS. The primary role OpenVMS is going to be playing is that of a server, not a desktop system. It doesn't mean that we de-commit from Alpha workstations, it doesn't mean that we no longer support Motif and DECwindows on those workstations. But the sheer numbers about what is happening in the marketplace say without a doubt that it's a client/server world, and the dominant desktop machine will be an Alpha or an Intel box that runs DOS or Windows or Windows NT. That's the reality that we have to plan for as a vendor, so that's where we have to put the majority of the engineering cycles and the marketing dollars. Thankfully, VMS is VMS everywhere, so we pretty much get our workstations for free. The focus that we will be placing in our development efforts for OpenVMS in the future will go into building the most dependable, most highly available servers you can put into any computing environment.
___________

Q&A Session

The following questions were posed by members of the audience to representatives from Digital. Answers were provided by Phil Auberg; Mark Gorham, OpenVMS Product Manager; Craig Jones, Manager, OpenVMS AXP; and Brian Breton, OpenVMS Product Management.

Q: What does it mean to be in a cluster if you can't write applications that will operate in all the operating system spaces?

A: You can write applications that are POSIX-compliant and will run in all the environments. Hopefully in the future you will be able to run at a more sophisticated level. That's the whole idea behind standards. Our plan here is to follow those real standards that we believe are going to be here more than six months into the future, so you can write applications that run across the platforms.

Q: Are there any plans to implement OSF/1 or OpenVMS as a virtual machine on Windows NT? This would provide a very natural way to migrate some applications.

A: A lot of things go on in advanced development at Digital, but I'm not aware of any plans to build a virtual machine for OSF/1 or OpenVMS on Windows NT. Building that environment would be a statement that the client/server technology won't work, because you're trying to get back to the universal operating system. It's a great theory, but I don't think any of us are smart enough to make it happen.

Q: What's a framework? Is this a new buzz-word for an old concept, or does it mean something new?

A: It depends on whether you're Engineering or Marketing. A framework is in engineering what an architecture used to be: a well-defined way of making different products and technologies work together in a pre-defined fashion. What that translates to in marketing is a set of products that serve a specific need, like inter-group messaging (mail and all the rest of it), products that fit that architecture and that will ship cross-platform so that everything works together.
Q: I went to a talk on the Alpha PC at DECUS (San Francisco), given by a Digital representative. The speaker was talking about how it ran three different operating systems and how that was wonderful, and then she listed the advantages of the three different operating systems and why you would want to run each one of them. And the only advantage she listed for VMS was that it made it easy for you to port to the other two.

A: We have since given that person a frontal lobotomy. As we migrate from a single-strategy company to a multiple-technology company, it's difficult to bring 90,000 people along at once.

Q: Will the Alpha laptop have OpenVMS on it, and if so, will it be there from day 1? That would certainly be an indication of DEC's commitment to VMS.

A: I can't speak specifically to the Alpha laptop, because it's an advanced development project. We certainly put OpenVMS on the Alpha PC, and we would like to drive OpenVMS down to every Alpha system that we can possibly put it on. Whether or not it goes on the laptop remains to be seen. A lot of it will come down to whether or not the company believes that it will sell. ... The question is, when it comes time to make trade-offs, when it's time to spend the dollars, where do we invest? I don't think the Alpha laptop or any other sort of laptop machine would be my number one investment priority for OpenVMS.

Q: The salability of it really goes to the dollars. It's the old argument that if you cut your price in half you're going to sell 5 times as many, some people say 8 times as many, whatever. If you fix the price right, you will sell a phenomenal number of the things. People respect and love VMS. People who don't use it may respect it, may not love it, but if you price it right, make it affordable (and don't do what you've done before with respect to pricing), people will buy it and use it because it damn well works and it's rock solid, unlike so many other things.
A: I don't know if many people know this, but the price of VMS on the Alpha PC is now $595. So we're getting there.

Q: But it's more than just selling that box. It's keeping faith in VMS, faith and credibility in Digital. Digital has to fight for its own credibility in a marketplace that is largely unmoved by its presence.

A: Well, one of the edicts for any operating system at Digital is, "Thou shalt support new hardware." I can guarantee you that at present no decision has been made. It's really a business decision as to whether we should spend the resources on this, whether there is a bang for the buck.

Additional Comments by Chris Summerfield

Phil Auberg, in his presentation, said that "every system is now a server and every system is now a client." It is now fairly broadly recognized, I think, that the distinctions between "client" and "server" are mostly semantic. They change from moment to moment, based on the needs of applications. In the real world of distributed applications, most systems perform server functions for some applications, client functions for others, and sometimes both at the same time for other applications. Yet Digital's justification for why it's "O.K." for VMS to ignore multimedia, for example, is apparently based on the argument that VMS is being "positioned" as a server system, not as a client system. This is the same argument Digital will present, when pressed to the wall, for why they aren't much interested in putting VMS on a laptop. Because VMS is being "positioned" as a server operating system, rather than as a client operating system, it isn't appropriate (any longer) to implement under OpenVMS those things that are appropriate for the desktop or for human user interaction. Because Digital has conceded this marketplace to Microsoft. In my view, Digital has effectively said:

o We give up, we concede the race for the desktop to Microsoft.
Our contribution to the technologies that are used on the desktop (and laptop and palmtop) will henceforth be limited to whatever we can do to "enhance" (ride along on the coat-tails of) Microsoft.

o Those of you (our customers) who have invested in VMS in the belief that we would continue to support new desktop technologies in this environment as the market evolves and moves to laptops and portables... well... we're sorry, but we can't do that anymore. We're sorry, but if you placed your trust in us, it was misplaced.

o We aren't "positioning" OSF/1 as either a "server operating system" or a "client operating system". And Microsoft isn't positioning Windows NT as either a "server operating system" or a "client operating system". No one else, of course, is positioning their operating system as either one to the exclusion of the other. Because the distinction is really semantic, and we realize that an operating system has to be both to be viable. But we can't think of any other excuse or any other marketing "spin" to use here... as we begin the long and painful process of putting VMS out to pasture.

And who knows, if it turns out that there really is a place in the world for a "database server" operating system, then we will have the world's best. If not, then VMS will be history... but we'll still have the world's best UNIX, so perhaps we will still have your business... because at that point in time, after all, what better choice will you have?

Additional Comments by Brian Breton

At the demand of its customers, Digital is moving to a product strategy known as Open Client/Server computing. This is a change in direction, and it is important to understand this change in order to address the issue of how Digital will use the different software platforms in its portfolio. These platforms are OpenVMS, DEC OSF/1, and Windows NT.
Open client/server computing uses standards-based technology to enable our customers to build cost-effective information systems using a wide variety of technology from Digital and other vendors. Standards based on vendor agreement or broad user acceptance (de facto) will be used. The software platforms are TACTICS used to implement client/server computing. They are not the strategy itself. The "middleware" built or marketed by Digital will enable and define the open client/server environment. This middleware will run, to a great extent, on all of the platforms Digital supports. However, some middleware will be split into client and server components, so it is difficult to think of all middleware running on every system. The software platforms: we are investing heavily in OpenVMS and DEC OSF/1 because we build and market both systems. We have a "partnership" investment in Windows NT - a significant investment in marketing and a small engineering investment, driven by the fact that Digital does not develop the NT operating system. What about OpenVMS specifically? We are putting our new middleware on OpenVMS. We are pricing OpenVMS systems aggressively to remain competitive in the marketplace. We are shifting our engineering investments in OpenVMS toward those technologies that will make it an even better server for a wide variety of desktop and portable clients. We are adding 64-bit capabilities and a new file system - for multimedia and the data that will be created by the "information highways" of the future. We have supported the full line of Alpha systems to date - maybe there will be a laptop in the future. We are building a full implementation of the DCE.

Summary

The Digital product strategy is Open Client/Server computing. The key software platforms enabling this strategy are OpenVMS, DEC OSF/1, and Windows NT.
Digital is optimizing OpenVMS as a server to a wide variety of clients, with a focus on business-critical computing, while continuing its support for the full VAX and Alpha AXP product lines - workstations through high-end servers. The software platforms are integrated through standards and standards-based middleware. OpenVMS is a long-term, strategic platform for Digital - we are helping our VAX customers move to Alpha AXP, pricing competitively, and developing new technology for the demands of 21st-century computing.

_____________________

Chris Summerfield is a product manager for Systemetrics, Inc., a third-party software house that for the past ten years has specialized in providing system management software for Digital systems. He has been actively involved in software design, product development and project management since the days of RSX-11M and RSTS/E. Chris is currently directing software development projects for OpenVMS, Windows NT, OSF/1 and Solaris environments.

Phil Auberg is a marketing manager and technology evangelist in the Systems Marketing Group at Digital. During his 12 years at Digital, Phil has worked in product development and promotional efforts for RT-11, the PROfessional series, VAXstations, DECstations, ULTRIX, DECwindows, OSF/1, and OpenVMS. Phil is currently working on marketing and product development issues related to Digital's software platform strategy.

Brian Breton is a Senior Software Product Manager with responsibility for the Digital OpenVMS AXP operating system. Brian has also been the Digital Engineering Counterpart to the DECUS VMS Systems SIG since 1990. Brian is an active participant in DECUS notes conferences on DCS and DECUServe.
OpenVMS AXP Leaps to Version 6.1

OpenVMS AXP is now Functionally Equivalent to OpenVMS VAX

Tim Ellison, Jody Little, Mary Jane Vazquez

Beginning with the initial release of OpenVMS AXP Version 1.0 in November of 1992, Digital's OpenVMS Engineering group has been working diligently to provide the same functionality in OpenVMS AXP that has been available in the OpenVMS VAX operating system. With the upcoming releases of OpenVMS VAX (April 1994) and OpenVMS AXP (May 1994), Digital is pleased to announce that both operating systems will offer functionally equivalent capabilities.

Version Numbering Revision: OpenVMS Engineering wants to accurately reflect the fact that the next releases of OpenVMS VAX and OpenVMS AXP will provide equivalent software capabilities. Therefore, the version numbering scheme for the OpenVMS AXP operating system will be modified to match the OpenVMS VAX version number. Both OpenVMS VAX and OpenVMS AXP will use the "Version 6.1" numbering scheme in order to signify the functional equivalence of the essential operating system components.

Operating System Environment Overview: This article provides a high-level overview of functional equivalence for the OpenVMS AXP and OpenVMS VAX platforms. Additional literature explaining the new features of these releases will be available when both OpenVMS AXP V6.1 and OpenVMS VAX V6.1 are released in the near future. In reaching functional equivalence for both OpenVMS platforms, it is very important to note that the same user environment, reliability, availability, desktop-to-datacenter scalability, and standards compliance will be available on both the OpenVMS AXP and OpenVMS VAX operating systems. OpenVMS AXP V6.1 and OpenVMS VAX V6.1 will allow current VAX system owners to build on their investment in VAX systems, while providing the opportunity to take advantage of the raw computing power of Alpha AXP systems.
It is the goal of OpenVMS Engineering to maintain functional equivalence for future versions of the OpenVMS VAX and AXP operating systems. One of the main areas of emphasis will be supporting mixed-architecture clusters (VMSclusters consisting of OpenVMS VAX and AXP systems). As we take advantage of the architectural richness of Alpha AXP systems (such as 64-bit virtual addressing), we will continue to protect our users' investments by maintaining co-existence in the mixed-architecture cluster environment. Both operating systems will offer functionally equivalent capabilities for the main areas of the operating system. In order to compare these capabilities, the operating systems have been categorized into six areas:

o User Environment
o Security Services
o System Management Environment
o Programmer Environment
o System Integrated Products
o Standards Compliance

User Environment

Both OpenVMS AXP and OpenVMS VAX systems provide the capabilities that OpenVMS users have come to rely on. OpenVMS users are provided with the same user environment, editing capabilities (EDT, EVE, TPU), windowing environment (DECwindows Motif), and the same familiar utilities such as DCL, Mail, and Phone. The "look and feel" of the OpenVMS AXP environment is exactly the same as the environment that OpenVMS VAX customers depend upon today for their business needs. OpenVMS users who are moving from VAX systems to Alpha systems, or users who will be working in a VMScluster (a combination of Alpha and VAX systems) environment, will be able to immediately use the OpenVMS AXP environment without any retraining.

Security Services

From a user perspective, OpenVMS AXP provides the same security services as OpenVMS VAX. C2 security features introduced in OpenVMS VAX V6.0 are also in both OpenVMS VAX V6.1 and OpenVMS AXP V6.1.
OpenVMS AXP V6.1 has not completed the C2 certification process; however, the security features in OpenVMS AXP V6.1 are designed to meet C2 certification, and Digital's goal is to have OpenVMS AXP participate in the C2 evaluation in the future. System Management Environment Both OpenVMS AXP and OpenVMS VAX systems provide the utilities and tools system managers rely on to keep their Alpha AXP and VAX systems running smoothly. Utilities such as SYSMAN, SDA, Monitor, Show Cluster, Cluster_Config, and Authorize all provide the same capabilities and the same interface for the system manager. Tools such as Backup, Analyze, and Mail Compress are also available on both platforms. Programmer Environment OpenVMS AXP and OpenVMS VAX systems provide a rich programming environment offering a number of programming tools. Both operating systems provide the Record Management, Command Definition, Librarian, and Message Utilities to assist programmers using the OpenVMS environment. Run-time libraries and public symbols are part of the OpenVMS programming environment, as are the OpenVMS Debugger and the OpenVMS Linker. System routines and system services are provided on both operating system platforms to supply programmers with an attractive set of tools. System Integrated Products The OpenVMS AXP and OpenVMS VAX operating systems offer the same set of System Integrated Products (SIPs). These products (VMSclusters, Shadowing, and Journaling) provide the OpenVMS environment with a wealth of capabilities that allow customers to expand the realms of reliability, availability, and scalability. These SIPs offer a significant advantage in today's competitive business environment. OpenVMS VAX V6.1 and OpenVMS AXP V6.1 have VMScluster functional equivalence. All the features that were previously available with VAXcluster software are now supported by VMScluster software (clusters that contain a mix of VAX and Alpha AXP systems or are configured entirely of Alpha AXP systems). 
VMSclusters provide the world's finest clustering solution and give unmatched investment protection for Digital's customers who wish to continue to use their VAX systems but also wish to take advantage of the power and performance of Alpha AXP. Standards Compliance Both the OpenVMS AXP and OpenVMS VAX operating systems provide the standards compliance that is necessary in today's multivendor environment. The DECwindows Motif and POSIX products will be available for both operating systems. Both operating systems are ISO 9660 compliant, will provide Distributed Computing Environment (DCE) capabilities in March of 1994, and will attain XPG4 Base Branding and FIPS 151-2 certification by August of 1994. Both OpenVMS AXP and OpenVMS VAX comply with a number of other ANSI, FIPS, and ISO standards that can be referenced in the SPDs (Software Product Descriptions) for each operating system. OpenVMS provides a seamless transition from the OpenVMS VAX environment into the OpenVMS AXP environment. This allows OpenVMS customers to take advantage of the leading-edge computing power of Alpha systems without costly employee retraining. Availability The OpenVMS VAX V6.1 and OpenVMS AXP V6.1 operating system releases will be available in April and May 1994, respectively. Please contact your Digital representative for ordering information. Similarities and Differences of OpenVMS AXP V6.1 and OpenVMS VAX V6.1: Please note that due to differences between the AXP and VAX architectures, there are some differences in how pieces of the operating system have been implemented for OpenVMS AXP. The documents listed below are being updated to help system managers understand all the similarities as well as the few remaining differences between OpenVMS AXP and OpenVMS VAX. Most of the few remaining differences will be resolved in the releases that follow the Version 6.1 releases. 
Please take the time to review these documents when the updated versions become available with the release of OpenVMS VAX V6.1 and OpenVMS AXP V6.1. Reference Material: "OpenVMS Compatibility Between VAX and AXP" Order Number: AA-PYQ4B-TE "A Comparison of System Management on OpenVMS VAX and OpenVMS AXP" Order Number: AA-PV71A-TE Tim Ellison, Jody Little and Mary Jane Vazquez are a part of the OpenVMS Product Management team at Digital Equipment Corporation. Setting DECnet-VAX Executor Pipeline Quota Clyde Smith There has recently been a lot of confusion in the user community over how to set the DECnet-VAX parameter Pipeline Quota. This parameter affects only DECnet traffic; it has no effect on LAT, LAVC, TCP/IP, Novell, or any other type of traffic on a network. Digital DECnet-VAX Engineering recommends a value of 10,000 for this parameter; 10,000 is also the default value. Pipeline quota represents the maximum number of bytes any DECnet logical link can use. This memory use is NOT charged against the user's BYTLM quota but is charged against the operating system. DECnet converts the maximum number of bytes each logical link can use to a variable window count. The maximum number of windows each logical link can use is determined by (pipeline_quota/exec_buffer_size) or (10000/576), if the default parameters are used, for a maximum window size of 17.36, truncated to 17. The coded maximum number of windows allowed is forty (40); the minimum is one (1). To determine the maximum effective value for pipeline quota you can use (40*exec_buffer_size) or (40*576) = 23,040. If your exec buffer size were set to 1498, the maximum effective value would be (40*1498) = 59,920. Once the maximum window count is determined, DECnet adjusts the current window size dynamically to compensate for line conditions and speeds. 
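The window arithmetic above can be sketched in a few lines of Python. This is an illustrative sketch only: the function names are ours, and the constants are the coded limits and default values quoted in the text, not values read from a running system.

```python
# Sketch of the DECnet-VAX pipeline-quota arithmetic described above.
# Function names are hypothetical; the constants are the coded limits
# and the defaults quoted in the text.

MAX_WINDOWS = 40   # coded maximum number of windows per logical link
MIN_WINDOWS = 1    # coded minimum

def window_count(pipeline_quota, exec_buffer_size):
    """Maximum windows a logical link may use: quota/buffer size,
    truncated, then clamped to the coded limits."""
    return max(MIN_WINDOWS, min(MAX_WINDOWS,
                                pipeline_quota // exec_buffer_size))

def max_effective_quota(exec_buffer_size):
    """Largest pipeline quota that still buys additional windows."""
    return MAX_WINDOWS * exec_buffer_size

def nonpaged_bytes(max_links, pipeline_quota):
    """Worst-case non-paged memory if every link fills its pipe."""
    return max_links * pipeline_quota

print(window_count(10000, 576))    # defaults -> 17
print(max_effective_quota(576))    # -> 23040
print(max_effective_quota(1498))   # -> 59920
print(nonpaged_bytes(100, 23040))  # -> 2304000
```

The last function anticipates the memory discussion at the end of the article: pipeline quota is charged per logical link, so the worst case scales with the maximum number of links.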
This adjustment is made dynamically by DECnet, based on the time it takes a packet to reach the target node and the acknowledgment to return, and on the number of packets that time out without the receipt of an acknowledgment from the target. The higher the error rate and round-trip propagation delay, the lower the number of packets DECnet will allow to be outstanding. As the line conditions degrade or improve, DECnet adjusts the window size. Because errors require the retransmission of data, DECnet attempts to minimize the number of packets that have to be retransmitted in the event of failure. At the same time, logical links over more reliable media are not penalized. In a nearly error-free environment such as FDDI or Ethernet, where the error rate is not expected to be higher than 10^-15 or 10^-7 respectively, a very high pipeline value would ensure the maximum use of the resource(s) with little risk of retransmission. Errors include, but are not limited to, systemic Ethernet or FDDI failures as well as overrunning of low-performance single-buffered controllers. Conversely, a low- to moderate-speed logical link through multiple routers over leased lines, perhaps transcontinental via satellite, typically has a high error rate requiring more link-level retransmission. In this case, it would be more effective to minimize the number of outstanding packets. Bridges, either local or remote, satellite links, and other links with high error rates can delay or discard packets. Based on the above, 10,000 represents the OPTIMUM value for pipeline quota. There may, however, be situations or installations that would benefit by increasing and, to a much lesser extent, decreasing the pipeline quota (since all users are penalized). Following are some hypothetical case studies. **NOTE** these cases are VERY general and are not specific recommendations. 
The specific applications and topology of each network are different; increasing or decreasing pipeline quotas and other DECnet parameters may adversely affect performance on your network. **NOTE** Over 99% of the performance issues reported to the CSC are directly related to systemic Ethernet failures or low-performance single-buffered Ethernet controllers. Case #1 Equipment: 2 eight-port DELNIs, cascaded (no other backbone); 14 4000 Model 90 or Alpha workstations, each with 128MB of memory; 1 4500 with 4 TLX08 tape drives and 128MB of memory. Topology: All fourteen (14) workstations are standalone engineering systems, each connected to a discrete DELNI port. The 4500 is connected to the remaining DELNI port. Application: CAD/CAM, X Windows, large file transfers, cterm (set host), mail, DFS, and backups to the 4500. Other considerations: The error rate is less than 10^-8 on the Ethernet. The backups are performed when no other users are on the systems. There are no routers on this Ethernet. Parameters: Exec buffer size = 1498 Pipeline quota = 60,000 Maximum window = 40 (60,000/1498 = 40.05) Users report network performance as good to very good. Typically seven (7) to ten (10) packets would be outstanding on any logical link during a large file transfer, DFS, or X Windows display. The target node does attempt to empty the pipe as quickly as possible. However, in some cases the outstanding packet count can be seen in the high teens and low twenties when the target node is otherwise occupied by higher-IPL processing. Note that as the number of windows outstanding approaches half of what's allowed, the source node requires the target to respond with immediate timed ACKs to ensure the sender can determine an accurate round-trip delay time. 
Case #2 Equipment: 2 DELNIs cascaded; 12 DEMPRs; 250 286- or 386-based PCs with single-buffered Ethernet controllers; 2 6640 VAX file servers, CI clustered, with tape drives and printers. Topology: The two 6640 file servers are connected to DELNI ports; the remaining DELNI ports are in use by the twelve DEMPRs. Each DEMPR has all eight ports in use, with a varying number of PCs on each ThinWire segment. Application: PATHWORKS disk and file services, LAT connections, X Windows, file transfers, and backups. Other considerations: The error rate is high, greater than 10^-2, about normal for this type of PC environment, but lower than sometimes seen. This is typical with any type of single-buffered Ethernet controller. The single-buffered controllers can EASILY be overrun by the moderately high-performance controllers typically found on the 6640 class machines. Parameters: Exec buffer size = 576 Pipeline quota = 10,000 Maximum window = 17 Users report performance as poor to fair, with brief periods of unsatisfactory and brief periods of good. Hundreds of logical links are seen, with outstanding windows varying from 1 or 2 on some logical links to 7 or 8 on others. The large variance is because of DECnet's tendency to start with a large number and gradually decrease the number of outstanding packets as the link appears to deteriorate. Hence, as each logical link is established it expects a good medium and is adjusted toward what we'd expect with a poor medium. Improvements in performance might be obtained by expecting a poor medium and limiting the number of outstanding packets from the start. This will reduce the overall traffic on the wire by reducing the retransmissions. In addition, the number of packets arriving at any particular PC sequentially would be reduced, thus decreasing the tendency to overrun the low-performance controllers. 
Another approach might be to take advantage of the way PATHWORKS does flow control and reduce the EXEC RECEIVE PIPE (window size) from the default of six (6) to a value of one (1) or two (2). PATHWORKS uses SEGMENT flow control and extends a fixed number of credits to the originator (in this case a VAX). By reducing the value to one, the PC will extend one credit at a time; once the VAX transmits a packet it MUST wait until another credit is extended by the PC, helping to prevent controller overrun and reduce the number of packets retransmitted on error. **NOTE** This would have to be done on ALL low-performance controllers. It's frequently impossible to determine exactly what the controller is capable of handling. All Digital PC controllers are designed with adequate memory for most applications. At the same time, the PC is able to transmit as many buffers as it likes to the VAX; the VAX does not impose a credit limit on the transmitter. Case #3 Equipment: 2 DELNIs cascaded; 1 DELNI standalone (separate LAN segment); 2 6640 servers, CI clustered; 6 4000 Model 90 or Alpha workstations; 50 386 or 486 PCs with DEPCA or other Digital controllers; 50 386 or 486 PCs with single-buffered controllers; 1 DECnis 600 BRouter; 2 DECnis 500 BRouters; 12 DEMPR multiport ThinWire repeaters; 1 DESPR single-port repeater. Topology: The twelve DEMPRs, the DECnis 600, and the two 6640s are plugged into individual cascaded DELNI ports. The DECnis 500s bridge and route to the DECnis 600 via T1 HDLC connections. The DECnis 600 bridges and routes to an FDDI ring comprised of the 6 high-performance workstations. The PCs are equally distributed on the DEMPR ports and on the single-port repeater that's connected to the remote DELNI. Application: PATHWORKS disk and file services, LAT connections, X Windows, backups, CAD/CAM, large file transfers, cterm (set host), mail, DFS, and backups to the 6640 servers. Other considerations: The error rate on the FDDI ring is < 10^-15. 
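The credit scheme described in the PATHWORKS note above can be sketched as follows. The class names and methods are hypothetical illustrations of segment (credit-based) flow control, not PATHWORKS internals: the receiver extends credits, and the sender may transmit only as many packets as it has been credited.

```python
# Sketch of SEGMENT (credit-based) flow control: the PC receiver
# extends credits; the VAX sender may have at most that many packets
# outstanding. Hypothetical classes for illustration only.

class PCReceiver:
    def __init__(self, receive_pipe):
        self.credits = receive_pipe     # e.g. EXEC RECEIVE PIPE = 1

    def extend_credits(self):
        granted, self.credits = self.credits, 0
        return granted                  # extend all available credits

    def packets_processed(self, n):
        self.credits += n               # buffer emptied; credits return

class VAXSender:
    def __init__(self):
        self.credits = 0

    def transmit(self, receiver):
        self.credits += receiver.extend_credits()
        burst = self.credits            # send only what's been credited
        self.credits = 0
        return burst

pc = PCReceiver(receive_pipe=1)         # single-buffered controller
vax = VAXSender()
print(vax.transmit(pc))                 # 1 -- one packet, then wait
print(vax.transmit(pc))                 # 0 -- no new credit yet
pc.packets_processed(1)                 # PC empties its buffer
print(vax.transmit(pc))                 # 1 -- next credit, next packet
```

With a receive pipe of one, the sender is throttled to one packet per credit cycle, which is exactly the behavior the text recommends for easily overrun controllers.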
The error rate on both the standalone DELNI segment and the cascaded DELNI segment is ~10^-5. Parameters: High-performance workstations: exec buffer size = 1498; pipeline quota = 60,000 (max window = 40); line buffer size on FDDI = 4462. VAX 6640 clustered servers: exec buffer size = 576; pipeline quota = 4032 (max window = 7); line buffer size for Ethernet = 1498. Users report network performance as very good to excellent on the workstations, except when using DFS or moving a large file from the 6640 cluster, and fair on the PCs, with frequent periods of poor performance. The number of outstanding packets varies dramatically from 1 to 10 or 11 depending on the source and target nodes. Higher saturation is observed on traffic moving from the workstations to the 6640 cluster; packets in the range of 4 or 5 are seen from the 6640 to the workstations. The PATHWORKS links between the 6640 and the single-buffered controllers are in the 1 or 2 packet range. PATHWORKS links between the 6640 and the high-performance controllers hover around 4 or 5 packets outstanding on each logical link. Performance improvement might be realized simply by increasing the pipeline quota on the 6640 machines to 10,000. This will have the most impact on the 6640-to-workstation file traffic and on traffic between the 6640s and PCs with higher-performance controllers. This will probably be detrimental to the remaining PCs because of the large number of retransmits on new logical links. This might also increase the overall network traffic. To help reduce the problems on the single-buffered controllers, and reduce retransmission traffic, the RECEIVE PIPE on the lower-performance PCs can be set to 1 or 2. The ideal solution would be to replace the low-performance PC controllers. Since that may not always be practical, at least the users of the high-performance controllers won't be penalized. Other points of interest The principal resource consumed by a large pipeline quota is physical memory. 
In reality the pipeline includes the packets on the wire as well as the packets that the DECnet driver (netdriver) has control over. Packets that are in transit (on the wire) are maintained in memory until ACK'd by the receiver. Any logical link has the potential for using the entire value of the pipe at any one time. If you have maximum links set to 100 and pipeline quota set to 23,040, you have the potential for consuming 2,304,000 bytes of non-paged memory. We've observed up to 50-60% throughput improvements on SINGLE logical links when the value is increased from 4,032 (sometimes recommended) to 10,000. At the same time, we've never been able to achieve more than a 2-4% improvement by increasing the pipeline from 10,000 to 23,040. The tests we ran were from 3100, MicroVAX II, and 6000 class machines to DECpc 452st systems with DEPCA controllers, on a clean Ethernet. Clyde Smith is a senior systems engineer for the Network Support Team at Digital's Customer Support Center in Colorado Springs. He has been developing network software and supporting networking products for over 20 years. Accessing Digital Equipment Corporation Information over the Internet Online Information Digital maintains an archive of public domain software and product and service information. The product and service information includes: o information sheets o technical overviews o performance summaries o brochures o software product descriptions (SPDs) o white papers o presentations o Digital's Systems and Options Catalog o back issues of the Digital Technical Journal o Digital's Networks Buyers Guide. The information is accessible using the Internet access tool of choice -- FTP or any World-Wide Web browser. Instructions for FTP access: 1. ftp gatekeeper.dec.com (16.1.0.2) 2. login anonymous 3. cd /pub/Digital/info All information is in either ASCII text or PostScript form. Each document has an abstract file (.abs) that gives a single-paragraph overview of the document. 
Instructions for World-Wide Web access: URL: http://www.dec.com/info.html Electronic Newsletters DECnews for Education and Research is a monthly electronic publication distributed to educational and research communities worldwide. To subscribe, send a mail message to listserv@ubvm.bitnet or listserv@ubvm.cc.buffalo.edu The message should be this command: SUB DECNEWS Firstname Lastname The command is the text of the message; the subject is ignored by LISTSERV. Digital News for UNIX, an electronic newsletter, is distributed via the Internet every three weeks. It contains UNIX-specific product and service information of interest to UNIX customers. For subscription information, send a request to decnews-unix@pa.dec.com with a subject line of "help". Newsgroups & Distribution Lists biz.dec is Digital's newsgroup for posting business information on products, services, significant contracts, organizational announcements, cooperative marketing agreements, alliances, seminars, promotions, etc. Other newsgroups about Digital products include: o comp.unix.osf.osf1 o comp.unix.ultrix o comp.sys.dec o comp.os.vms The "Answers to Frequently Asked Questions" postings from some of these newsgroups are available via FTP from: crl.dec.com: /pub/DEC/dec-faq In addition, Oak Ridge National Laboratory maintains a mailing list for Alpha OSF/1 managers. To subscribe to this list, send mail to majordomo@ornl.gov with a subject line of "subscribe alpha-osf-managers". Internet Accessible Alpha AXP Demo Systems Digital provides Internet access to Alpha AXP systems so users can evaluate the Alpha AXP architecture and test the functionality of the supporting operating systems, compilers, tools, and utilities. To test drive a DEC OSF/1 AXP system, telnet or rlogin to: axposf.pa.dec.com or 16.1.0.14 To evaluate an OpenVMS AXP system, telnet or rlogin to: axpvms.pa.dec.com or 16.1.0.15 For either system, the user name is axpguest. No password is required. 
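A note on the subscription mechanics above: LISTSERV reads its command from the message body, while the majordomo list reads the subject line. The difference can be sketched with Python's standard email library. This is illustrative only -- the addresses are the ones quoted above, and the sketch constructs the messages without actually sending them (sending would require a mail transport).

```python
# Build the two kinds of subscription messages described above:
# LISTSERV takes its command in the message BODY; the majordomo list
# at ornl.gov takes it in the SUBJECT line. Construction only --
# no mail is actually sent by this sketch.

from email.message import EmailMessage

def listserv_subscribe(first, last):
    msg = EmailMessage()
    msg["To"] = "listserv@ubvm.cc.buffalo.edu"
    msg.set_content("SUB DECNEWS %s %s" % (first, last))
    return msg

def majordomo_subscribe():
    msg = EmailMessage()
    msg["To"] = "majordomo@ornl.gov"
    msg["Subject"] = "subscribe alpha-osf-managers"
    msg.set_content("")                  # body is ignored here
    return msg

m = listserv_subscribe("Jane", "Doe")    # "Jane Doe" is a placeholder
print(m["To"], "/", m.get_content().strip())
```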
Internet Mail Address for Information About Digital An electronic mail message sent to info@digital.com gives an automatic response with many of the above pointers, plus phone and FAX numbers to obtain additional product and service information. The 5-Minute Interview Groupware Dennis Roberson Q. The word "Groupware" has many definitions among the various computer companies out in the marketplace. What is Digital's definition of Groupware? A. Companies seeking quantum leaps in efficiency are now recognizing that group productivity is perhaps more important than individual productivity. Daily communications, forms processing, filing, analyzing information, and scheduling events are among the many tasks that require group interaction, and which are typically fraught with redundant, time-consuming activity. Workgroup computing is helping teams of employees work together more effectively. Through the use of group-oriented software, coined "groupware," companies can automate repetitive tasks and group interaction to streamline these daily processes, allowing for more efficient, profitable operations. This new groupware control can also be used to align workflow with business objectives in ways not previously possible. In a January 7, 1994 article, Datamation described three alternative approaches to workgroup synergy: groupware suites, groupware frameworks, and dedicated groupware. Suites are bundles of applications from one vendor that have grown into well-integrated collections and are available in one box. Digital's TeamLinks is an example of a groupware suite. A newer approach is embodied in frameworks, external environments that layer workflow, document management, sharing and other workgroup functions on top of existing applications. Frameworks put all the hardware, operating systems, transports, databases and applications under one umbrella. Digital's LinkWorks is an example of a groupware framework. 
Dedicated groupware consists of workgroup applications that are their own environment. Lotus Notes is an example of a dedicated groupware application. Q. What do you see as the role of wireless communications in Groupware? A. Wireless communications will be basic to Groupware. So far, wireless has been seen by the industry as a way of meeting the needs of globetrotters or road warriors (as they've been called), but far and away the larger group of mobile users are "local"--within the building or on the campus. One recent survey suggests that almost 70% of the absence of professionals from their customary areas of work falls into this category. This group needs to be supported in close-order teaming. Through what we call "information automation"--the intelligent selection and routing of information by agents using natural-language processing--we need to enable learning and action in the very important short term. That's the focus Doug Engelbart was after when he talked about using high-powered electronic aids to exploit "hunches, cut-and-try, intangibles, and the human 'feel for a situation.'" Wireless computing will be a key element of systems to support the multiple, overlapping, task-oriented teams who should be the principal beneficiaries of Groupware. Q. What strengths or innovations does Digital bring to the Groupware market? A. Digital pioneered what we know today as "workgroup computing" and "groupware" before these terms existed. As early as 1981 the company introduced the ALL-IN-1 Integrated Office System, allowing users to share documents, manage document files, and exchange documents using E-mail. At the time, these were revolutionary advances. Ten years later, Digital extended this capability to personal computer users with the introduction of TeamLinks, a suite of groupware applications which helped automate business processes via X.400 mail, conferencing, library services, and group scheduling. 
Today, more than four million users worldwide work more effectively together using ALL-IN-1 and TeamLinks. In October of 1993, Digital launched the next-generation workgroup computing framework called LinkWorks, allowing workgroups to share information and documents regardless of format, location or computing platform. Digital's messaging backbone, MAILbus, connects workgroup products with each other and with resources outside the enterprise. With more than fifteen years of experience in delivering distributed systems, and more than seven million mailboxes worldwide, Digital has the largest installed base of any electronic mail vendor. Today Digital continues to offer its customers workgroup computing applications with a strategy based on the company's own heritage in groupware. This strategy not only protects their current investment in information technology, but provides them with the new technologies they need to run their businesses more effectively. Dennis Roberson, Vice President, Groupware, Digital Equipment Corporation, manages Digital's Software Engineering group responsible for the development, integration and marketing of products and technologies to enhance group productivity. The group is responsible for Software Frameworks (ALL-IN-1, TeamLinks and LinkWorks), Mail/Messaging products (MAILbus, MailWorks, and EDI), Production Document Management products, and a focus on new and emerging technologies. His organization includes groups in Holland, the United Kingdom, Ireland, and the West and East Coasts of the United States. Mr. Roberson currently serves as chairman of the Open Software Foundation (OSF) Board of Directors. "I'll mail it to you." Robert Tinkelman "I'll mail it to you..." doesn't necessarily refer to envelopes delivered by the postal service; more and more it's electronic mail. E-mail. E-mail --- easier and faster than the postal service --- is an important tool used by millions of people every day. 
It can be a short note to a co-worker in the same office, or a business report sent to a correspondent halfway around the globe. Many of you work in environments where you can exchange E-mail only with people in your own organizations. But as systems become more interconnected and networks grow, your E-mail world expands. The trend is clearly away from isolated islands of E-mail connectivity. The islands are being connected and E-mail is flowing between them. You may receive E-mail that has originated with many different mail systems: from VMS systems using VMS Mail, ALL-IN-1 and Digital MailWorks; from PC LANs using cc:Mail and Microsoft Mail; from UNIX systems using MH, mush and Z-mail; and from IBM mainframes using PROFS. These are but a few of the possibilities. In each instance, the sender enters the body of the text and one or more E-mail addresses, and dispatches the message. A short time later (hopefully) the message is being read by the addressees. Simple. Simple? Hardly. It might look simple to the sender and recipient. In fact, it should look simple, but making it work that way is a complicated and difficult task. There are numerous products, protocols and networks. Each has a unique way of viewing the job of sending a message, and getting them to work together in anything approaching a seamless manner is a difficult task. This article discusses some of the issues involved in taking the many different E-mail systems and making them play together. Only a small percentage of us are or will be postmasters and gatekeepers of the electronic mail world. However, all of us are E-mail users. As users, you should be shielded from the underlying complexities. But the world isn't perfect. It's important to have enough of the behind-the-scenes picture to understand what is happening when things don't go perfectly, when the translation between E-mail systems is not totally transparent. 
Basic Ways E-mail Flows Between Homogeneous Systems One of the simplest examples is VMS Mail between VMS systems. If you use VMS Mail and specify a destination address using double colons (such as MATH::JONES), then VMS Mail delivers the message to a user on a remote system (in this case, user JONES on DECnet node MATH). VMS Mail utilizes DECnet's Mail-11 protocol. As soon as you type the destination address, Mail establishes a DECnet link from your process to the Mail DECnet object on the destination system. VMS Mail's system of immediate delivery provides an immediate confirmation of message delivery. However, there is a cost: the destination system must be up and running, with a network connection available, when you send the mail. ALL-IN-1 and MailWorks utilize DEC's Message Router to provide a store-and-forward mail delivery service when transferring mail between systems. Destination addresses similar to To: SMITH@A1@CHEM specify the route the message travels. Eventually, the message is delivered or you receive a notice that delivery has failed. The store-and-forward delivery mechanism provides robustness but gives up immediate feedback on the success or failure of delivery. Cross-system E-mail in the UNIX world started with UNIX-to-UNIX copy (uucp), which quickly spread and is the method by which files (such as E-mail messages) can be copied from one machine to another over standard dialup modems. Addressing occurs via ``bang paths'', containing the names of all the systems which need to be traversed, listed in order and separated by exclamation points (``bangs''). A typical uucp address might be uunet!ccavax!bob. As suggested by the addressing format, uucp delivery is a store-and-forward process, with the corresponding pros and cons. Currently, the most common method of E-mail transfer on UNIX systems (and the standard method on the Internet) utilizes the TCP/IP protocol or, more precisely, the Simple Mail Transport Protocol (smtp) over TCP/IP. 
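One way to see how different these addressing conventions are is a classifier like the following Python sketch. The heuristics are ours and deliberately naive -- real mailers apply far more elaborate parsing rules -- but they capture the surface syntax of the address forms discussed above.

```python
# Naive classifier for the addressing styles discussed above.
# Illustrative only: real mail software parses addresses with far
# more care than these one-line heuristics.

def address_style(addr):
    if "::" in addr:
        return "Mail-11/DECnet"         # e.g. MATH::JONES
    if "!" in addr:
        return "uucp bang path"         # e.g. uunet!ccavax!bob
    if addr.count("@") > 1:
        return "Message Router route"   # e.g. SMITH@A1@CHEM
    if "@" in addr:
        return "Internet (smtp)"        # e.g. bob@camb.com
    return "local mailbox"

for a in ("MATH::JONES", "uunet!ccavax!bob",
          "SMITH@A1@CHEM", "bob@camb.com"):
    print(a, "->", address_style(a))
```

Note how the bang path and the Message Router address both encode a route, while the Internet form names only the destination -- a distinction the article returns to below.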
Internet mail addresses are usually specified in a form like bob@camb.com. In this example, camb.com is the name of a system, and bob is a mailbox where the message can be delivered. Typically, the mailbox is a logon user name. Using smtp, a sending system transfers a mail message by making a TCP connection (analogous to a DECnet link) directly to the destination system. There are no intermediate store-and-forward nodes involved. In most implementations, when a user sends a message, his or her mail agent queues the message for an smtp agent to send at a later time. This provides the user with the benefits of a local store-and-forward facility. TCP/IP is available under most computer operating systems, and most TCP/IP implementations provide smtp agents (to send messages) and daemons (to receive them). There are a number of different mail systems in use on PC and Macintosh networks. The structure of these often involves a file server to hold messages sent by one user but not yet read by the intended recipient. The mail user agents read and write files on the mail server using the PC LAN's file access protocol. There are many mail systems and protocols other than those mentioned so far. There are a number of mail systems, such as PROFS, in use on mainframes. There are public E-mail providers such as MCI Mail, AT&T Mail, Compuserve and Prodigy. E-mail Between Heterogeneous Systems What happens if you are using one mail system, and the person with whom you want to communicate is using another? Can you send mail to the other system? You can if a number of things have happened. There must be a communications path between the computers, and software running which understands both mail systems. Finally, you need to know how to address your mail. Mail gateways are one way to accomplish this task. Let's take a look at some examples of mail gateways. Most ALL-IN-1 installations include a facility to allow ALL-IN-1 users to exchange mail with VMS Mail users. 
The gateway function is usually performed by MRGATE (formally, the VAX Message Router VMS Mail Gateway). MRGATE uses the DECnet Mail-11 protocol to communicate with VMS Mail DECnet objects on one side, and communicates with the Message Router on the other. It translates mail addresses between the two formats. A VMS Mail user can send a message to an ALL-IN-1 user using addressing like To: MRGATE::"A1::SMITH" The ALL-IN-1 user can send to the VMS Mail user with To: JONES@MRGATE Since VMS Mail and Message Router mail are different, the gatewaying process is imperfect. For example, ALL-IN-1 and the Message Router support the concept of a multi-part message; VMS Mail does not. A multipart ALL-IN-1 message will have all of its parts concatenated when delivered to a VMS Mail user. When MRGATE delivers a VMS Mail message to the Message Router, it always contains a single message part. There are a number of options for gatewaying mail between VMS Mail users and smtp mail users. First you need a system that understands both TCP/IP and DECnet. Independent of the platform used, the situation is more complicated than with MRGATE. There is generally a higher demand for addressing transparency. In the MRGATE case, you could look at the address being presented to VMS Mail or ALL-IN-1 and tell if the addressee was on the other side of the MRGATE gateway. In the smtp/TCP/IP world, an attempt is made to hide this level of detail, or at least not reflect it in E-mail addresses. The philosophy is that an address should reflect the recipient's location, not a particular route to the location. Consider the case of mail being sent from a system supporting both smtp/TCP/IP and Mail-11/DECnet as mail transports. Users should be able to use an address like bob@red.camb.com without specifying the transport to be used. The mail software on this node will be controlled by a set of configuration files specifying what systems are reachable via which transport protocols. 
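The two MRGATE address forms quoted above can be sketched like this. The helper functions are hypothetical, and only the two simple directions are shown; the real gateway's quoting and routing rules are considerably more involved.

```python
# Sketch of the MRGATE address forms quoted in the text. Function
# names are hypothetical; only the two simple directions are shown.

def vmsmail_address_for_a1_user(a1_user):
    """How a VMS Mail user addresses an ALL-IN-1 user via MRGATE."""
    return 'MRGATE::"A1::%s"' % a1_user

def a1_address_for_vmsmail_user(vms_user):
    """How an ALL-IN-1 user addresses a VMS Mail user via MRGATE."""
    return "%s@MRGATE" % vms_user

print(vmsmail_address_for_a1_user("SMITH"))   # MRGATE::"A1::SMITH"
print(a1_address_for_vmsmail_user("JONES"))   # JONES@MRGATE
```

The asymmetry is the point: each side wraps the foreign address in its own native syntax, which is why the gateway must translate in both directions.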
In real life, nodes that gateway between different mail networks need to deal with much more than differences in addressing formats. The gateway programs and their configuration files can become very large and complex. There are a number of different E-mail gateway programs in common use. On UNIX systems, sendmail is the program most often used for this function. Under VMS, for simple cases the TCP/IP package often provides sufficient functionality; for complicated cases a specialized E-mail routing package is used. The two most common are PMDF, a commercial product of Innosoft International Inc., and MX, a public domain package available from the DECUS library and other sources.

If your organization needs to gateway mail among more than a small set of different E-mail systems, there is a distinct danger of falling into the n-squared trap --- with a special-purpose gateway for each pair of systems. The standard way around this problem is to pick one mail system or mail format to use as a mail hub. This is the idea behind Digital's Mailbus product lines. Here the message routers form the backbone of the mail network. Certain mail systems, such as ALL-IN-1, interface with the message routers in their native mode. Others require gateways. There are gateways between the message router and a number of other mail products and protocols --- including VMS Mail, PROFS, smtp mail, X.400, MCI Mail, Telex, and others. In this environment, a message from an MCI Mail user to a VMS Mail user travels through the message router backbone using two gateways: one from MCI Mail to the message router, and one from the message router to VMS Mail.

Digital designed the message router to conform to a set of standards published by the United States National Bureau of Standards (NBS). These standards, with significant changes, have evolved into X.400 as adopted by the International Standards Organization.
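The n-squared trap mentioned above is easy to quantify. A short sketch comparing pairwise gateways with a hub backbone:

```python
# With a special-purpose gateway for each pair of mail systems, the
# count grows as n*(n-1)/2; with a hub backbone (the Mailbus idea),
# each system needs only its one gateway into the hub.
def pairwise_gateways(n):
    return n * (n - 1) // 2

def hub_gateways(n):
    return n

for n in (4, 8, 12):
    print(n, pairwise_gateways(n), hub_gateways(n))
```

At a dozen mail systems, the pairwise approach already needs 66 gateways against the hub's 12 --- which is why backbone architectures dominate.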
Digital's newer generation of messaging products are based on X.400 in place of the earlier NBS standards. An X.400 mail backbone is similar in many ways to a Message Router backbone. Some mail user agents will talk to the X.400 backbone components --- the Message Transfer Agents --- in native mode. For most of the traditional mail systems, you will need gateways. For example, you would use an smtp-to-X.400 gateway to effect interconnectivity with existing UNIX smtp mail systems.

Instead of using either message routers or X.400 as a mail backbone, another common choice is smtp. This is the obvious thing to do if most of the traffic is smtp and you are adding a few additional mailers. There are commercially available smtp gateways for most of the PC-based mail systems. Some of the more powerful mail gateway products, such as PMDF in the VMS world and SoftSwitch in the IBM mainframe world, can themselves be used as mail hubs. Utilizing PMDF in this manner, to add a group of cc:Mail users to the E-mail community you would use a PMDF-to-cc:Mail ``channel'' as opposed to a cc:Mail-to-smtp gateway.

Translation of Addresses

There are a few easy-to-state requirements for the translation of addresses between addressing models on different sides of a gateway. The first is universality: it should be possible to reach any address on ``the other side'' of the gateway. The other is reply-ability: if a message arrives at the gateway with a working From: address, then the gatewayed message should have an equivalent, working From: address.

People don't send mail in order to transmit addresses. People send messages to transmit the contents, the bodies of the messages. The addresses are just a tool. When gateways need to process messages containing anything more complicated than ``plain 7-bit ASCII text,'' things get more complicated. It is more difficult to specify the ``desired goals'' than it was with addressing.
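One concrete example of a multi-part message body is an Internet MIME message. With modern Python's standard email package (far newer than anything discussed in this article) a two-part message can be sketched as follows; the addresses and attachment contents are invented:

```python
# Build a two-part MIME message: one plain-text part plus one "foreign"
# document part -- the kind of structure a gateway must somehow map
# onto the other side's message format.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

msg = MIMEMultipart()
msg["From"] = "bob@camb.com"            # illustrative addresses
msg["To"] = "jones@red.camb.com"
msg["Subject"] = "two-part message"
msg.attach(MIMEText("See the attached document.", "plain"))
msg.attach(MIMEApplication(b"binary document bytes",
                           Name="report.doc"))

# A gateway into a system with no multi-part concept must concatenate,
# drop, or re-encode these parts, as MRGATE does for VMS Mail.
print(msg.get_content_type())     # the container type
print(len(msg.get_payload()))     # number of parts
```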
If you are planning on implementing a gateway, you will need to give careful thought to the intended use. Who are the users? What sorts of documents will they want to transport across the gateway? All gateways transport 7-bit ASCII ``plain text.'' But what about 8-bit characters? What will they do when different 8-bit character set encodings are used on the two sides of the gateway? What do you want to happen if a user sends a document edited in WordPerfect and a spreadsheet prepared using Excel? Does it depend on whether the recipient runs the same software? Once the addressing problems are solved, these questions become the important ones.

There are a number of different standards for constructing messages containing multiple parts of different types. Digital's Message Router does this. So does X.400. And so does the Multipurpose Internet Mail Extensions (MIME). Each has different capabilities. Sometimes there is an equivalent translation for a portion of a message. Sometimes there isn't. A gateway needs to deal with this in a way acceptable to the sender and the recipient.

Mail Directories

An important topic that has been ignored, so far, is the subject of mail directories supported by various mail systems. These are mechanisms to discover a person's E-mail address when you know his name, title, job location or some other set of information about him. Examples of directory services include Digital's DDS, for use with the message router and related gateway products, and X.500. X.500 was designed in part to be able to support addressing of X.400 mail users. Most Mac and PC mail systems support their own directory systems. It is natural for users of mail systems in a networked environment to use their own directory system to address recipients not only in their own but also in other E-mail systems. To support this, there needs to be a gateway mechanism between the directory systems. This is an area still in the early stages of standardization.
Most of the ``solutions'' in the market are ad hoc and incomplete.

Summary

It's not as easy as it seems to send E-mail across systems and networks. There are lots of tradeoffs and choices for tools and implementations. And as more systems and networks become connected, the choices will increase. Ultimately, every user should be able to send mail from any one system to another, without knowing the details of how it happens. A very good VMS implementation of UUCP is DECUS UUCP, available from a number of ftp sites and the DECUS Library.

Robert Tinkelman is a longtime DECUS provider and has been putting together a number of mailservers for DECUS membership. You may contact Robert Tinkelman directly at bob@tink.com on the Internet.

President's Column
Marg Knox & Tom McIntyre

As you can see, Geri has assembled another super collection of technical articles for DECUS '94. But here, we'd like to focus on Chapter directions. The companion piece to this column is the DECUS U.S. Chapter's Statement of Direction for 1994-1995, immediately following this article.

The D in DECUS is for "Digital"

Digital remains our cornerstone relationship. We recognize we must also establish relationships with other vendors as needed. But let us emphasize -- Digital is our key relationship. We continue to receive good support from Digital. Top corporate officers such as Bob Palmer join us at our national events to talk with members. Digital is currently working internally to greatly strengthen its local support for LUGs (Local Users Groups). When we asked Digital what they needed from us, we were very pleased to hear the answer: help Digital better understand the needs and concerns of members. This fits in nicely with our Advocacy direction.

Advocacy

Advocacy is one of the key differentiators -- what sets us apart from a generic user group. DECUS members and Digital employees have worked hard for many years to make better computing solutions.
We are going to augment this grassroots activity in at least three ways. First, put in place a formal program for obtaining member requirements and Digital responses. Second, emphasize advocacy as a key deliverable for our national interest forums. And third, get individual DECUS member opinions out to the media.

Electronic Services

Another major direction for the Chapter is electronic services. Currently, we are negotiating with vendors for a contract to provide you with reasonably priced Internet access and a suite of basic Internet services. An electronic services architect is putting together a comprehensive plan for electronic DECUS services. We want to make it simple, convenient and practical for you to participate in DECUS services, daily, from your desktop. DECUS U.S. must become a contributor to the national information highway. Stay tuned!

Member Services

Member services is another critical area. Jobs and career transience are now common in our industry. At our last national event, we had our first-ever job listings bulletin boards. The tremendous number of attendees gathering around the two 'Employment Wanted' and 'Employment Opening' boards emphasized the need for this long-lacking DECUS service. We need to make this an e-service for jobs, resumes, consultants, and expertise location. We will work to provide a member directory so you can more easily find each other (note: putting information in the directory will be at your discretion). And we will investigate other services such as certification, credit cards, and insurance for the individually employed.

We are also undergoing a restructuring of the Chapter. We must be able to react quickly in this fast-changing world. We have moved from fixed, hierarchical structures to a flatter, more flexible adhocracy. We have also moved away from volunteers doing much of the time-critical work (e.g.
we were spending many hundreds of thousands of dollars on leadership travel, communications, and meetings to plan member meetings). BUT we remain a volunteer association. Volunteers make the policy and do almost all of the technical content -- members directly helping members.

This is a major cultural change, controversial and painful. While we accomplished most of it under the existing Bylaws, we have submitted to YOU a restructuring of the top of the Chapter. We want YOU selecting the President as well as the at-large positions. We want YOU designating members for the election Search committee. And we want term limits set. None of the volunteer administrators should stick permanently in administration; we need to be part and parcel of the technical meritocracy of our Chapter. If you have questions, send E-mail to the Board at information@decus.org (Internet).

We've taken the first steps. It wasn't easy. We're making changes... not all of them are completed. But in the end, we see a stronger DECUS, a more responsive DECUS, a technical society on your desktop. Let us close by congratulating the many volunteers who give seminars and symposia sessions, write papers, etc. Digital News and Review readers once again selected the DECUS U.S. Chapter as "Best in Training Services". We couldn't have done it without the technical talents and contributions of our volunteers.

DECUS U.S. Chapter Statement of Direction and Objectives

The DECUS U.S. Chapter is an association of Information Technology professionals interested in Digital and related products, services, and technologies. The association's purpose is to promote the unimpeded exchange of information, thus helping its members and their organizations to be successful.
The Chapter provides each member with professional development, forums for technical training, mechanisms for obtaining up-to-date information, member advocacy, and opportunities for informal discourse and interaction with professional colleagues of like interests. To meet the needs of its membership, the DECUS U.S. Chapter develops and maintains relationships and alliances with vendors, consortia, and other professional groups. Chief among these is the Chapter's relationship with Digital, characterized by ready access for the members to information from Digital and to Digital engineering personnel, as well as by an open channel of communication for Digital to hear member issues.

Among the objectives of the DECUS U.S. Chapter are:

o To serve as a powerful advocate for the needs of its members through relationships with Digital, other vendors, industry consortia, and the trade press

o To facilitate interaction among its members through electronic media, including member forums, Internet connectivity, and on-line repositories of member contributions

o To provide venues for face-to-face interaction, including training sessions, member presentations, informal discussion groups, and social activities

o To provide a range of support services for individual members

To achieve these objectives, the Chapter maintains a full-time professional staff, supplemented by a variety of contractors as needed. The activities of these professionals are carried out under the guidance of a board of directors elected by the membership. The Board is assisted in this role by several committees of interested members, which report to the board on various activities; the structure of these committees changes with the dynamic needs of the membership.
Digital's Historical Collection

"A collection of all things Digital has made, both the good things and the bad, so people in and outside the company can study and learn from what we've done in the past, and understand where the company has come from and where it's going." -Ken Olsen

Nestled in the pine trees at a Digital site in Marlborough, Massachusetts, is the 2500-square-foot Central Collection of Digital's Historical Collection. Visitors view computers from the PDP series - 'coupling the computer in real-time to the users' mental processes...' - the VAX series - 'the first single system software and hardware architecture' - and the MicroVAX II series - 'VAX-on-a-chip... the personal minicomputer'. The use of exhibits, period photographs, text recordings and computer artifacts truly brings Digital's past to life. The Collection is ever-growing. Customers and friends have helped locate rare and significant - often still in use - systems and generously made them available to the Collection.

The Digital Historical Collection is located in the lobby of Digital's facility at 2 Results Way (MRO2), Marlborough, Massachusetts. For more information on visiting the collection, contact Digital's Corporate Archives at (508) 493-6924, or write: Digital Historical Collection Program, Four Results Way MR04-2/C16, Marlborough, MA 01752.

DECUS '94/New Orleans
Seminars, Symposium, Trade Show
May 7-12, 1994
Ernest N. Morial Convention Center

It's a multivendor-multiplatform world. Succeeding isn't easy. The challenges are greater than ever. You can't afford to implement best-guess solutions anymore. What is groupware? What are the options in application development? What are the latest advances in network operating systems? How do they fit into my current environment? How can I put all the various E-mail products into a seamless solution? How do I begin to unravel client/server computing? What does Internet access bring to my company? How do I manage it?
What's new with OSF/1, Alpha AXP, Windows NT, PATHWORKS, NetWare and many of today's major platforms and products? For the New Orleans event, DECUS has put together a number of topic tracks to help answer your questions. Presented in a new enhanced format, DECUS '94/New Orleans combines seminars, symposium sessions, a Digital Technology Center and the DECUS Trade Show. This is a high-quality, content-focused program. It's easy to interact with other attendees, presenters, product developers, and industry experts outside of the seminar and session setting.

New "Track" Program

The core program is built on several "topic tracks", making it easier to find the information you need. Scheduled topic tracks include:

Application Development: Developing quality systems for increasingly complex environments; selecting the right tools; organizing the development activity; C to C++ migration; meeting the demands of client/server environments; porting legacy systems.

Avant Garde Technologies: Delivering and supporting imaging, groupware (including Digital's new LinkWorks product), workflow systems, multimedia, and compound document architecture; managing the data explosion.

Client/Server: Strategies and technologies for managing legacy systems; opportunities to reduce costs of moving to client/server; design and implementation of client/server solutions; Digital's client/server strategy including products like DECADMIRE, DEC RALLY, and RTR.

Internet: Making, supporting, using, managing, and exploiting the connection; access and services on OpenVMS, UNIX and PCs; using and understanding tools like Gopher, FTP and Telnet.

Networks: Fundamentals; an introduction to networks and LANs; understanding bridges, routers, and hubs; high-speed networks FDDI and ATM; large-scale networks; network operation hints and kinks; migrating between DECnet Phase IV and DECnet/OSI; linkages.

Network Operating Systems: Fundamentals; an introduction to design and operations; a PATHWORKS Spotlight; a NetWare Spotlight.
OpenVMS: The latest products and releases; a System Manager's Spotlight; system and performance management on the VAX and Alpha AXP platforms; OpenVMS to AXP migration.

UNIX, ULTRIX and OSF/1: An introduction to UNIX, commands and utilities; UNIX for Masters; OSF/1 Developers Track; OSF/1 for OpenVMS System Managers.

Windows and NT: Beyond personal productivity tools; NT Developers Conference; internals of Windows NT and 32-bit programming on Alpha AXP and Intel platforms; information on integrating NT into existing multivendor networks; Windows 3.1 and Windows for Workgroups; Windows for distributed computing and production systems.

Keynote Speakers

Digital CEO Robert Palmer returns to DECUS to deliver the keynote address and to provide an update on the company's strategy and future direction. Bud Enright, Digital's vice president of Client/Server Software, delivers an address on "Implementing Client/Server Now". In this product update, Mr. Enright presents the underlying technologies and critical open standards incorporated into newly announced Digital products. Sam Fuller, Digital's vice president of Research, discusses the "Information Revolution", and Jerry Baker, senior vice president of Product Lines for Oracle, contemplates "UNIX in the Enterprise: The Right Choice For Mission Critical Client/Server Systems?"

The Digital Technology Center and DECUS Trade Show provide an opportunity for hands-on problem solving using the latest technologies. From fundamental to advanced, featured technologies include Alpha AXP, multimedia, OpenVMS, storage, networking, systems management, application development, transaction processing and much more. More than 100 leading companies participate in the Trade Show, including Digital, Xerox, WRQ, Software Partners/32, Oracle, SAS Institute and Ross Systems, to name a few.

The early-bird registration deadline for DECUS '94/New Orleans is April 20, 1994. Register by April 6, 1994 and qualify to win a Bermuda vacation for two.
If you haven't received materials on DECUS '94/New Orleans, call 1-800-DECUS55 today or send an E-mail message to Information@DECUS.Org (Internet).

Introducing DECUS ClassPass
A DECUS Educational Futures Program
Helping Members Invest in Their Future Through Education

DECUS ClassPass is a unique training opportunity allowing members of the DECUS U.S. Chapter to take advantage of courses offered by Digital Learning Services (DLS) at specially discounted rates. DECUS ClassPass provides:

- a choice of over 500 standard course offerings
- software lecture/lab and seminar style courses
- both standard and customized training options
- an experienced staff of training professionals
- flexibility. Attend classes at Digital Training Centers or conveniently schedule training at your own site.

How DECUS ClassPass Works

Through a special arrangement with Digital Equipment Corporation's Digital Learning Services Group (DLS), the DECUS U.S. Chapter has purchased a number of training passes good toward any Digital software lecture/lab course or seminar course. By purchasing these passes in volume, DECUS is able to take advantage of specially discounted prices, which are then passed along to the DECUS membership. While savings vary depending on the original price and length of individual courses, the DECUS ClassPass program allows participants to purchase classes at a discount.

Training Subscriptions

Subscriptions for DECUS ClassPass training are sold for a fixed price of $1320. You may use this pass any time within a 12-month period. There is a $90 service fee per purchase order, regardless of the quantity of subscriptions purchased on a single order. For course selection assistance, members must contact the DECUS ClassPass Account Representative at 508-493-6996.

DECUS ClassPass Training Options

In addition to participating in regularly scheduled DLS courses, DECUS ClassPass participants may also take advantage of the following group training options.
On-site Training

The On-site Training option is available to any LUG with access to a suitable training facility in their area. In addition to the price breaks associated with this group-rate option, participants can also

- save time and travel expenses
- enjoy the convenience of local training
- customize the course to meet specialized training needs
- perform lab exercises on members' systems.

Exclusive Training

Group lecture/lab courses can also be scheduled at a Digital Training Center. Similar to the On-site Training option, course content can be customized to meet the needs of participating DECUS members. Fees are also the same as those for the On-site Training option. For information about the program, the nearest LUG or your LUG Advocate, call 1-800-DECUS55 or 1-800-322-8755.

DECUS ClassPass may not be used toward enrollment in DLS hardware courses or Corporate Leadership Forums. These special rates are available only to members of the DECUS U.S. Chapter.

The following are trademarks of Digital Equipment Corporation: ALL-IN-1, Alpha AXP, AXP, DEC, DECtalk, DECnet, DECNIS, DECUS, DECUS logo, DELNI, DEMPR, DEPCA, DECADMIRE, DEC Rally, Digital, Digital logo, eXcursion, LAT, LinkWorks, OpenVMS, PATHWORKS, TeamLinks, ObjectBroker, ULTRIX, VAX, VMS. OSF and OSF/1 are registered trademarks of the Open Software Foundation Inc.; UNIX is a registered trademark of UNIX System Laboratories Inc., a wholly-owned subsidiary of Novell, Inc.; cc:Mail is a registered trademark of cc:Mail, Inc.; NetWare and Novell are registered trademarks of Novell, Inc.; MS-DOS and Windows NT are registered trademarks and Windows is a trademark of Microsoft Corporation; Intel is a trademark of Intel Corporation; Mac is a registered trademark of Apple Computer, Inc.