24 Apr 2006

Library Processes and Tools

Olin Sibert, Gary Dixon

[Olin Sibert]

I was one of the librarians at MIT early in my career (around MR5/MR6, I think), and I specified a lot of the requirements for what should be in the MR11 tools for satisfying the B2 requirements, but I never looked at the details. This is what I recall about the process and tools. I don't know what happened after the ACTC development transition-- it wouldn't surprise me if they ported some of the Unix source control utilities, since, after all, they brought us the C compiler.

In development, source control was manual: as developers, we were responsible for keeping track of versions and merges, and for keeping track of who was modifying what. We had nothing like SCCS/RCS/CVS that I knew of, although SCCS certainly existed for Unix systems during the later years of Multics development.

The merge_ascii program was essential (thanks again to Bob Mullen). Maybe it's just because I learned merge_ascii first, but I have yet to find another merge tool, visually-oriented or otherwise, that makes the merging process as smooth.

We had nothing like Makefiles either, as far as I can recall. Although the librarians had a script-generating tool (see below), that tool wasn't generally available to, or used by, the developers. The make process, however, was easier on Multics than on many systems:

pl1 ([segs *.pl1]) -ot
archive u bound_foo_("" .s).archive (* *.pl1)
bind bound_foo_

took care of it for most changes. For complicated builds, it was not uncommon to write special exec_com scripts to do all the right steps for compilation, updating, naming, etc., but that was often not necessary.

Because we didn't have make, we didn't have any automatic dependency handling--that was dealt with manually (often) or by simply recompiling everything (less often). The cost and time of compilation was a significant burden in the 1970s, so there was considerable incentive to keep track of what needed changing, and especially to avoid incompatible include file changes. Of course, this sometimes didn't quite work, and caused the most bizarre behavior in testing until one typed new_proc and recompiled everything after all.

When something was done and ready for "service", we'd fill out yellow installation forms identifying all the segments and the approved changes (MCRs, MTBs) that they represented. Those forms would then go to MIT (or, after MIT dropped out, to Phoenix), where the librarians would install the software on the running service system. I believe the paper installation forms were largely replaced with online forms around MR11 (that change, and others, were made to satisfy the Configuration Management requirements for the B2 evaluation).

The installation forms were the major gateway for change management: the librarians wouldn't accept anything without evidence of approval, either an approved MCR from the MCR board (stapled to the form) or, rarely, a sign-off from "management" (in case of emergencies of one sort or another). Installation forms also controlled the peer review / code auditing process: to submit an installation, you had to get someone else to read and agree with the implementation, and both the submitter and the auditor had to sign the form. Audit approval often involved some back-and-forth about the details and some additional code changes. This casual approach worked well for modest changes--it was not uncommon for someone to walk down the hall and ask "can you audit this change so I can submit it today?". Auditing for big things like "Install V2 Fortran" was usually a more structured activity, done throughout the development rather than just as an ad hoc request at the end.

This installation process, of course, compiled and built everything from scratch, based on the developer's instructions, and it was not unusual to have an installation form kicked back because something was missing or couldn't be built outside the developer's own environment. This step was a major quality control check on the developers.

There were installation management tools. Custom tools, gof and pinst, essentially collected the information from the yellow forms into a machine-interpretable form and created the installation log entries. They created scripts to compile and bind stuff, although those scripts often had to be manually edited or augmented to handle special aspects of an installation.

Those tools also created the installation script that drove the update_seg tool (which was part of the product, unlike gof and pinst). The update_seg tool created a database of all the changes that had to be made (segments added, deleted, renamed), then executed them in one big--and reversible--batch. Sometimes they did have to get backed out, and the ability to do that with update_seg was priceless--I think it must have been a nightmare to do that manually or with scripts.
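As a rough illustration of how such a reversible batch can work, here is a minimal sketch in Python (used purely for illustration; update_seg was of course a Multics program, and none of the names below come from its actual interface). The idea is the one described above: perform the planned operations one by one, record an inverse for each completed step, defer permanent deletion of old versions by moving them aside, and back everything out in reverse order if the installation fails.

import os
import shutil

class ReversibleInstall:
    """Sketch of a reversible batch of library changes (hypothetical names,
    not the update_seg interface).  Each completed step pushes its inverse
    onto an undo log; deletions are deferred by moving the old version
    aside, so the whole batch can be backed out until a later
    "permanent deletion" pass runs."""

    def __init__(self, library, attic):
        self.library = library     # directory being installed into
        self.attic = attic         # holding area for superseded versions
        self.undo_log = []         # inverses of completed steps
        self.deferred = []         # old versions awaiting permanent deletion

    def add(self, new_file, name):
        dest = os.path.join(self.library, name)
        if os.path.exists(dest):   # replacing: move the old version aside first
            saved = os.path.join(self.attic, name)
            shutil.move(dest, saved)
            self.undo_log.append(lambda d=dest, s=saved: shutil.move(s, d))
            self.deferred.append(saved)
        shutil.copy2(new_file, dest)
        self.undo_log.append(lambda d=dest: os.remove(d))

    def rename(self, old_name, new_name):
        src = os.path.join(self.library, old_name)
        dest = os.path.join(self.library, new_name)
        shutil.move(src, dest)
        self.undo_log.append(lambda s=src, d=dest: shutil.move(d, s))

    def delete(self, name):
        src = os.path.join(self.library, name)
        saved = os.path.join(self.attic, name)
        shutil.move(src, saved)    # moved aside, not really gone yet
        self.undo_log.append(lambda s=src, v=saved: shutil.move(v, s))
        self.deferred.append(saved)

    def back_out(self):
        # Undo every completed step, most recent first.
        while self.undo_log:
            self.undo_log.pop()()
        self.deferred.clear()

    def finalize(self):
        # The "permanent deletion" step: after this, no more backing out.
        for saved in self.deferred:
            os.remove(saved)
        self.deferred.clear()
        self.undo_log.clear()

In these terms, an installation script would issue one add, rename, or delete call per segment listed on the form, call back_out if any step failed, and run finalize only after the installation had been live and tested for a while.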

At MIT, we published (as Multics Installation Bulletins, MIBs) descriptions of everything that got installed, usually on a weekly basis for non-hardcore changes, and one for every new hardcore tape (those, of course, were not installed online). We've scanned several hundred MIBs from the MIT files.

The old versions of everything installed in this manner were kept around for "a while", but eventually got cleaned out. While they were there, they could be used (with compare_ascii) to identify the exact lines of code changed when there was a problem. Usually, however, such problems would get resolved by asking the responsible developer, not by actually hunting them down. I think old versions got cleaned up when space was short, not on any release-related schedule.

In MR11, I believe (but never personally observed) that we added a bunch of capabilities/tools to manage the "change comments" that were in the source files so that they always identified the source of the change (e.g., MCR 4421), the originator, the approving authority, etc. Thus, the source file itself contained a fairly detailed--and machine parseable--record of the changes. That record could be used to generate reports, identify related changes, etc.--but it didn't identify the actual changed lines of code, just the nature and purpose of the change and, usually, enough hints to find where it had been made.
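To make concrete what a machine-parseable change record makes possible, here is a minimal sketch in Python (again, only for illustration) that scans source text for change entries and groups them by MCR. The entry layout assumed here (change/approve/audit/install fields, each with a date and a value) is patterned on the description above, not on the actual format the MR11 tools wrote, and the dates, names, and identifiers in the sample are invented.

import re
from collections import defaultdict

# Hypothetical change-comment entry, patterned on the description above
# (source of the change, originator, approving authority, installation);
# not necessarily the exact layout the MR11 tools used.
FIELD_RE = re.compile(
    r"(?P<field>change|approve|audit|install)"
    r"\(\s*(?P<date>[^,()]+)\s*,\s*(?P<value>[^()]+?)\s*\)")

def parse_change_entries(source_text):
    """Return one dict per change entry found in the source text."""
    entries = []
    current = {}
    for m in FIELD_RE.finditer(source_text):
        if m.group("field") == "change" and current:
            entries.append(current)            # a new entry begins
            current = {}
        current[m.group("field")] = (m.group("date"), m.group("value"))
    if current:
        entries.append(current)
    return entries

def changes_by_mcr(entries):
    """Group entries by approving MCR, the kind of report described above."""
    report = defaultdict(list)
    for entry in entries:
        mcr = entry.get("approve", ("", "unapproved"))[1]
        report[mcr].append(entry.get("change", ("", "?"))[1])
    return dict(report)

sample = """
change(84-06-01,Sibert), approve(84-06-01,MCR4421),
audit(84-06-15,Dixon), install(84-07-01,MR11.0-1001):
Replace paper installation forms with online forms.
"""
print(changes_by_mcr(parse_change_entries(sample)))
# prints: {'MCR4421': ['Sibert']}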

The library tools were for fetching stuff from the source libraries, but didn't provide much management capability beyond that. The get_library_segment program was fast and straightforward, but it was the "old way" and not quite flexible enough to satisfy all requirements--it worked only on the system libraries in >ldd. The later library_fetch and similar tools (which were completely different in implementation, not just a simple rename) were more sophisticated and worked well for managing private libraries, but they were slow and ponderous. I think most developers preferred get_library_segment because, after all, they were working on "the system", and it worked fine there.

Additional processes took place in Phoenix to create the Multics release tapes, which were largely a copy of the >ldd hierarchy, but were cleaned up somewhat and tested beyond anything that happened during the service installations. This was where things like the ability to BOOT COLD were verified, because it would be a darn shame if you couldn't install it. At MIT, we never installed the standard releases, so there were often some details that were a little out of sync with what customers had, but I think the Phoenix service system did install them in addition to making live updates to the system.

[Gary Dixon]

The objectives for update_seg were mainly to provide a reliable mechanism for installing a related set of changes (perhaps including renaming, ACL changes, adding and removing segments from the visible library, etc.) in a manner that minimized the apparent time of installation (the period of library inconsistency), so that changes could be installed in libraries while users were actively using the pre-install versions of the new/changed programs and libraries. A second goal was to make such installation steps reversible, so that completed steps of a failed installation could be safely (and automatically) undone if an installation problem was detected.

There were several reasons for leaving older versions of files around for some period after the installation was completed. The most important was that running user processes would be using these older program versions for some time after the installation, so the older segment versions could not be deleted while processes still had them in use.

Also, this mechanism for temporarily saving older file versions was part of the overall reversible installation mechanism. Nothing was permanently deleted until the entire installation unit was successfully installed and tested. An installation could be reversed several days after completion, so long as its "permanent deletion" step had not been completed.

Source

[THVV] Source code for the library tools as of MR12.5 is available online.

The MR12.5 code at MIT dates from 1999, years later than the early 80s time frame described above. Some of the features described may have been improved or replaced in subsequent changes to Multics.

24 April 2006, revised 14 Feb 2013