Multics 843 entries
15 Jul 2024

Glossary - S

Glossary of Multics acronyms and terms. Entries by Tom Van Vleck ([THVV]) unless noted.

safety switch
Segment attribute that controls whether a segment can be deleted.

Salvager
[BSG] Set of programs for reconstructing directory and file system integrity after a crash. Until NSS, the Salvager was a separate tape booted after a crash; built of Multics supervisor parts and running entirely in ring 0, it scanned and repaired the file hierarchy instead of coming up to serve users. Eventually, more and more of its functionality was moved to code that can run while the system is up. When a crawlout occurs, locked directories are salvaged before returning to the user ring. The salvager does more or less what Unix fsck does.

SCAS
[BSG] (for "System Controller Addressing Segment") A supervisor segment with one page in the memory space described by each SCU (or other passive module) in the system. The SCAS contains no data, is not demand-paged, and its page table is not managed by page control. Instructions directed specifically to an SCU as opposed to the memory it controls are issued to absolute addresses in the SCAS through pointers (in the SCS) calculated during initialization precisely to this end. Typical of such instructions are cioc (issue a connect), smic (set an interrupt), and the instructions to read and set SCU registers (including interrupt masks). On the 6180, rccl (the instruction to read the SCU-resident calendar clock) did not require an absolute address, but a port number, obviating the need for the SCAS to be visible outside ring 0 to support user clock-reading as had been the case on the 645. The SCAS, which is a segment but not a data base, is an example of exploiting the paging mechanism for a purpose other than implementing virtual memory.

scavenger
Program for finding records on a physical volume "lost" from its record stock due to crashes, and making them available again in the volume map. Written by John Bongiovanni.

scheduler
[BSG] Used ambiguously, as in today's operating systems, to denote either the piece of software that manages multiprogramming or the module that allocates time slices to users. Since in Multics these are the same program, "pxss" (process exchange, switch stack), this was not a problem. This very large, complex assembly language module was highly optimized, including by many non-modular hooks into page control. Source is online.

[THVV] The Multics scheduler began as a Greenberger-Corbató exponential scheduler similar to that in CTSS. About 1976, it was replaced by Bob Mullen's virtual deadline scheduler.

SCICONIC
Mathematical programming facility produced by SCICON, Ltd. for Multics in 1986.

SCOMP
Secure COMPuter. Secure front-end processor, done by Honeywell Aerospace in Tampa; the first system to get an Orange Book A1 rating. Originally called SFEP.

SCS
[BSG] (for "System Configuration Segment") A wired ring 0 data base, initialized from data in the config deck very early in Collection 1, describing the overall system configuration, i.e., the CPUs and other active devices and their connections to the SCUs and their status. The SCS's metier is knowledge of which ports of what are connected to each other, as well as how much address space is described by each SCU, which are interlaced with each other, etc. The SCS is used heavily and modified during system dynamic reconfiguration.

Among the data in the SCS is an array of pointers into the SCAS indexed by CPU port. See the entry on the SCAS for more on related topics.

Multics site: SCSI. Southern Company Services, Inc., Atlanta GA. Nuclear fuel inventory. Installed 1982.

SCU
[BSG] (1) System Control Unit, or Memory Controller. The multiported, arbitrating interface to each bank of memory (usually 128 or 256 KWords), each port connecting to a port of an "active device" (CPU, GIOC or IOM, bulk store or drum controller). On the 645, the clock occupied an SCU as well. The SCUs have their own repertoire of opcodes and registers, including those by which system interrupts are set by one active unit for another. (See connect.) The flexibility of this architecture was significant among the reasons why the GE 600 line was chosen for Multics. See SCAS.

(2) Store Control Unit instruction. The 645 and 6180 instruction that stores an eight-word encoding of the processor state (other than register contents and the state of the EIS unit on the 6180) at the time of an interrupt or fault. Because of potentially lengthy indirect chains and instruction modifiers with side-effects (increment-on-reference pointers), instructions must be restartable in the middle. Only some of this data is used by the software, and there are bits that are by and large not understood by the system staff: most is used by the processor upon restart. With the EIS unit data, the processor state took 40 words to encode.

SDW
Segment Descriptor Word. An element of a process's descriptor segment; the hardware-accessible data element that defines a segment and the process's access rights to it.

search rules
The dynamic linking mechanism looks for segments in a set of directories specified by the user process's search rules. These rules are per-process and stored in the RNT. In addition to specifying a list of directories, there are three special values that can be specified:
- a rule specifying the use of already initiated segments (this rule should always be first)
- a rule to search the current working directory (which might have changed since the search rules were set)
- a rule to search "the same directory as the one holding the procedure that took the linkage fault." This last is a subtle brilliance that helps subsystems find the version of code that they are packaged with.

There is a search list facility, which generalizes the search rules to work for multiple different uses. One's "compose" search list, consisting of multiple search paths, can be different from the search list used for object segments.
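A minimal sketch of how such rules might be applied when resolving a reference name. The rule keywords, the dict-based "file system," and the function signature are all illustrative assumptions, not the actual ring-0 interfaces:

```python
# Toy "file system": directory pathname -> set of entrynames.
DIRS = {
    ">udd>Proj>me": {"my_tool"},
    ">sss": {"list_cmd"},
}

def resolve(ref_name, search_rules, initiated, referencing_dir, working_dir):
    """Return the directory (or initiated location) that satisfies ref_name."""
    for rule in search_rules:
        if rule == "initiated_segments":
            # use a segment already made known by this reference name
            if ref_name in initiated:
                return initiated[ref_name]
            continue
        # special keywords are evaluated at fault time; anything else
        # is taken as an ordinary directory pathname
        d = {"working_dir": working_dir,
             "referencing_dir": referencing_dir}.get(rule, rule)
        if ref_name in DIRS.get(d, set()):
            return d
    return None
```

For example, with rules `["initiated_segments", "referencing_dir", ">sss"]`, a subsystem's own directory is searched before the system libraries, which is what lets packaged code find its own versions first.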

second system effect
As described in The Mythical Man-Month by Fred Brooks, the Second System Effect causes the developers of a successful system to overload their next system with features. Some say Brooks had Multics in mind when he named this syndrome. It is worth noting that Multics did not stop at being a second system. 6180 Multics was easily a third system, a nearly complete rewrite with vast simplifications. The NSS changes were a fourth system. Another major set of changes was in progress to build transaction processing with write-ahead-log into the virtual memory system, about the time Bull killed the system. Story: Phase One.

security
Security was one of the basic design requirements for Multics. Access control, supervisor integrity, and passwords were present from the beginning of the design. The Access Isolation Mechanism was later added to support non-discretionary security. As a result of these features, Multics was sold to customers concerned about security, such as the military and government sites. Multics received an Orange Book rating of B2 in August, 1985. Article: B2 Security Evaluation. Story: How the Air Force broke Multics security.

security-out-of-service switch
The "soos" switch is set if the salvager detects a directory whose AIM classification is lower than that of its containing directory. Such an entry could be used as a write-down path. When the switch is set, neither the directory nor its contents can be referenced. The Site Security Administrator can reset this switch.

SEDACS
Support Equipment Data Acquisition and Control System, an application hosted on the McDonnell Douglas site in Long Beach CA. Supported the C-17 aircraft.

segfault
[BSG] Jargon for "(missing) segment fault". What the hardware takes when a segment number is used for which there is no valid SDW. This is not necessarily an invalid or error situation; in fact, it is the default situation when a segment has just been made known. Segfaults are resolved by looking in the process's KST (table of segments known to it) to find out what segment is intended to be referenced, and searching the AST for that segment's page table in order to construct the SDW. If the segment is not active, it has to be activated, perhaps requiring some other segment to be deactivated to make room for its page table. When a segment is deactivated, segment control invalidates the SDWs of all processes having SDWs for it, to start this tale at the beginning again.
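The resolution path just described can be sketched as follows. The dict-based KST, AST, and SDW table, the tiny AST size, and the eviction policy are all illustrative assumptions; only the overall flow follows the entry:

```python
AST_SIZE = 2    # absurdly small, to force deactivation in the example

def make_page_table(uid):
    # stand-in for building an ASTE and page table for the segment
    return {"uid": uid}

def evict_one(ast, sdw_table):
    # deactivate some segment and invalidate every SDW connected to it;
    # a process using one of those SDWs will take a fresh segfault
    victim = next(iter(ast))
    del ast[victim]
    for segno in [s for s, sdw in sdw_table.items() if sdw["uid"] == victim]:
        del sdw_table[segno]

def segfault_handler(segno, kst, ast, sdw_table):
    uid = kst[segno]                    # KST: which segment was meant?
    if uid not in ast:                  # not active: activate it
        if len(ast) >= AST_SIZE:
            evict_one(ast, sdw_table)   # make room for its page table
        ast[uid] = make_page_table(uid)
    sdw_table[segno] = ast[uid]         # construct/connect the SDW
```

The point of the sketch is the cycle: connecting an SDW requires an active segment, and deactivating a segment disconnects SDWs, restarting the tale.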

segment
User-visible subdivision of a process's address space, mapped onto a storage system file. Each segment can be up to 256K 36-bit words (about one megabyte) long, and addresses within a segment begin at zero. The term "segment" is used interchangeably with "file" -- except not really: the things that are files in other systems are implemented as segments; also, the term "file" includes multi-segment files, and when talking in terms of COBOL, PL/I, or FORTRAN language runtime objects, one speaks of files. Programs are spoken of as stored in (procedure) segments. Correct use of the terms "file" and "segment" is a sure sign of a Multician.

Segment Loading Table
[BSG] (SLT). A database created and maintained by initialization, vestigial and deciduous during system operation, that describes all of the segments on the boot tape (see collection), including all of the names to be associated with them (needed to pre-link the supervisor), the access attributes in their SDWs (see REWPUG), length, bit count, and the like. The image of the SLT entry (SLTE) for each segment on the boot tape, generated by the MST generator program, precedes it on the tape.

segmentation
[BSG] Division of a process's virtual memory into a vector of vectors, each such vector being a segment. The idea came from the Burroughs 5500 series. Different segments can have different access rights, including for different users, and other differing attributes. Multics's use of segmentation is as a basis for what are now called "memory-mapped files", hence each segment is a file and each file is a segment, in some sense, the basic idea of Multics. See "The Multics Virtual Memory: Concepts and Design."

segment control
[BSG] Ring 0 software responsible for the management of segmentation, the allocation and deallocation of page tables, the connection and disconnection of the SDWs of processes from the page tables, and the performing of operations (such as truncation) upon active segments. Tightly bound up with, and requesting service of, page control. In the "New Storage System" (NSS), segment control is tightly bound up with VTOC management as well.

Selectric
IBM mechanism for typewriters and computer terminals. The character set was on a plastic "golf ball" that struck the paper through a typewriter ribbon. This mechanism was used in the IBM 1050 and 2741 terminals, in various third-party terminals, and in the console typewriter of the 6180. There were three grades of this mechanism OEM'd by IBM: light, medium, and heavy duty; the 6180 console had a heavy-duty model.

Multics site: Société Européenne de Propulsion, Vernon, France, near Paris. 1986-1990. Ariane rocket engines.

Series 60, Level 68
Marketing name for a repackaged 6180. Later called the DPS-68.

SET
System Environment Test. Honeywell Phoenix organization.

SFEP
[WEB] Secure Front End Processor; the original name of the machine that eventually became the SCOMP.

shell
The Multics command processor used to be called the shell. The listener passes this program a command line for execution; it parses the line into a command name and arguments, locates the command and initiates it, and calls the command program with arguments that are PL/I character strings. It is simple to replace the default system-supplied shell with a user-provided program, by calling cu_$set_cp (see abbrev). A Unix shell combines the concepts of both shell and listener in the Multics sense.

Louis Pouzin's story of "The Origin of the Shell" describes the genesis of the concept.

An early implementation of the Multics command language is described in MSPM section BX.1.00.
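The listener/shell split can be sketched in a few lines. The parsing here (whitespace splitting) is a deliberate simplification of the real command language, and the `cp` parameter stands in for the cu_$set_cp-style replacement hook:

```python
def shell(line, commands):
    """Parse a command line into a name plus string arguments and invoke it."""
    parts = line.split()
    if not parts:
        return None
    name, args = parts[0], parts[1:]
    return commands[name](*args)    # arguments are plain character strings

def listener(lines, commands, cp=shell):
    """Read command lines and hand each to the command processor.

    Passing a different cp replaces the shell, as cu_$set_cp allows."""
    return [cp(line, commands) for line in lines]
```

A user-supplied `cp` that expands abbreviations before delegating to `shell` would model the abbrev processor.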

shift
The accounting system defines up to 8 shifts, which can start at any half-hour boundary in a week, at site option. User Control charges user processes' resource usage against the slot for the current shift in a per-user vector in the PDT for the user's project.
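Since shifts change only on half-hour boundaries, a week is 336 half-hour slots and a site's shift definition is just a table indexed by slot. The table layout and the Sunday-first weekday convention below are assumptions for the sketch:

```python
def shift_at(weekday, hour, half, table):
    """Look up the shift for a moment in the week.

    weekday: 0-6 (Sunday = 0, an assumed convention)
    hour: 0-23; half: 0 for :00-:29, 1 for :30-:59
    table: 336 entries, one shift number per half-hour of the week
    """
    return table[weekday * 48 + hour * 2 + half]

# Example site option: shift 1 on weekdays 8:00-17:59, shift 2 otherwise.
table = [1 if (1 <= d <= 5 and 16 <= h < 36) else 2
         for d in range(7) for h in range(48)]
```

Charging then amounts to adding usage into the per-user PDT vector at index `shift_at(...)`.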

Terminal devices usually have a Shift key that causes the keyboard to transmit different characters depending on whether the key is depressed. The term is used more generally to describe any mode change in an output device, such as shifting from black ribbon to red.

shriek name
[BSG] Multics has a convention of converting unique IDs into character string names: >pdd>!BqrHmpZZtL was a typical pathname of a process directory, generated from a 36-bit process ID. These were called "shriek names," because a colloquial pronunciation of the exclamation point was "shriek." The unique_name_ subroutine reduced the alphabet to sixteen characters to eliminate the possibility of obscenities: all vowels were removed, "v" because it can be made to look like a "u", "f", of course, "y" because it's like a vowel, and 2 others. The initializer's process ID (777777000000) always came out !zzzzzbBBBBB, which was thus always the name of its process directory. Rich Lamson suggested that this was Hebrew for "Fly of the Lord" (the Initializer's function, in some sense; cf. "Beelzebub", Heb. for "Lord of the Flies").
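The idea, though not the exact algorithm, can be sketched as re-encoding a 36-bit ID in a restricted alphabet. The 16-letter vowel-free alphabet below and the fixed 9-symbol width are assumptions; the real unique_name_ output (as the examples above show) uses a different alphabet and mixed case:

```python
# Illustrative vowel-free alphabet: 16 symbols, so each encodes 4 bits.
ALPHABET = "BCDGHJKLMNPQRSTZ"

def shriek_name(uid36):
    """Encode a 36-bit unique ID as a shriek name (sketch, not unique_name_)."""
    chars = []
    for _ in range(9):                  # 36 bits / 4 bits per symbol
        chars.append(ALPHABET[uid36 & 0xF])
        uid36 >>= 4
    return "!" + "".join(reversed(chars))
```

With only consonants in play, no encoded name can spell an obscenity, which is the whole point of shrinking the alphabet.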

SIB
Software Installation Bulletin. Installation instructions for each Multics release.

signal
PL/I name for an exception. PL/I signals have continuation semantics; that is, a condition handler can return to the point of the signal. The Multics environment maps hardware events, e.g. zerodivide, linkage fault, out-of-segment-bounds reference, into PL/I signals. When a signal is raised, the runtime searches back up the stack for a condition handler for the signal, and invokes it if found. If no handler can be found, the runtime starts over, looking for a handler for the condition "unclaimed_signal". Each listener on the stack establishes an unclaimed signal handler. If invoked, this handler prints a message and "caps" the stack, establishing a stack frame for a new "command level" by calling a new listener. Issuing the start command to the new listener returns to the signal handler, which returns to the point of error and retries the instruction that faulted.

For linkage faults, this setup is ideal: if your program calls a non-existent subroutine, the dynamic linking mechanism will signal "linkage_error," the signaller will search the stack for a handler, and finding none, will come to command level and let you write the missing routine, compile it, and then type start to invoke it. Issuing the command

resolve_linkage_error segname$entryname

patches the machine conditions in the most recent signaller frame and retries a linkage fault, so you can specify a different segment or entrypoint name, and thus redirect a call if you made a simple spelling error.

The answer command establishes a condition handler for command_question and executes a command line under it; when a program asks a question by calling command_query_, this subroutine signals command_question. The signaller finds and invokes the answer command's handler, which supplies a canned answer to the question (by modifying the arguments to signal_) and continues execution. Thus, for example, one can say

answer no delete **

to delete a lot of files without interaction, supplying a "no" answer if any questions are asked. Similarly, the on command can establish a handler for any condition and execute a command in the scope of the handler, and take specified action when the condition is raised.
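The handler search described above can be sketched with an explicit frame chain. The `Frame` class and return values are illustrative; in particular, real PL/I handlers resume execution at the point of the signal rather than returning a value:

```python
class Frame:
    """One stack frame; handlers are established per frame."""
    def __init__(self, parent=None):
        self.parent = parent
        self.handlers = {}          # condition name -> handler function

def signal(frame, condition, info):
    """Search back down the frame chain for a handler and invoke it."""
    f = frame
    while f is not None:
        if condition in f.handlers:
            return f.handlers[condition](info)
        f = f.parent
    # nobody claimed it: start over, looking for unclaimed_signal
    f = frame
    while f is not None:
        if "unclaimed_signal" in f.handlers:
            return f.handlers["unclaimed_signal"](info)
        f = f.parent
    raise RuntimeError("no handler and no listener on the stack")
```

In this model, the answer command is a frame that establishes a command_question handler returning the canned answer, and each listener is a frame establishing an unclaimed_signal handler.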

Simulation language, descendant of OPS-4, compiled into Multics PL/I. Language designed by Prof. Malcolm M. Jones of MIT Sloan School.

SIPB
[BSG, Frankston] MIT's Student Information Processing Board. A student organization founded in 1969 by Bob Frankston, Gary Gut, David Burmaster, and Ed Fox. The original purpose was to provide access to computers for students, which translated into buying time on the online systems, in the days when computers were big and heavy and cost millions of dollars. To be honest, SIPB was the Student IPB when the computer center was run by the big IPB. SIPB used what is now called soft money: that is, IPSC allocated resources to SIPB by giving it a dollar amount it could spend -- only at IPSC. Of course, as any organization does, SIPB tended to become a social group as well as a service organization. Many of the young Multicians of the late 70s were active SIPB members.

SIPB's relationship to Multics was complex: by virtue of being among its greatest admirers and promoters, they were at once among its most demanding and insistent critics. Overall they were enthusiastic, though SIPB even gave out CTSS time to a user (Paul Green!) for WTBS (now WMBR).

Before Athena, SIPB provided computer time on the Multics project Student, and operated the Educational Calculator Service (ECS) subsystem, which permitted users to program in a BASIC-like language. The anonymous user facility was used to allow multiple limited service users. SIPB also provided a letter quality printer on Multics for text output and provided terminals to dormitories, fraternities, and the Student Center library.

Site Analysts
Heroes of Multics. GE/Honeywell employees who worked at the customer site and helped the customer use the machine, report problems, and install fixes.

Site N
US National Security Agency site, Ft. Meade MD, 1980-1992. They didn't want it known that they had a Multics, so all the lists showed "Site N" and if you had questions, you were referred to some guy in FSO.

The internal name of the site was "Flagship."

NSA had another machine, DOCKMASTER, that was used for unclassified communication among security researchers.

Site Security Administrator
Designated security administration role at some sites. This individual manages the mandatory access control settings for users and data.

Multics site: St John's University (Jamaica NY). 1981-1989.

slave mode
Unprivileged execution mode of the CPU. See privileged mode.

See Segment Loading Table.

SMM
Segment Management Module. Early supervisor module, obsoleted before 1970; managed reference names. Sue Rosenbaum worked on it.

Multics site: Société Nationale d'Etude et de Construction de Moteurs Aéronautiques (Réau, France). 1983-1987.

SNOBOL
String processing language implemented for Multics by Olin Sibert at MIT in 1977. Story: Multics SNOBOL and the Missing END Statement.

Sort/Merge
Multics has a sort/merge facility that was done as part of the COBOL support. Very batch-like, as a result of the language definition.

Multics customer. See Ministerie van Sociale Zaken.

SRB
Software Release Bulletin. Release notes for each Multics release, describing what was changed and fixed.

US Naval War Games System, Software Support Activity, Newport RI. See NWGS site history.

sss
See Standard Service System.

SST
System Segment Table. Supervisor segment fabricated at boot; contains the AST, and thus all the page tables of all segments in use.

STAC
Store A (accumulator) Conditional (on the memory operand being zero). The 645/6180 instruction used universally in Multics to lock locks by storing a process's 36-bit ID in a zero (unlocked) lock word. See RAR.

stack
[BSG] Multics has a stack segment for the stack of each ring of each process (that has been referenced), in the process directory of that process. The ring 0 stack is actually (inevitably) a per-process data base of the supervisor. The need for call stacks follows from the PL/I basis of the programming environment, which determines the stack discipline. The stack contains data with the PL/I AUTOMATIC storage class. Procedures have stack frames - there is no single-data-item pushing and popping (and no instructions to support such). Stacks are kept to less than the full 256K lest wrap-around go undetected. The base of the stack segment contains a table of pointers to critical operators, e.g., the procedure entry sequence. During some fault and interrupt processing, the supervisor uses the upper end of the wired, per-processor data base, the PRDS, as a stack (see PRDS).

[THVV] Article: Multics Execution Environment.

STACQ
[BSG] Store A (accumulator) Conditional on Q (quotient register) (being equal to the contents of the memory operand). Pronounced "stack queue". Indivisible RAR instruction used by Multics to unlock locks: Multics stores a zero in the word if and only if the content is currently the current process's 36-bit process ID; if it is not, the supervisor or SCU RAR handling is broken. STACQ is general enough to simulate, with a little software ingenuity, any other real or desired indivisible storage update, and was often used to generate unique ID tags, event counts, and the like as well as unlocking locks. See RAR.
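The lock discipline built on these two instructions can be sketched as compare-and-swap operations on a lock word. In hardware each instruction is indivisible; here a threading.Lock stands in for that atomicity, and the class itself is purely illustrative:

```python
import threading

class LockWord:
    """A lock word: 0 means unlocked, otherwise it holds the owner's ID."""
    def __init__(self):
        self.word = 0
        self._atomic = threading.Lock()   # stands in for hardware indivisibility

    def stac(self, pid):
        """STAC-style lock: store pid iff the word is zero. True on success."""
        with self._atomic:
            if self.word == 0:
                self.word = pid
                return True
            return False

    def stacq(self, new, expect):
        """STACQ-style update: store new iff the word equals expect."""
        with self._atomic:
            if self.word == expect:
                self.word = new
                return True
            return False
```

Locking is `lw.stac(my_pid)`; unlocking is `lw.stacq(0, my_pid)`, and an unlock that fails means the lock was not held by the caller, i.e., the handling is broken.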

standard service system
Commands and subroutines provided with Multics, necessary for use of the system. Stored in >sss, >sl1, and >unb. Notice the mention of subroutines. Unlike many other systems, Multics makes available a large library of utility routines used to construct the standard commands. These routines are shared, via the dynamic linker, by all programs that call them.

[BWS] Sharing system subroutines provided two synergistic elements of cohesion for the Multics community. First, the writers knew that they would be shared, so the tendency was to do a better job defining and writing them. Second, because they were easily available and well-written, they got used and incorporated in many user programs. Since they worked the same way as the system commands, the overall effect was to increase the consistency and integration of the overall Multics environment (down to user programs). If there is one system subroutine to highlight this effect it would be convert_date_to_binary_. Unix needs this so bad it hurts.

Along these lines, someone should mention OS source code availability. If you wanted to use a system subroutine but it didn't quite do what you needed, the source code was available for instant modification, or at least examination to see how the experts did it. Of course Unix had this too, at first, but it seems to have hurt them more than it helped given the proliferation of versions ("Unix has too many fathers").

star convention
[BSG] Convention of representing the names of multiple entries in a directory with asterisks. Each asterisk matches either one component or the trailing characters of a component: foo.*.pl1 matches foo.a.pl1, but not foo.pl1. ** matches any number of components: foo.** matches foo, foo.pl1, and foo.pl1.old. Although there are subroutines to do star matching and searching, it is the responsibility of commands to call them; star expansion is not done by the shell. See also equal convention.
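The component-matching rules above can be sketched as a recursive match over period-separated components. This simplified version handles "*" (one whole component) and "**" (any number of components); partial-component stars like foo.p* are omitted:

```python
def star_match(pattern, name):
    """Match a star-convention pattern against an entryname (simplified)."""
    def match(p, n):
        if not p:
            return not n                 # pattern exhausted: name must be too
        if p[0] == "**":
            # ** matches any number of components, including none
            return any(match(p[1:], n[i:]) for i in range(len(n) + 1))
        if not n:
            return False
        if p[0] == "*" or p[0] == n[0]:  # * matches exactly one component
            return match(p[1:], n[1:])
        return False
    return match(pattern.split("."), name.split("."))
```

The examples from the entry behave as described: foo.*.pl1 matches foo.a.pl1 but not foo.pl1, while foo.** matches foo, foo.pl1, and foo.pl1.old.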

STARAN
Attached associative processor, built by Goodyear Aerospace, used on the 645 and 6180 at RADC in the mid-70s. Had 1000 1-bit CPUs that could be combined under program control to do vector operations.

According to Gregory Wilson's History of Parallel Computing, in 1972 "Goodyear produces the STARAN, a 4x256 1-bit PE array processor using associative addressing and a FLIP network."

The RADCAP project studied the use of the Staran associative processor for AWACS radar processing.

The Staran daemon ran with a load setting of 2.

static
PL/I storage class. Storage declared STATIC is initialized once per process.

static section
Region of the combined linkage segment where STATIC storage is allocated.

Multics site: Standard Telephones and Cables (New Southgate, North London, England). Originally sold as an Intellect machine. See the STC Site History for more information.

Stratus VOS
See VOS.

subsystem utilities
Library of utility routines for programs that read requests from the user and execute them. Pioneered by Doug Wells in his version of send_mail and made part of the standard system by Olin Sibert and Gary Palter in 1978.

subverter
Program written by the Project ZARF team to test whether hardware protection mechanisms always worked. Ran as a background job trying various invalid operations to see if the hardware ever failed. One problem it found was described as follows: The subverter "found a hardware flaw in the GE-645: if an execute instruction in one segment had as its target an instruction in location zero of a different segment, and the target instruction used index register, but not base register modifications, then the target instruction executed with protection checking disabled. By judiciously choosing the target instruction, a user could exploit this flaw to gain control of the machine. When informed of the problem, the hardware vendor found that a field service change to fix another problem in the machine had inadvertently added this flaw. The change that introduced the flaw was in fact installed on all other machines of this type." (Paul A. Karger and R.R. Schell, 'Multics Security Evaluation: Vulnerability Analysis,' ESD-TR-74-193, Vol II, June 1974.) Story: How the Air Force broke Multics security

suffix
The last period-delimited component of an entryname. See the list of suffixes.

SUNIST
Service Universitaire pour l'Information Scientifique et Technique, Ministère de l'Enseignement Supérieur et de la Recherche. A network service that made information available on Minitel. Installed 1984. Located in L'Isle d'Abeau, near Grenoble. A 1985 BITNET listing says "FRSUN71 SUNIST Bourgoin Jallieu" and by 1990 SUNIST still existed on BITNET but was no longer using Multics.

[Jean-Paul LeGuigner] SUNIST was (does not exist anymore) a service (with a computer center) in charge of indexing libraries catalogs and providing access to universities libraries.

supervisor
[BSG] That portion of the operating system which runs in ring 0, a hardware concept on the 6180. The supervisor is loaded from the boot tape, and, like the rest of Multics, consists of segments, but most of them, although they (mostly) are paged, exist outside of the file system. Having SDWs in every process, the supervisor forms a part, the same part, of every process's address space, occupying its lowest segment numbers. The supervisor's SDWs are permanent, and never incur segfaults. Some parts, e.g., the ring 0 stack and the KST, are actually different segments in each process, although with identical segment numbers. The supervisor can be thought of as a shared, secure domain common to all processes: this gives it the ability to develop and store pointers that are valid across processes, a capability notably lacking from the Multics user rings.

swapping
[BSG] Performance enhancement scheme implemented about 1977 for a Multics Benchmark for Data Communications Corp. of Memphis. Based on various ideas being circulated by Bob Mullen and Steve Webber, swapping was a response to the oft-aired slap, "The reason your system is so damned slow is caus' alla those pages!" Other, less "general" or "elegant" time-sharing systems which swapped user-core to contiguous disk-tracks and the like were frequently more responsive.

The scheme was to allocate critical pages of a process's working set on one contiguous run of disk blocks, and transfer the whole at the time a process gained or lost eligibility by a single massive scatter-gather I/O operation of the IOM, which implemented a type of control word (IONTP, "I/O non-transfer and proceed") that facilitated skipping unwanted, unmodified, or non-resident pages in these contiguous swaps.

Bernie Greenberg designed and executed the baroque, hirsute implementation at CISL, which required radical modification of NSS at every level of page control and segment control. At the peak of implementation fervor, Charles Frankston, who was in Nigeria at the time, appeared to Bernie in a dream and revealed to his horror that the IONTP control words would write zeros to the disk, a fact that no one had theretofore realized, which largely deflated the whole scheme [THAT IS 100% TRUE].

Bob Mullen took swapping to PMDC for live combat. Needless to say, bugs were discovered. While the performance gains first looked encouraging, they soon became moot. Finally, after many days and hours of running and tuning, some directory revealed page control disease, indicating a subtle bug somewhere that was not likely to be found overnight as required. (The problem was the tradeoff among feature complexity, shakedown time, and delivery time.) The scheme and the code were abandoned after the benchmark. See pre-paging.

[THVV] I remember that in the case of swapping, say a process needed 30 pages to work, we found that we got a savings of 29 page fault overheads per process eligibility. But this was canceled out by the need to allocate and hold a 30 page buffer before starting any work, and the increased memory pressure just about exactly canceled the fault overhead savings. The end effect would have been to add a lot of complexity and bug sites to the system for no performance gain, so we didn't.

SysAdmin
Project name for system administrators, the people who registered users and projects.

SysDaemon
Multics project name that daemon processes run under.

SysMaint
Project name for system maintainers, the people who installed new software on the system.

System M
Multics system in Phoenix at CRF used by Honeywell employees for Multics development, benchmarks and for other Honeywell work. See the System M site history.

Systeme X
Multics system in France used by Bull employees for Multics support.

[Gerard Vanderschooten] System X was dedicated to the Bull support group's activity (training, development, validation of new releases and modifications, etc.). It was installed in Louveciennes (around 1982), where the Bull Support Group was located, and then moved to the INRIA platform when Honeywell Bull decided to no longer develop new Multics platforms and the Bull Support Group was reduced (down to one guy). The System X configuration was bi-processor, with one of each of the existing technologies (68 and DPS8-Multics) to allow training of field engineers on both. Story: Systeme X.