04 Aug 2024

Code Auditing

Paragraphs by Tom Van Vleck unless noted.

The Multics development process used the term "audit" to describe peer programmer review of the source code of every change before it was accepted for installation into the system.

After coding, clean compile, and unit test, programs were audited by a colleague with some knowledge of the program's subject area, before submission to the system build group.

It was considered part of a Multics developer's duties to audit the work of others.

What Code Auditing Looked For

Program audit tried to assure that programs were functional and efficient, consistent with the rest of the system, maintainable, and open to future extension.

One source of guidance for how programs should be written was AN82, Multics Standards SDN, last published in June 1980.

[Gary Dixon (GCD) wrote:] Another source of guidance was the body of precedents established by prior MCR Board decisions on software installed earlier. These were documented in published MCRB minutes, published documentation describing user interfaces, techniques used in installed code, etc.

MCR Board meetings often discussed how programs should work, including matters such as control argument processing for commands, correct use of language features, and command language conventions. Sometimes these issues led to the creation of new standard library routines, described first in an MTB, then in the Subroutines manual, internals design manuals (PLMs), and Subsystem Writer's Guide type manuals.

CNO used an audit checklist (example thanks to Phil Dreyer).

Benefits of Code Auditing

Besides finding and removing bugs in code to be released, auditing had a number of indirect benefits. Programmers had an incentive to take extra care to make their code bug-free, readable, and maintainable, because they knew their programs would be reviewed. Auditing benefited the development team as a whole by encouraging interaction and communication between team members, sharing of programming skills and techniques, and rewarding individual contribution. Auditing benefited the product by supporting standardization of best practices and knowledge diffusion within the team: culture, standards, lore, designs, and implementation.

Audit worked for the Multics development process because we operated in a culture where programmers were encouraged to be unselfish, and where other reviews had already been done on the project: each change's goals and design had already been reviewed through MCRs and MTBs, freeing code review (audit) to focus on implementation.

Costs of Audit

There was a cost to the team in reviewer time. (This cost was tolerable for 1:1 review with minimal record keeping.) Resources and attention spent on review competed with other needs: a project that was behind schedule might perceive auditing, and dealing with audit issues, as additional delay. There was some initial concern that auditing might discover long lists of trivial issues, but because our process was informal and person-to-person, this didn't happen.

Geographically distributed teams could have additional problems if programmers and auditors were at different locations. We rarely tried this.

History

Code review practices in the Multics development process changed over time in several ways. The initial 1960s approach was informal. As the system grew, and we learned from our mistakes, we added process safeguards: one of these was audit. In the 1980s, the B2 effort added more process requirements, and altered the way we did audit for code inside the Trusted Computing Base. (Not all kernel changes were subject to special B2 audit/testing procedures. Only those code sections implementing B2 security, access control, access audit trail, and authentication and authorization were subject to additional auditing.)

Auditing began as an informal practice in the mid 1960s. I remember that Noel Morris and I often practiced a kind of "pair programming" in those days, because we had only one terminal to share. One of us would type a line of code but not hit RETURN until the other said "OK" or "wait, I see a problem." We would routinely ask each other to look over new code when we were working on CTSS, and we continued this practice when working on bootstrap1 and other parts of Multics bootload in 1967. Other Multics programmers at Project MAC and GE established similar customs. The idea of reviewing each other's code grew, and we learned from each other and from our mistakes, during the 60s and early 70s.

We started requiring code auditing in the Multics process sometime around 1975. There was some resistance, but we overcame it through peer pressure and leadership by example. (We had never heard of any other code review practice, such as Fagan-style code inspections or structured walkthroughs; we made our process up by ourselves.)

How to Audit

Here is the text of hand-drawn viewgraph slides for the talk I used to give to Multics programmers on code auditing, annotated with my recollection of what I said for each slide. This is a late version, dated 7 Aug 79, so that means it was the talk I gave in Phoenix to the PMDC developers.

Roles

When your program's functions are complete, and it compiles cleanly, and has passed some testing, you'll want to ask a colleague to look it over to see if there's anything else that needs to be done before installation.

Programmer and auditor cooperate to ensure

  • Function and efficiency
  • Consistency
  • Maintainability
  • Future extension

Auditing is not an adversarial process. We are all on the same side here.

The following are not laws or regulations; they are the best suggestions we have come up with so far to accomplish our goal of making Multics high-quality.

Programmer

  • Writes
  • Tests
  • Prepares submission package

Some programs have more than one author, but there's one person who takes on the task of submitting the code for installation. Often, this is the author of the Multics Change Request (MCR) covering the change.

It's easy to test simple programs, like stand-alone commands. For programs that live deep inside a complex environment, it's more difficult to prepare a test environment that exercises every path through the code.

Auditor

  • Examines package
  • Resolves problems with programmer
  • Iteration may be needed

Sometimes an auditor sits with the programmer and they examine the code together. More often, the auditor starts with an audit package including listings, documentation, and comparison output, looks over the submission, and then discusses findings with the submitter.

Often, this isn't a single iteration: the auditor has some concerns, gets more information from the programmer, and looks deeper. If the auditor and programmer find a bug or better way to do something, the programmer will fix it and update the package. Sometimes the programmer will propose that changes recommended by the auditor should be postponed to a later time; auditor and programmer will continue discussion until both agree.

Who Audits?

  • Knowledgeable colleague
  • But not co-author
  • Crucial programs get multiple audit
  • Some people have the knack

It's best to bring in someone who is not too close to the code, in order to get a fresh pair of eyes. Sometimes it is hard to find an auditor, because a subsystem may have a lot of architecture and design decisions that constrain how the code must be written. Auditing spreads the knowledge of how the system works among more people. In the long run, having more people familiar with the code strengthens our team and leads to a more robust Multics.

If an audit would take a lot of work, the available team members may not have time to do it well. When this situation happens, it's best to expand the discussion to more team members and look for another way to get the audit done. Perhaps an expert can be asked to look at particularly risky parts of the programs, and another person can do the general audit. Maybe a walkthrough meeting with several people will work. Doing an inadequate job of auditing, or covering over the problem, is the wrong solution.

Crucial hardcore modules, especially those that are security related, should be audited by more than one person. Sometimes one auditor is really good at performance issues, another at security, and another at making the code meet standards.

Monte Davidoff was one of my favorite auditors. His intelligence and deep programming knowledge made each audit an opportunity to improve my programming; and his cheerful and kind presentation of suggestions was something I looked forward to. (I remember one time when he showed me that old-fashioned EPL-like PL/I code that used the ADDR function to overlay one BASED structure on another could be replaced by strictly legal PL/I declarations using multiple REFER attributes. The new code was simpler and easier to understand, more efficient than the old, and didn't depend on compiler implementation.)
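
The flavor of that change can be sketched. This is my own illustration with invented names, not the actual code Monte reviewed (the real case used several REFER extents in one structure):

  dcl p ptr;
  dcl alloc_len fixed bin;

  /* The refer option ties the extent of text to the count member:
     "allocate msg" evaluates alloc_len, stores it into msg.count, and
     later references size msg.text from msg.count.  The old code got
     the same effect by declaring text with a fixed maximum length and
     overlaying a second based string on it with addr(), which was
     legal only by accident of the compiler's storage layout. */
  dcl 1 msg aligned based (p),
        2 count fixed bin,
        2 text char (alloc_len refer (msg.count));

  alloc_len = 80;
  allocate msg;             /* msg.count = 80; msg.text is 80 chars */
  msg.text = "hello";
  free msg;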

[Monte Davidoff (MND) wrote:]

I viewed auditing as a negotiation process. Correctness was non-negotiable. However, I viewed more subjective comments (e.g., programming style, maintainability, future-proofing) as suggestions. Sometimes I would let these slide, often for a promise from the developer to fix the issue next time, depending on how much time the developer had and how receptive the developer was to the comments. The goal was to consistently improve the code with each iteration. It was sufficient for the code to be better than it was before: perfection was not required. Taking this longer view of the auditing process removed a lot of stress, for both the auditor and the developer.

Auditing Process

Standards

  • PLM AN82
  • MCRB policies
  • MTBs
  • Good practices
  • Evolve in time
  • Common sense

One thing an auditor always checks is that the code conforms to whatever standards apply. The Programming Standards SDN, AN82, has a lot of advice. Many more detailed practices are contained in Multics Technical Bulletins (MTBs), or were discussed in MCRB meetings. None of this is absolute: our understanding changes over time, we discover exceptions, and we keep improving programs. As we read each other's code, we also learn good practices and style, and incorporate them into our own code; some of these improvements should be added to the standards.

Auditing Practices

Function & efficiency

  • Programmer debugs
  • Auditor reviews

The rule of thumb is that the auditor is not expected to rewrite the code: that's the programmer's job.

An easy audit may take a few minutes for the auditor. A quick glance, yup, clearly does exactly what the MCR says it should, sign the form.

On the other hand, a complex change may require an auditor to spend time studying the code and the change request, understanding its relation to other programs, learning the subsystem's architecture and constraints, and interacting with the submitter.

Esthetics

  • Informal
  • Evolves
  • Analogy with other parts of system

The auditor may also comment on esthetic considerations such as formatting. Messy, hard-to-follow code will cost us in the long run. Long stretches of uncommented code may work right, but represent a future maintenance burden. Our understanding of what "good code" looks like is relative to the subsystem it's a part of, and will change as the system evolves and the language changes.

Documentation

  • Updates to manuals
  • Info segments

The submission package includes updates to the manuals and online documentation, and these should be considered as well as the code. This is an area where a second opinion is especially valuable.

Auditing Steps

Submission package

The submitter prepares a package for the auditor that has all the information needed to do the audit. The yellow form is the "table of contents." The description sections in the MCR forms for the change will describe the intent, and it's reasonable for an auditor to think about whether the change accomplishes what it set out to do. Listings are the real meat of the package. Standard compilers produce a cross reference showing where variables are used, and this information can help an auditor understand whether a program does what it ought to.

If the change contains modifications of programs already checked in, the submitter should provide comparisons of the new code to the old. It may take some work by the submitter to provide useful comparisons, if lots of code was repackaged or re-indented.

An explanation of how the programmer tested the change and what was observed is important.

Mechanical steps

Examine compare_ascii output
  • Sometimes enough
  • Break old parts of program?
Examine listing
  • Non-standard
  • Should unmodified areas be upgraded?
  • Warnings, etc.

Checking that the package is complete, and that everything mentioned on the MCR is present, is the first thing an auditor does. If there is previous code to compare to, reading through the comparison listings and identifying what the changed lines do is the next phase. Sometimes this is enough. An auditor wants to make sure that a bug fix won't introduce a bug somewhere else, so often it is a good idea to consult the variable cross reference section of the program listing.
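
For instance, to review a change to a (hypothetical) segment parse_cmd.pl1, the auditor reads the output of an invocation like the following; the directory and segment names here are invented, and I won't try to reconstruct compare_ascii's control arguments from memory:

  compare_ascii old_source_dir>parse_cmd.pl1 new_source_dir>parse_cmd.pl1

The submitter usually includes this comparison output in the package, so the auditor reads it rather than regenerating it.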

Auditors also look at the new program as a whole, considering whether the code needs to be brought up to standard in areas beyond that in the MCR. Our standards change over time, and when one change is made, it may provide the best opportunity to make others. (John Gintell used to call this "turning the plant.")

[John Gintell (JWG) wrote:] I think it would be good to point out that good comments, good choice of variable names and subroutines, etc. were emphasized as things to look for. Programs will be read and possibly modified by many people during their life so clarity is very important.

For programs that have no prior version in the system libraries, the auditor will examine the new code thoroughly, looking for standards compliance, understandability, appropriateness, and maintainability.

Each program should compile without any warnings, should have adequate commentary so that future maintainers can understand what it is doing, and so on. Auditors should look for and comment on such issues. The programmer might be unwilling to spend the time to deal with all of them, and a respectful dialogue often ensues.

Harder Auditing Steps

  • Similar to others?
  • Structure OK?
  • Boundary cases?
  • Fail gracefully?
  • Restart & embed?
  • Cost & size reasonable?

The auditor can go beyond the mechanical steps, and really dig into the program, when this is appropriate. Instead of patching old code, it may be time to upgrade its mechanisms, bring it up to standard, and make it more robust. In some cases, the very least that should be done is to add a long comment to the code describing what should be done when resource availability allows.

Sometimes an auditor points out that multiple programs all do the same task; it may make sense to restructure the program so that the common processing can be provided as a utility package that would simplify many programs.

One area of program upgrade applies particularly to system commands: they should be "surroundable" by exec_coms or subsystems, meaning that they should do I/O through mechanisms, such as command_query_, that allow the environment to redirect or pre-supply data, should handle ON-conditions correctly, and so on.
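
Here's a rough sketch of what that looks like in PL/I. It is from memory: the query_info fields, the version number, and command_query_'s calling sequence are approximate and should be checked against query_info.incl.pl1 and the Subroutines manual, and the program name is invented:

  delete_thing: proc (path);

  dcl path char (*);
  dcl answer char (4) varying;
  dcl command_query_ entry options (variable);
  dcl addr builtin;
  dcl cleanup condition;
  dcl 1 query_info aligned,   /* fields approximate; see query_info.incl.pl1 */
        2 version fixed bin init (4),
        2 yes_or_no_sw bit (1) unal init ("1"b),  /* insist on a yes/no answer */
        2 suppress_name_sw bit (1) unal init ("0"b),
        2 pad bit (34) unal init ("0"b),
        2 status_code fixed bin (35) init (0),
        2 query_code fixed bin (35) init (0);

       on cleanup begin;      /* tidy up if the user quits out of the query */
            end;

       /* Going through command_query_ rather than reading user_input
          directly lets an exec_com or surrounding subsystem redirect the
          question or pre-supply the answer. */
       call command_query_ (addr (query_info), answer, "delete_thing",
            "Really delete ^a?", path);

       if answer = "yes"
       then ;                 /* ... perform the deletion ... */

  end delete_thing;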

Testing

  • Auditor may test
  • Auditor may ask programmer to test
  • Run standard tests with profile
  • Other approaches

If a submitted program is easily run in a user process, it's reasonable for the auditor to try it out to get an impression of how it works, or see if a bug exists.
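
As best I recall the tooling, "with profile" meant compiling the program with the PL/I compiler's -profile control argument, running the tests, and then using the profile command to display per-statement execution counts, showing which paths the tests actually exercised. Roughly (the program and test-driver names are invented):

  pl1 parse_cmd -profile
  parse_cmd_test
  profile parse_cmd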

Testing is an area where we were not thorough enough in Multics in the 70s. Some subsystems had well-designed testing disciplines, but we could have done much more. The B2 effort in the mid 80s included building substantial testing machinery.

References

M. E. Fagan, "Design and code inspections to reduce errors in program development," IBM Syst. J., vol. 15, no. 3, pp. 182-211, 1976.

IBM, Inspections in Application Development: Introduction and Implementation Guidelines, GC20-2000-0, August 15, 1978.

Michael E. Fagan, "Advances in software inspections," in Software Pioneers: Contributions to Software Engineering, pp. 609-630, Springer-Verlag New York, Inc., New York, NY, USA.

John Gintell, John Arnold, Michael Houde, Jacek Kruszelnicki, Roland McKenney, and Gerard Memmi, "Scrutiny: A collaborative inspection and review system," in Proceedings of the Fourth European Software Engineering Conference, Garmisch-Partenkirchen, Germany, September 1993.

Google Mondrian, http://www.niallkennedy.com/blog/2006/11/google-mondrian.html (Guido van Rossum's project from 2006).

Original slides 7 Aug 1979 THVV
page created 7 Feb 2016 THVV
comments from JWG 9 Feb 2016
comments from GCD 14 Feb 2016
comments from MND 13 Mar 2016