A Perspective on Keykos

Conventional systems, even the then-new ‘timesharing systems’, made too many assumptions about how the hardware was to be used. We and our customers had many ideas about how to use computers that the hardware supported but that the extant kernels precluded.

An engineering goal of Keykos was to subject a great deal of OS function to a protection discipline, so as to allow safe, rapid development in that area and thereby implement these new capabilities (pun intended). Capability discipline was the only protection mechanism we knew of that could do this. The resulting kernel design could be described as a tight-fitting glove over the hardware: the same shape as the hardware, yet ‘virtualized’ in several senses.

The conventional virtual machine was then being commercialized by IBM, and we exploited that technology to develop our kernel. But while notions of virtualization applied, ours was not a virtual-machine plan. Conventional virtual machines, then and now, do not allow sharing memory between VMs except in a few very restricted cases that occurred to the VM designers.

The Keykos domain was essentially access to the problem (non-privileged) mode of the CPU.
The Keykos segment was a construct for building the flexible memory access that the new memory maps provided but that the conventional kernels monopolized and emasculated.
Kernels also thought they knew best how IBM’s various IO devices should be presented to the user, often limiting functionality that the kernel designer had not understood.

Capabilities, or what are more recently called ‘object capability’ design patterns, allowed new and experimental abstractions, built upon these raw hardware features, to be presented to the application programmer, or let that programmer roll his own if he knew better. All of this function, normally found in the monolithic kernel, could be developed without impacting the integrity of other mission-critical applications on the same machine that the experimental app had to share for timely access to data. This was very successful in the case of a database benchmark, where a modest amount of low-level code was able to harness the real hardware to meet the benchmark requirements. Conventional database systems were held at an offset from the hardware by conventional monolithic kernels. We performed better than any other system that we could find data on.
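
The kernel enforced this discipline; the pattern itself is easy to sketch. Below is a minimal, hypothetical illustration in C, with invented names, where programmer convention stands in for the unforgeability that the real kernel provided: application code is handed exactly the object references (capabilities) it needs, may invoke them, and has no ambient authority to reach for anything else.

    /* Hypothetical sketch of the object-capability pattern.
     * None of these names come from Keykos; in the real system the
     * kernel, not the compiler, made capabilities ("keys") unforgeable. */
    #include <stdio.h>

    /* An object pairs private state with the operations a holder may invoke. */
    typedef struct logger {
        char prefix[16];
        void (*log)(struct logger *self, const char *msg);
    } logger;

    static void log_impl(logger *self, const char *msg) {
        printf("[%s] %s\n", self->prefix, msg);
    }

    /* A "capability" here is just a reference handed to the client.
     * The client can use it but cannot conjure authority it was not given. */
    typedef logger *logger_cap;

    /* Application code: it receives the one capability it needs,
     * rather than reaching for system-wide, ambient authority. */
    static void run_app(logger_cap audit) {
        audit->log(audit, "application started");
        audit->log(audit, "application finished");
    }

    int main(void) {
        logger audit = { "audit", log_impl };
        run_app(&audit);   /* grant exactly one capability, nothing more */
        return 0;
    }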

Flexible access to raw hardware function meant that the affordances of conventional kernels could be built one at a time, or packaged together to suit the needs of complex, valuable software built to rely on such affordances. Compilers and many of the small command-line utilities of Unix could be supported with a spartan set of Unix kernel calls. Applications composed of many processes communicating as the Unix designers intended could be supported as well, but we did not move very far in this direction. We ran the relocatable binary version of X11 without even recompiling it. A Keykos segment could have another, simultaneous face as a Unix file, for sharing between the new and old worlds.
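
As an illustration of that two-faced idea (again hypothetical, with invented names rather than the actual Keykos or Unix-emulation interfaces), a single backing object can be presented both as a flat segment addressed by offset and as a Unix-style file with a read cursor, so that old and new code operate on the same bytes:

    /* Hypothetical sketch: one backing store, two simultaneous views.
     * The names (store, seg_store, file_read) are invented for
     * illustration and are not the real Keykos interfaces. */
    #include <stdio.h>
    #include <string.h>

    #define STORE_SIZE 4096

    typedef struct store {
        unsigned char bytes[STORE_SIZE];   /* the shared backing bytes */
    } store;

    /* Segment face: addressed by absolute offset, no cursor. */
    static void seg_store(store *s, size_t off, const void *src, size_t len) {
        if (off + len <= STORE_SIZE) memcpy(s->bytes + off, src, len);
    }

    /* File face: a Unix-style cursor over the very same bytes. */
    typedef struct file_view {
        store *backing;
        size_t pos;
    } file_view;

    static size_t file_read(file_view *f, void *dst, size_t len) {
        if (f->pos + len > STORE_SIZE) len = STORE_SIZE - f->pos;
        memcpy(dst, f->backing->bytes + f->pos, len);
        f->pos += len;
        return len;
    }

    int main(void) {
        store s = {{0}};
        file_view fv = { &s, 0 };
        char buf[6] = {0};

        seg_store(&s, 0, "hello", 5);   /* write through the segment face */
        file_read(&fv, buf, 5);         /* read the same bytes as a "file" */
        printf("%s\n", buf);
        return 0;
    }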

A whole class of novel security arrangements between mutually suspicious users became possible.