This is the story of the big bang process of Keykos.
The big bang is in contrast to the much more frequent restart from a checkpoint, which involves no act of creation but merely reads active material from disk back into RAM.
The big bang was originally designed to work well with VM-370, the development environment we used before Keykos provided its own.
We would load a kernel into the addresses intended for normal VM-370 application code.
In the same address space was CMS, a single-user OS with no protection, running in its own virtual machine.
Along with the kernel was loaded:
- a few extra routines (called init code here) not included in the production kernel
- Item space, produced by the assembler from source consisting of invocations of macros that described primordial pages and nodes.
Item space includes the core table and the node frames that hold the nodes the kernel accesses.
- Plist, a list of CMS file names whose contents populated the primordial pages.
The Plist named the pages by reference to the core table.
The init code read the Plist and created the core table entries for the pages holding the data from the files.
These macros included higher-level forms whose semantics hid most of the complexity of constructing primordial domains and segments.
The init code would read the Plist to find file names and, using the CMS file system, read their contents into page frames destined to be recognized as such by the kernel.
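A minimal sketch of that loop, in C for consistency with the later Unix-era code; every name and type here (plist_entry, core_entry, read_file, and so on) is invented for illustration, not taken from the Keykos sources:

```c
/* Hypothetical sketch of the init-code loop that fills primordial
   pages from the Plist.  All names and types are invented. */

#define PAGE_SIZE 4096

struct plist_entry {
    char file_name[64];   /* host file holding the page's initial content */
    int  core_index;      /* which core table entry / page frame to fill */
};

struct core_entry {
    int in_use;           /* kernel will treat this frame as a real page */
};

extern struct plist_entry plist[];
extern int                plist_count;
extern struct core_entry  core_table[];
extern char               page_frames[][PAGE_SIZE];

/* read_file stands in for whatever the host provides; in the VM-370
   era this was the CMS file system. */
extern int read_file(const char *name, void *buf, int len);

static void populate_primordial_pages(void)
{
    for (int i = 0; i < plist_count; i++) {
        struct plist_entry *e = &plist[i];
        read_file(e->file_name, page_frames[e->core_index], PAGE_SIZE);
        core_table[e->core_index].in_use = 1;  /* now a recognized page */
    }
}
```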
Item space was already loaded at suitable addresses.
This initial item space included node frames, already filled in with the unprepared primordial nodes holding unprepared keys.
The initial item space also included the core table and the ultimate queue heads for the kernel.
The init code also noticed nodes with processes in them and initialized the kernel’s CPU queue.
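That pass might have looked roughly like the following sketch; the node_frame layout, the has_process flag, and the queue discipline are all invented for illustration, the real Keykos test being more involved:

```c
/* Hypothetical sketch of the pass that finds primordial processes.
   A node frame marked as holding a process is linked onto the
   kernel's CPU queue. */

struct node_frame {
    int                has_process;  /* invented marker for illustration */
    struct node_frame *next_cpu;     /* CPU-queue link */
};

extern struct node_frame  node_frames[];
extern int                node_frame_count;
extern struct node_frame *cpu_queue_head;

static void build_cpu_queue(void)
{
    for (int i = 0; i < node_frame_count; i++) {
        if (node_frames[i].has_process) {
            /* push onto the kernel's CPU queue */
            node_frames[i].next_cpu = cpu_queue_head;
            cpu_queue_head = &node_frames[i];
        }
    }
}
```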
When the init code finished, the kernel invariants were in place and the kernel was entered without further ado.
CMS and its data remained in RAM but were unused.
From this instant capability discipline was enforced by the kernel, but the early domains had sufficient authority, in the form of primordial keys, to create the world.
These domains worked in patterns described here.
For too long we added function to the system by adding primordial domain code.
It was too easy to just create new function as we had created the primordial function.
It required a big bang each time we wanted to add new code.
The kernel ran in RAM this way for several months before it learned to use disks.
When it learned, it eventually took a checkpoint.
At that point we could kill the virtual machine, boot a kernel without the init code, item space, or Plist, and restart from a checkpoint.
This new kernel could run on a real or virtual machine.
Starting From Unix
When we moved to the world where Unix was the boot development environment, there was no powerful assembler macro processor such as had come with the IBM world.
We did a faithful translation of the macros into C code that runs just after being loaded with the kernel.
This code had a global cursor for emitting nodes into item space and another cursor for pages, both advancing at run time.
At the end the situation was much like having run the 370 init code.
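A small sketch of that cursor scheme, with invented names and layouts; the real C translation emitted unprepared nodes and page images whose exact formats are not shown here:

```c
/* Hypothetical sketch of cursor-style emission into item space.
   Two bump allocators, one for node frames and one for page frames,
   hand out the next free slot.  All names are invented. */

#include <string.h>

#define PAGE_SIZE     4096
#define KEYS_PER_NODE 16

struct key  { unsigned char bytes[16]; };     /* unprepared key image */
struct node { struct key slot[KEYS_PER_NODE]; };

extern struct node node_space[];
extern char        page_space[][PAGE_SIZE];

static int node_cursor;   /* next free node frame */
static int page_cursor;   /* next free page frame */

static struct node *emit_node(const struct node *image)
{
    struct node *n = &node_space[node_cursor++];
    *n = *image;                    /* copy in the unprepared node */
    return n;
}

static char *emit_page(const void *image, int len)
{
    char *p = page_space[page_cursor++];
    memset(p, 0, PAGE_SIZE);
    memcpy(p, image, len);          /* initial content, zero-padded */
    return p;
}
```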
At this point the system can run without a disk or even without a disk driver.
Most recently we used tftp, which was implemented in the ROM of the small SPARC box we were using.
I was annoyed by the way the newly loaded code was told which physical addresses were available on the system: it was given a linked list of contiguous address ranges.
What we needed was information about which physical address bits selected DIMM slots and how much space was in each slot.
That was the information we needed to form a core table index from the discontiguous ranges of physical addresses.
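As a hedged sketch of the problem, here is one way to number discontiguous ranges so the core table stays dense; the range walk in core_index stands in for the single shift-and-mask that per-slot address bits would have allowed. All names and types are invented:

```c
/* Hypothetical sketch: derive a dense core table index from a
   discontiguous set of physical address ranges (as delivered by the
   boot ROM).  Ranges are assumed sorted and page-aligned. */

#define PAGE_SHIFT 12

struct phys_range {
    unsigned long      base;    /* physical address of range start */
    unsigned long      len;     /* bytes in this range */
    unsigned long      first;   /* core table index of its first page */
    struct phys_range *next;
};

extern struct phys_range *range_list;

/* Give each range a starting core table index so the core table is
   dense even though physical memory is not. */
static void number_ranges(void)
{
    unsigned long index = 0;
    for (struct phys_range *r = range_list; r; r = r->next) {
        r->first = index;
        index += r->len >> PAGE_SHIFT;
    }
}

/* Map a physical address to its core table index, or -1 if it falls
   in a hole.  Knowing which address bits selected DIMM slots would
   have replaced this walk with a shift and mask. */
static long core_index(unsigned long pa)
{
    for (struct phys_range *r = range_list; r; r = r->next)
        if (pa >= r->base && pa < r->base + r->len)
            return r->first + ((pa - r->base) >> PAGE_SHIFT);
    return -1;
}
```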
In both of these scenarios, at the first instant the kernel gained control there were already many pages and unprepared nodes in RAM, and the disk was empty or absent.