This is a continuation of Calculation Order.
See Mark Miller’s description of this pattern in E.
The event loop first came to my attention, by that name, in
the design of interactive window systems.
It can be traced, however, back to command line interpreters
which became common with timesharing systems.
The idea is that there is a task to be done but that the user takes
the initiative.
The system undertakes to execute brief commands specified
by the user and then to wait for the next command.
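The shape of this loop is simple enough to sketch. Here is a minimal
command loop in C; the prompt, the "quit" command, and the echoing
dispatch are all hypothetical stand-ins for whatever a real shell or
editor would do:

    #include <stdio.h>
    #include <string.h>

    /* A minimal sketch of the command loop shared by shells, editors,
       and window systems: read one brief command, act on it, wait. */
    int main(void) {
        char line[128];
        for (;;) {
            fputs("> ", stdout);                /* the user has the initiative */
            if (!fgets(line, sizeof line, stdin))
                break;                          /* end of input ends the session */
            line[strcspn(line, "\n")] = '\0';   /* trim the newline */
            if (strcmp(line, "quit") == 0)
                break;
            printf("executing: %s\n", line);    /* a real system dispatches here */
        }
        return 0;
    }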
This pattern was at the heart of most early timesharing systems.
Of course sometimes an extensive calculation would be required and
the command would not finish quickly.
This was the genesis of what Unix came to call the shell.
Some applications, such as the editor, used this pattern too, even when
the current application state was not manifest on the screen.
Remember teletypes.
Debuggers and some dynamic language systems used the pattern.
Some languages such as Scheme are defined in the context of a Read-Eval-Print loop.
The nominal goal was to produce a program in a flat file, but with
immediate response to experiments by the user to determine consequences
of changes to the program.
Without the state manifest on the screen, the user would ask for portions
of the state to be printed, much as you now scroll.
Some very early “interactive” programs would supply their own agenda and
frequently ask the user’s decision on some arcane point.
If the user did not understand the point of the question, the user’s only
choice might be
to abort the process in order to use the computer to probe
the meaning and ramifications of responses to the question.
This was not good.
The introduction of graphic displays (CRTs) enhanced this pattern.
It became especially
suitable to let the user take the lead, for it was hopefully graphically
evident to the user what remained to be done toward an ultimate goal
of which the computer need not be aware.
This plan required the program to support a richer set of states
for the user to navigate thru, and more state transition options.
It was necessary, for instance, to let the user insert a character
in a file, implicitly moving subsequent characters.
For many tasks this was not a burden.
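What that insertion demands of the program can be sketched in a few
lines of C; the function name and calling convention are hypothetical,
and the buffer is assumed to have room for one more character:

    #include <string.h>

    /* Insert character c at position pos of buf, which holds len
       characters; the tail of the buffer moves up by one implicitly. */
    static void insert_char(char *buf, size_t len, size_t pos, char c) {
        memmove(buf + pos + 1, buf + pos, len - pos);  /* shift the tail */
        buf[pos] = c;
    }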
Keykos uses its domain construct as a protection domain.
It can occupy as little as 5KB in fairly realistic situations.
It can be invoked in as little as a few hundred machine cycles.
Its primary message-passing protocol is synchronous, or blocking.
The CPU that obeys a program that sends a message, as in a call,
next obeys the privileged code that transfers the message to the recipient
in another protection domain, and then typically continues immediately
to obey the recipient’s code that interprets the message.
The sender typically awaits a message which returns by a nearly
identical process.
Each of these messages consumes just a few hundred machine cycles
to cross protection boundaries.
What is significant here is that this mechanism can be used to support
the familiar subroutine pattern in terms of which
our store of Computer Science algorithms is understood.
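To make that correspondence concrete, here is a sketch in C. The
message layout and the kernel_call stand-in are hypothetical; the
kernel’s few-hundred-cycle boundary crossing is modeled here as an
ordinary function call, which is exactly the point:

    #include <stdio.h>

    typedef struct { int opcode; int operand; } Msg;

    /* The recipient domain: it interprets a message and produces a
       reply, just as a subroutine body would. */
    static Msg server_domain(Msg request) {
        Msg reply = { 0, request.operand * 2 };   /* some service */
        return reply;
    }

    /* Stand-in for the kernel's message transfer across protection
       boundaries; the sender blocks while the recipient runs. */
    static Msg kernel_call(Msg (*recipient)(Msg), Msg request) {
        return recipient(request);
    }

    int main(void) {
        Msg request = { 1, 21 };
        Msg reply = kernel_call(server_domain, request);  /* the call */
        printf("reply: %d\n", reply.operand);             /* the return */
        return 0;
    }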
Our fairly substantial experience with Keykos did not lead to frequent
deadlock problems.
This is not surprising, for
the semantics of our call mechanism closely matches that of Fortran,
whose call semantics rule out reentrancy and recursion.
Keykos provides recursion with
another pattern, and reentrancy is naturally provided by multiple
domains that obey the same code and incidentally provide the logical
separation that is typically indicated for other reasons.
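That style of reentrancy can be sketched in C: the same code serves any
number of instances because each instance carries its own state. The
state layout and the serve routine are hypothetical:

    #include <stdio.h>

    /* Each "domain" owns its private state; all obey the same code,
       so reentrancy comes from instantiation, not from a reentrant
       stack discipline. */
    typedef struct { int counter; } DomainState;

    static int serve(DomainState *self, int input) {
        self->counter += 1;            /* private state, never shared */
        return input + self->counter;
    }

    int main(void) {
        DomainState a = {0}, b = {0};  /* two domains, one body of code */
        printf("%d\n", serve(&a, 10)); /* prints 11 */
        printf("%d\n", serve(&b, 10)); /* prints 11, independent of a */
        return 0;
    }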
Massively parallel systems such as Blue Gene have yet other requirements.
Experience at Livermore with progenitors of such a system indicates that
it is very difficult to get a significant part of the theoretical
floating-point performance for physics mesh calculations.
This reminds me of the early experience with Atlas, which had the first demand
paging system.
Problems where signals travel rapidly about the mesh produce
many page faults, which leave the CPU gasping for useful things to do.
Atlas did multiprogramming, which would let the CPU work on one job while
the I/O responded to a page fault for another job;
but multiprogramming cuts down the real main store available to
a particular application: a cruel dilemma.
The same tradeoff afflicts the blue gene design.
If the tradeoff there indicates multiprogramming, then multiple stacks
within one CPU are indicated.
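What multiple stacks within one CPU might look like can be sketched
with POSIX ucontext; the yield points stand in for page faults, and the
round-robin scheduler is purely illustrative:

    #include <stdio.h>
    #include <ucontext.h>

    /* Two stacks within one CPU: when one "job" would stall (at the
       yield points standing in for page faults), the other runs. */
    static ucontext_t scheduler, job[2];
    static char stack[2][16384];
    static int current;

    static void yield(void) { swapcontext(&job[current], &scheduler); }

    static void work(void) {
        int self = current;
        for (int step = 0; step < 3; step++) {
            printf("job %d, step %d\n", self, step);
            yield();                   /* "page fault": give up the CPU */
        }
    }

    int main(void) {
        for (int i = 0; i < 2; i++) {
            getcontext(&job[i]);
            job[i].uc_stack.ss_sp = stack[i];
            job[i].uc_stack.ss_size = sizeof stack[i];
            job[i].uc_link = &scheduler;
            makecontext(&job[i], work, 0);
        }
        for (int round = 0; round < 3; round++)   /* give each job three turns */
            for (current = 0; current < 2; current++)
                swapcontext(&scheduler, &job[current]);
        return 0;
    }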
Adapting Keykos to such hardware would not be done by mapping Keykos domains to CPUs. That would be fatal.
It would seem that some sort of system software technology is indicated
for massively parallel systems for just the same reason that it is indicated
for uniprocessors.
We may need multiprogramming to attain more than a few percent of
the theoretical performance of the hardware, and putting multiprogramming
in the application is bad for many familiar reasons.