{nonref}Analogies with Other Systems
{nonref}Things that Gnosis Does Differently
Reclaiming Storage
"Subpool" is a term borrowed from OS/360. When one asked for storage one could specify a small integer designating a subpool. Another call would free all of the storage allocated within a designated subpool. This scheme also helps avoid the fragmentation that normally occurs after explicit freeings of storage. A real 2K piece of main storage is devoted exclusively to one subpool.
Explicit freeing is where the program that allocated the storage includes code to explicitly deallocate it. This is typically the most efficient scheme but requires more code to be written. It also requires code to sense when the allocated storage is no longer required; this may, in fact, come to pass before all references to the storage have been lost. Explicit freeing may thus recover storage sooner than garbage collection.
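A minimal sketch of the subpool idea, in Python rather than OS/360 assembler (the class and call names here are illustrative, not any real interface): each subpool owns whole 2K blocks exclusively, small requests are carved from the subpool's current block, and freeing the subpool releases every block at once.

```python
# Toy subpool allocator in the spirit of the OS/360 scheme described
# above.  Each subpool draws whole 2K blocks devoted exclusively to it;
# freeing the subpool releases all of them at once, avoiding the
# fragmentation of per-request freeing.  Requests are assumed to be
# smaller than one block.

BLOCK = 2048  # a "real 2K piece of main storage"

class SubpoolAllocator:
    def __init__(self):
        self.pools = {}  # subpool id -> (list of blocks, offset into last block)

    def getmain(self, size, subpool):
        """Allocate `size` bytes within the designated subpool."""
        blocks, offset = self.pools.get(subpool, ([], BLOCK))
        if offset + size > BLOCK:            # current block exhausted
            blocks.append(bytearray(BLOCK))  # devote a fresh 2K block to this subpool
            offset = 0
        addr = (len(blocks) - 1, offset)     # (block index, byte offset)
        self.pools[subpool] = (blocks, offset + size)
        return addr

    def freemain_subpool(self, subpool):
        """Free all of the storage allocated within the designated subpool."""
        self.pools.pop(subpool, None)
```

Note that no per-allocation bookkeeping is needed for the bulk free; dropping the subpool's block list is the whole operation.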
{arcane}The Synthetic Domain Key Question
There are several reasons why the real domain key must never be used when an OS simulator is keeping a domain. These same arguments probably extend to DDT's as domain keepers and to other domain keepers. This discussion might best be titled "considerations for domain keeper programmers" or "domain key etiquette". See (syndom) for one way to get a synthetic domain key.
One reason for not using the real domain key when a keeper is involved is that the keeper can be displaced with a DOMAINSWAP+2 call on the domain key. This call could have disastrous effects on both the operation of the domain and the state of the domain keeper.
Another reason is that, with the real domain key, it is possible to operate on the domain at the same time as the domain keeper does. If both "users" of the domain key decide to modify the PSW at the same time, the results are unpredictable because each "domain keeper" has an inconsistent view of the domain's status.
The current technique for solving both problems is to have a synthetic domain key which is a start key to the domain keeper {in the OS simulator case}. This solves both problems: the domain keeper can intercept the DOMAINSWAP+2 call and make sure that it is the first domain to receive control when the domain traps, and, as the only legitimate holder of the real domain key, it serializes requests on the domain.
The problem introduced by this technique may be worse than the original problem. Now there must be a synthetic Domain Creator key so that calls to the domain creator will return the synthetic domain key instead of the real one. This is perhaps only the beginning of the problem. Several months ago this problem was unknown, and while there are programming technologies that solve the problem now, there may not be such technologies one level removed {synthetic Domain Creator Creator Keys?}.
A Proposal
To become the keeper of a domain one must first examine, and possibly call, the current domain keeper key. If the domain keeper is not DK(0), then DOMAINKEEPER(2 ==> 0;,,,DOMKEEPEREXIT) will return with the called domain keeper busy (it has called back), guaranteeing a situation in which the new domain keeper can swap domain keeper keys and put its own start key in the domain keeper slot. When the swap is complete and the new domain keeper is ready to let the domain run (in case it has trapped in the meantime), DOMKEEPEREXIT is FORKed so that the old domain keeper can be made ready for traps.
A similar procedure is used when a domain keeper wishes to step out of the way.
What does the OS simulator {or any other} do when there is no further domain keeper in the chain {DK(0)}? The OS simulator counts on being first in the chain, so that if it is unable to handle the trap it passes it on to the next {DDT} domain keeper. When the OS simulator installed itself as a domain keeper, it most likely displaced a virtual domain keeper that connects to some DDT. Is it too much to ask that this be the way it is handled? Is it too confusing to the "user" to have two DDT's on a program? After all, the only people able to install DDT's will be the builders. Don't they know better? There must be a better resolution to this difficulty.
Second, a new domain keeper can provide to an old domain keeper a key to call if the old domain keeper does not want the trap. In this way a strange trap would seek a DDT, which would stop handing back traps. Traps would go through a DDT to an older domain keeper, perhaps all the way to the end of the chain. The trap would then be handed back to each domain keeper in reverse order until one of them decides to handle it on the way back. What if there is no one willing to handle it? What happens now?
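The routing just described can be modeled in a few lines. This is a toy, not kernel code; the keeper functions and phase names are invented for illustration.

```python
# Toy model of the hand-back scheme: a trap travels down the chain of
# domain keepers, and if nobody claims it on the way down it is handed
# back in reverse order until some keeper decides to handle it.

def route_trap(keepers, trap):
    """keepers: newest keeper first.  Each keeper is a callable taking
    (trap, phase) and returning True if it handles the trap then.
    Returns the index of the handling keeper, or None if nobody will."""
    for i, keeper in enumerate(keepers):          # down the chain
        if keeper(trap, 'forward'):
            return i
    for i in range(len(keepers) - 1, -1, -1):     # hand back in reverse order
        if keepers[i](trap, 'return'):
            return i
    return None    # no one willing to handle it: the open question above

# Example chain: an OS simulator in front of a DDT.  The DDT "stops
# handing back traps" by accepting anything on the return pass.
def os_sim(trap, phase):
    return phase == 'forward' and trap == 'syscall'

def ddt(trap, phase):
    return phase == 'return'
```

With this chain, a simulated supervisor call stops at the OS simulator, while a strange trap falls through to the DDT; a chain with no DDT yields `None`, which is exactly the unresolved case the questions above point at.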
See (p2,synkeys) about this.
We enumerate here some of these uses, at least one of which seems essential.
Efficiency (Conserving Time)
We use such schemes to pick up data that has arrived from the network and is needed by the user's process.
If some space bank is to have sub-banks on either side of some security barrier, a program on the top side could buy many pages and sell back those whose key had some particular bit on. A program on the bottom side could buy pages and notice that that bit was usually on in the page's key. If the keys are inscrutable, the sole perceptible mutable bank attribute is the number of pages (and nodes) available therefrom, and even the bandwidth provided thereby can be greatly limited by using the bank limit feature {(p2,lc)}.
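The residual channel can be made concrete with a toy model (not Gnosis code; the class and function names are invented): with keys inscrutable, only the page count is observable, so the top side can still signal one bit per step by buying or not buying a page. The bank limit feature would throttle exactly this traffic.

```python
# Toy model of the count channel through a shared space bank.  The top
# side modulates the number of available pages; the bottom side samples
# that count, the sole perceptible mutable bank attribute.

class SpaceBank:
    def __init__(self, pages):
        self.available = pages
    def buy_page(self):
        self.available -= 1
    def sell_page(self):
        self.available += 1

def send_bits(bank, bits):
    """Top side: a bought page signals a 1; the count is sampled once
    per step (by the bottom side) and then restored."""
    samples = []
    for b in bits:
        if b:
            bank.buy_page()
        samples.append(bank.available)   # what the bottom side observes
        if b:
            bank.sell_page()             # restore the count for the next step
    return samples

def read_bits(samples, baseline):
    """Bottom side: a dip below the baseline count reads as a 1."""
    return [1 if s < baseline else 0 for s in samples]
```

The bits flow even though neither side sees the other's keys, which is why the bandwidth limit, rather than key inscrutability alone, is the operative defense.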
The bits of a key may change over time; indeed they recently stretched from 64 bits to about 88. It is difficult to write programs that work correctly across such changes. We made this effort in the case of the KID.
Design Bugs {serious design problems that require some effort}
Specify the behavior of the KID upon disappearance of the object designated by a key of the KID.
Independently, it may be feasible to allow limited cascading of the keys, with a limit similar to that imposed on meter trees.
Another idea that might be incorporated here is the "process distributer". There might be several start keys in the node, and the kernel would apply some strategy to select one to be invoked. The kernel would presumably select a start key that led to an available domain whose root was already in core. To combine this with the above idea would require a pair of slots, one to hold the start key and the other to receive the extra passed key. A format key reminiscent of the segment format key could control this.
In service of a FIFO queue the above might be used as follows: The first key leads to the primary consumer of the queue. When that is busy, the second key leads to domain A of a queue manager. Manager A accepts and queues the message, forks the return key to the producer, and then returns to a resume key to domain B of the manager, who holds a start key to the real consumer. B calls the consumer and is blocked until the consumer becomes available. Domain A accepts messages for the queue, and B delivers them asynchronously until the queue becomes empty, whereupon B calls its resume key to A, which turns off the bit in the queue switch.
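A toy synchronous model of this scheme (domains, keys, and forking are collapsed into one class, so this illustrates only the control flow, not the real mechanism): the primary path delivers straight to the consumer; when the consumer is busy, the A half queues the message and lets the producer continue at once, and the B half drains the queue as the consumer frees up.

```python
# Toy model of the two-halved queue manager.  `busy` stands in for the
# "queue switch" bit; send() plays the roles of the primary start key
# and manager half A, drain() plays manager half B.

from collections import deque

class QueueManager:
    def __init__(self, consumer):
        self.consumer = consumer
        self.busy = False            # the queue-switch bit
        self.queue = deque()

    def send(self, msg):             # what the producer invokes
        if not self.busy:
            self.busy = True
            self.consumer(msg)       # first key: consumer was available
            self.busy = False
            self.drain()             # B: deliver whatever arrived meanwhile
        else:
            self.queue.append(msg)   # A: queue it and return to the producer

    def drain(self):                 # B's asynchronous delivery loop
        while self.queue:
            self.consumer(self.queue.popleft())
```

A message arriving while the consumer is occupied is queued rather than blocking the producer, and is delivered as soon as the consumer returns, preserving FIFO order.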
One implementation of this idea looks just a bit like the distributer idea above {(dist)}.
A one level tree has one node, which is numbered 1. An L+1 level tree has its top node numbered 2**L, the nodes of its left subtree numbered as an L level tree, and the nodes of its right subtree numbered with offset 2**L.
A tree key will report its level L. T(0,(16,M);==>L;TS) returns T's level L and produces key TS to T's sub tree at M if 0 < M < 2**L. Otherwise L=0 and TS=DK(0). T(1;TS==>c,(16,M)) identifies TS as a sub tree of T if it is and returns M, the position of TS within T.
One technique is to allocate an array of 2**20-1 pointers and buy a level 20 tree T. The 2**20-1 level one tree keys under T can serve to denote these respective pointers by using the response to the identify operation on T as an index into the array. The pointers in turn directly locate the object representation in virtual memory.
Note that such trees cannot be reused, so it is important that there are many of them; 2**88/2**20 is very many. You can buy trees but not sell them. Perhaps they should cost about 10**(-6) cents per bottom node. About 10**18 dollars would be derived from the sale of all of the trees.
There are three common categories of source for the data to be emitted: Exotic IO device, internal segment, data decompression algorithm. I shall not discuss the algorithm here. The ideas at (lcc) may bear on that issue.
Digital paper has gotten some press lately and can presumably store data at CD-like densities in hundred- or thousand-foot lengths, which pushes 10**13 bits per reel. This is based on a print-like technology where mastering is as old as the hills. Micron-scale print features are unfamiliar to printers but seem to be possible. Such devices are fast enough and their speeds are controllable within adequate limits. Some form of forward error control and error toleration should be suitable.
Imagine a generalization of a channel program that involves (at least) two devices and is a (read A; write B) loop. This could run for many minutes thru checkpoints. A might be an exotic input device and B a writable CD. The channel program might need a count variable.
If the input data were in a segment, the situation becomes more complex but still feasible. The channel program might be "FOR i TO 100000 DO read_cda(i); write OD". This presumes that the producer of the data held a range key so as to put the data there. Special precautions would also be necessary to ensure sufficient access to the kernel's swapping channel, controller, and device. This is not trivial, but it is not impossible.
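The loop itself is simple; here is a sketch in Python (read_cda and the output device are hypothetical stand-ins, and the real thing would run in the channel, not a CPU program):

```python
# Sketch of the generalized channel program: a (read A; write B) loop
# with a count variable, as in "FOR i TO 100000 DO read_cda(i); write OD".

def channel_program(read_cda, write_block, count):
    """Copy blocks 1..count from the input source to the output device."""
    for i in range(1, count + 1):
        block = read_cda(i)     # read from the segment (or exotic device A)
        write_block(block)      # write to the writable CD (or device B)
```

For the two-device case of the previous paragraph, read_cda would simply be a read of device A rather than a fetch from a segment.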
One must capture the states of changed pages and nodes, but not necessarily simultaneously. Relativity replaces the concept of simultaneity (for some purposes) with that of a light cone. The light cone divides space-time into two parts, early and late, such that no late event can affect an early event. The checkpoint requirement is that no checkpointed state of one page or node be causally earlier or later than that of another. If a message flows from A to B, then B's subsequent state is causally later than A's prior state.
The flexibility of this generalization can be used in two ways. The checkpoint work can be further spread out. The kernel must prevent a checkpointed object from influencing an uncheckpointed object; when such an attempt is made, the kernel either defers the action or checkpoints the uncheckpointed object. Which to choose is a strategic issue that needs to be influenced somehow by the application. Perhaps an attribute of a superior meter might be the clue.
The result is that parts of some applications may exempt themselves from the pauses due to checkpointing. Perhaps they can even cause local checkpoints strategic to their own needs. It is a time travel exercise to design a local checkpoint.
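The causal rule can be made concrete with a toy model (not the kernel; names are invented). Of the two strategies mentioned above, this sketch takes the second: when a checkpointed object is about to influence an uncheckpointed one, the receiver is checkpointed on demand, keeping the captured states mutually consistent.

```python
# Toy model of incremental checkpointing under the causal rule: no
# checkpointed state may influence an uncheckpointed one.  Deferring
# the action, the other strategy, is not modeled here.

class Checkpointer:
    def __init__(self, objects):
        self.state = dict(objects)   # object name -> current state
        self.saved = {}              # object name -> checkpointed state

    def checkpoint(self, name):
        self.saved[name] = self.state[name]

    def send(self, src, dst, new_state):
        """A message from src makes dst's next state causally later
        than src's current state."""
        if src in self.saved and dst not in self.saved:
            self.checkpoint(dst)     # capture dst before the influence lands
        self.state[dst] = new_state
```

Because the receiver is captured before the message takes effect, no saved state is causally later than another saved state, which is the light-cone condition stated above.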
Infusion of new versions of software into a sensitive production application must be carefully managed. On rare occasions existing objects must be repaired. It may be that the bug has not yet damaged the state of a particular object but there is a chance that it will. If the state of this object is important, special steps may be necessary. Some types of objects support copying.
.....