These are extensions of the kernel that are implicitly used by users of the system.
See also (kernel-logic,hook).
The responsibility of the scheduler is to administer the gross utilization of real resources and, on occasion, to guarantee resources. Thus the scheduler is a program that handles a number of functions that normally reside in a kernel. These functions are not in the kernel because they are complex and we anticipate that they will evolve; indeed they are not yet completely designed.
See (gnosisdoc,scheduler,) for some ideas about the requirements of a scheduler.
The points below provide the conceptual background in which meters are seen to provide the hooks required by the scheduler. They record assumptions about the nature of a scheduler that were made when meters were designed.
The scheduler is in a position to keep track of the hierarchical relationships among the meters that it oversees for scheduling, since it created them all.
A new user is given {in his directory} a meter key to a multiplex meter {(p3,muxmet)} whose meter keeper is the scheduler.
A safe strategy for the scheduler is the following. At any one time, select a set R of meters such that the sum of the charge set limits of the members of R does not exceed the number of real page frames. The scheduler can observe the charge set sizes about once a second and discover when one has shrunk enough to make it profitable to add new members to R. When a charge set grows to its limit the scheduler is invoked and can remove members from R to prevent thrashing.
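As an illustration of this strategy only, a C sketch follows; every name in it is invented here, since the meter interface is not yet completely designed. The point is the invariant: the sum of the charge set limits of the members of R never exceeds the number of real page frames.

    /* Sketch only; nothing here is part of the actual meter interface. */
    #define NMETERS  64
    #define NFRAMES 512                /* real page frames on the machine */

    struct meter {
        int in_R;                      /* member of the selected set R? */
        int cs_limit;                  /* charge set limit */
        int cs_size;                   /* current charge set size */
    };

    struct meter meters[NMETERS];
    int committed;                     /* sum of cs_limit over members of R */

    /* Run about once a second: admit meters to R while the sum of
       charge set limits still fits in real page frames. */
    void grow_R(void)
    {
        for (int i = 0; i < NMETERS; i++)
            if (!meters[i].in_R && committed + meters[i].cs_limit <= NFRAMES) {
                meters[i].in_R = 1;
                committed += meters[i].cs_limit;
            }
    }

    /* Invoked when some charge set reaches its limit: shed a member
       of R to prevent thrashing. */
    void shrink_R(struct meter *full)
    {
        full->in_R = 0;
        committed -= full->cs_limit;
    }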
It will be possible to debug new versions of the scheduler in a running system, but we do not otherwise anticipate fruitful execution of more than one scheduler at a time. The scheduler, by the nature of its job, is in a position to deny service and to sense resource utilization; but, being outside the kernel, it is otherwise outside the user's security kernel.
See (p1,priority) about a scheduling problem and a proposed solution.
The Returner leads to the Kernel. If the key is an exit to a node which is in core, in the swap area, or mounted, the kernel performs the return. (N.B. If the node is mounted but becomes unmounted before it can be swapped in, we must restart the jump to the Returner without waiting for the node to be mounted.) Otherwise, the kernel treats the jump as a jump to an entry to the External Returner. The External Returner is not yet implemented. [Loose End: How is the External Returner introduced to the Kernel?] This entry is guaranteed to be prompt under certain conditions spelled out below. The External Returner is not in the security kernel because anyone who doesn't trust it can use his own returner and not use anything that uses the standard Returner (but this class will probably include all the tool holders, for instance). In other words, the Returner can be synthesized without making use of the real Returner.
When the External Returner is jumped to, it sets up a domain (from a supply which it holds) to do a fork jump to the key, passing the appropriate parameters. Before doing the jump, it checks, using DISCRIM or other means, that the key is an exit key. If it is not, it does not do the jump. {Consequently exit keys are not synthesizable by entry keys.} {If we allowed returns to any key, e.g., an entry to a busy domain, the External Returner would need a very large amount of storage for the domains jumping to such keys; and it would have to hoard that storage, because it could not call the space bank for it as needed: such a call might deadlock, and the space bank might not have the space anyway.}
If the key is an exit, the domain does the fork jump. If/when the jump completes, the domain returns itself to the supply. Meanwhile the main domain of the External Returner has been serving new clients.
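The following C-shaped sketch shows the shape of this loop. DISCRIM is the only operation named above; every other name and signature is hypothetical, standing in for kernel primitives this design does not yet specify.

    /* Sketch of the External Returner's main loop.  Worker domains come
       from a pre-created supply so that no space bank call (a possible
       deadlock) is ever needed.  All names below are invented. */

    struct key;                        /* the key the client returns to */
    struct domain;                     /* a worker from the supply */

    extern struct key    *await_return_request(void);
    extern int            discrim_is_exit(struct key *k);  /* via DISCRIM */
    extern struct domain *take_from_supply(void);          /* NULL if empty */
    extern struct domain *wait_for_returned_domain(void);
    extern void           fork_jump(struct domain *d, struct key *k);
    extern void           operator_message(const char *msg);

    void external_returner(void)
    {
        for (;;) {
            struct key *k = await_return_request();
            if (!discrim_is_exit(k))
                continue;              /* refuse: keeps exit keys
                                          unsynthesizable by entries */
            struct domain *d = take_from_supply();
            if (d == NULL) {           /* supply exhausted */
                operator_message("External Returner: out of domains");
                d = wait_for_returned_domain();
            }
            /* The worker does the (possibly slow) return jump and puts
               itself back in the supply when it completes; the main
               domain is immediately free to serve the next client. */
            fork_jump(d, k);
        }
    }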
If the supply of domains becomes exhausted, the External Returner prints a message to the operator and waits for a domain to be returned to the supply. (Design note: It would be possible to have the External Returner attempt to create a new domain in this case, using the domain tool and a space bank which is prompt and is not a client of the External Returner. But that would still leave the case in which space was not available from the bank, so there would be no real advantage.) We here establish a condition sufficient to guarantee that the External Returner never thus fails to be prompt. (It is clear that it can fail to be prompt in no other way.)
The supply of domains will fail to be promptly replenished only if all the domains fail to be prompt. They can delay only on the return jump. Since we know the keys being returned to are exits (or zero data keys), the only source of long delay is unmounted disk packs. If there are N domains originally in the supply, the External Returner will be slow only if it is processing returns to N nodes on unmounted packs. This can happen only if, at the time the pack was dismounted, the number of domains on the pack that were in the middle of receiving service from a client of the Returner not on the same pack was at least N.
The number of such domains is certainly no greater than the number of exit keys to nodes on the pack that are held in nodes on other packs (mounted or not). The details of calculating this number are given in the appendix.
Before a pack is voluntarily dismounted, it is necessary to increase the supply of domains held by the External Returner by the number defined in the above paragraph. This can be done because the space bank is prompt. (Exercise: prove this.) If the space is not available, then the pack cannot be dismounted. [Is this serious?] [Loose End: Who pays for this space? Operations?] When the space has been created, we can guarantee that the External Returner will remain prompt when the pack is dismounted. I believe that the number of domains needed will not be large. When the pack is remounted the space can be returned to the space bank if desired.
There will be occasions when a pack is involuntarily dismounted. The External Returner should have a supply of domains sufficient to survive most such occurrences. The number of domains on the pack which happen to be receiving service from a client of the Returner at the time the pack goes down is likely to be small, since most clients of the Returner are prompt. (But I anticipate making the Returner generally available.) If this number is large, then the pack going down represents a hardware failure from which the system cannot recover without backing up or fixing the disk. If it is very necessary to keep the system running at the expense of the residents of the pack, then the operator could sever all the nodes on the pack using a range key.
An alternative way of handling voluntary dismounting is to treat it the same as involuntary dismounting. Then, if the External Returner becomes stuck, the operator could remount the pack, allocate more storage to the External Returner, and try dismounting it again. However, if a new pack has been mounted in place of the old, recovery may not be possible. Since the clients of the Returner are not necessarily prompt, there is no finite length of time one could wait to know that the drive can be safely reallocated. Consequently this method is not recommended.
The decongester is not a client of the Returner. Space banks are.
The decongester probably needs to use a timer, but it cannot use (p2,waitc) because that is potentially a client of the decongester.
Appendix: Keeping track of exit keys
For each disk pack i which has ever been mounted there are two integer variables maintained by the pager, called NTO[i] and NFROM[i]. Consider all exit keys in nodes of which there is not a copy in core {that is, item space}; in other words, all exit keys in nodes whose only copies are in the swap area or on disk. {There may be duplicate keys.} NTO[i] contains the number of such keys which refer to nodes on pack i. NFROM[i] contains the number of such keys which refer to nodes on pack i and which reside in nodes also on pack i. We have NFROM[i] <= NTO[i] and Sum over all i of NTO[i] = total number of such keys.
When a copy of a node is brought into core, the exit keys in it should be subtracted from the above counts. {There is never more than one copy of a node in core.} When a copy of a node is removed from core (after being cleaned if necessary), the exit keys in it should be added to the above counts.
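The bookkeeping above might be sketched as follows; NPACKS, KEYS_PER_NODE, the node layout, and the two home-pack functions are all invented for this illustration.

    /* Sketch of the pager bookkeeping for NTO and NFROM. */
    #define NPACKS        64
    #define KEYS_PER_NODE 16

    struct key;
    struct node { struct key *slot[KEYS_PER_NODE]; };

    extern int is_exit_key(struct key *k);
    extern int home_pack(struct key *k);        /* pack of the node an
                                                   exit key refers to */
    extern int node_home_pack(struct node *n);  /* pack holding this node */

    int NTO[NPACKS], NFROM[NPACKS];

    static void count_exit_keys(struct node *n, int sign)
    {
        int from = node_home_pack(n);
        for (int s = 0; s < KEYS_PER_NODE; s++) {
            struct key *k = n->slot[s];
            if (!is_exit_key(k))
                continue;
            int to = home_pack(k);
            NTO[to] += sign;
            if (to == from)
                NFROM[to] += sign;
        }
    }

    /* Subtract on the way in, add on the way out, so NTO and NFROM
       always describe exactly the keys in nodes with no copy in core. */
    void node_brought_into_core(struct node *n) { count_exit_keys(n, -1); }
    void node_removed_from_core(struct node *n) { count_exit_keys(n, +1); }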
The total number of exit keys to nodes on a pack held in nodes on other packs can be calculated from NTO and NFROM with a scan over all nodes in core. [Loose End: Need a key that does this.]
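Continuing the same hypothetical sketch: NTO[i] - NFROM[i] covers the out-of-core nodes, and the scan adds back the contribution of in-core nodes, whose keys were subtracted from the counts when they were brought in.

    /* Exit keys to nodes on pack i held in nodes on other packs. */
    extern struct node *first_core_node(void);
    extern struct node *next_core_node(struct node *n);

    int cross_pack_exit_keys(int i)
    {
        int total = NTO[i] - NFROM[i];
        for (struct node *n = first_core_node(); n; n = next_core_node(n)) {
            if (node_home_pack(n) == i)
                continue;                  /* held on pack i: excluded */
            for (int s = 0; s < KEYS_PER_NODE; s++) {
                struct key *k = n->slot[s];
                if (is_exit_key(k) && home_pack(k) == i)
                    total++;
            }
        }
        return total;
    }

Before a voluntary dismount of pack i, the supply of domains held by the External Returner would be increased by cross_pack_exit_keys(i), the bound established earlier.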
This design has the property that it does not introduce any overhead to operations on nodes in core.
See (p2,migrate).