Slot 13 of the domain root is documented as holding either DK(0) or DK(1), which seems a waste. In reality it often holds a hook key, which has no logical reification. The hook key has its own type code, corresponding to nothing known outside the kernel. When a running domain X tries to invoke a domain Z with a start key and Z is not available, a prepared hook key is generated in X designating the domain root of Z. We say that X is stalled. Z serves as a queue head for the domains stalled on Z, and the backchain through the hooks locates the members of that queue.
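Here is a minimal sketch, in C, of how such a hook key and its backchain might be laid out. Every name in it (struct key, struct node, KT_HOOK, push_hook, stall_on) is invented for illustration; the real kernel's layout is not given in this text.

    #define NSLOTS  16
    #define KT_HOOK 0x7f                  /* assumed: a type code used only inside the kernel */

    struct node;

    struct key {
        unsigned char type;
        struct node  *designee;           /* prepared key: the node it designates */
        struct key   *back, *fwd;         /* links in that node's backchain */
    };

    struct node {                         /* domain root; also usable as a bare queue head */
        struct key  slot[NSLOTS];
        struct key *backchain;            /* stalled hooks are kept at this end of the chain */
    };

    /* Link a prepared hook onto the front of a queue head's backchain. */
    static void push_hook(struct key *h, struct node *head)
    {
        h->fwd  = head->backchain;
        h->back = 0;
        if (h->fwd)
            h->fwd->back = h;
        head->backchain = h;
    }

    /* Domain X invokes a start key to domain Z, but Z is not available:
       build a prepared hook key in slot 13 of X's root, designating Z's
       root, and queue X on Z.  X is now stalled. */
    static void stall_on(struct node *x_root, struct node *z_root)
    {
        struct key *h = &x_root->slot[13];

        h->type     = KT_HOOK;
        h->designee = z_root;
        push_hook(h, z_root);
    }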
When a domain becomes available, it is strategic to find domains containing a process that wants to jump to the newly available domain. The stalled domains are conveniently chained together on the stall queue of hook keys. The stall queue is part of the backchain; in fact the stalled hooks are all at one end of the backchain, so it is quick to discover when there are no more stalled domains.
Only one stalled domain is moved to the CPU queue, since moving more would normally cause all but one of them to go straight back onto the stall queue. On the other hand, it is just possible that the domain we put on the CPU queue has changed its mind. There are many ways this may happen; one possibility is that its meter has become empty. This is awkward, but we handle the case by putting the newly available domain on the worry queue. This is done by creating a hook key in the available domain that designates a block of memory that is polymorphic with a node header. Normally this hook lasts only as long as it takes the recently stalled domain to reinvoke the newly available domain. When any domain begins to run, its hook key is obliterated and the domain is thus removed from whatever queue it is on, such as the worry queue. A few times per second the worrier considers each domain on the worry queue and puts the first member of its stall queue on the CPU queue. If this empties the stall queue, the considered domain is removed from the worry queue. The worry queue is usually empty.
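Continuing the sketch above, the worrier's periodic pass might look something like this. worry_head, cpu_head, owner_of and worrier are again invented names, and the real kernel surely differs in detail.

    #include <stddef.h>

    static struct node worry_head;        /* newly available domains we worry about */
    static struct node cpu_head;          /* domains ready to run */

    /* A hook key lives in slot 13 of its domain root, so the root can be
       recovered from a pointer to the hook. */
    static struct node *owner_of(struct key *hook)
    {
        return (struct node *)((char *)hook - offsetof(struct node, slot[13]));
    }

    /* Called a few times per second. */
    static void worrier(void)
    {
        struct key **pp = &worry_head.backchain;

        while (*pp) {
            struct key  *w     = *pp;                /* hook of a worried-about domain */
            struct node *avail = owner_of(w);
            struct key  *s     = avail->backchain;   /* its first stalled member, if any */

            if (s) {                                 /* wake exactly one stalled domain */
                avail->backchain = s->fwd;
                if (s->fwd)
                    s->fwd->back = 0;
                s->designee = &cpu_head;
                push_hook(s, &cpu_head);
            }

            if (avail->backchain == 0) {             /* stall queue empty: stop worrying */
                *pp = w->fwd;                        /* obliterate w; avail leaves the worry queue */
                if (w->fwd)
                    w->fwd->back = w->back;
                w->type = 0;
            } else {
                pp = &w->fwd;
            }
        }
    }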
You may have guessed that the CPU queue is itself the backchain of the stall-queue-head, which is again polymorphic with a node header.
Indeed there are many queue heads in the kernel, and we list some below. With the exception of the worry queue, every member of any of these queues has a process, and the checkpoint logic makes its list of processes by traversing these queues. The order of queue members is not preserved in a checkpoint. The queue heads are all in one array, which makes it easy for the checkpoint logic to find them.
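If the queue heads all sit in one array, the checkpoint traversal is a short loop. The sketch below stays with the same invented names, plus NQUEUE_HEADS and record_process, which are also made up.

    #define NQUEUE_HEADS 64                          /* assumed size of the array of queue heads */

    static struct node kernel_queue_heads[NQUEUE_HEADS];

    extern void record_process(struct node *domain_root);

    /* Enumerate the processes by walking every queue head's backchain.
       (The worry queue would be skipped here, since its members need not
       have a process.)  Queue order is not preserved in the checkpoint. */
    static void checkpoint_list_processes(void)
    {
        for (int i = 0; i < NQUEUE_HEADS; i++)
            for (struct key *h = kernel_queue_heads[i].backchain; h; h = h->fwd)
                record_process(owner_of(h));
    }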
When a domain cannot proceed for lack of a page or node, the I/O is initiated, if necessary, and the domain is placed on one of 2^n I/O queues. These queue heads form an array of tunable size. A hash of the CDA (coded disk address) is used as an index into this array. The reason the I/O may not need initiating is that it may already have been initiated, perhaps by some other domain; we really don't want two copies of the same page in RAM. When a page-in operation finishes, all the members of the corresponding I/O queue are moved to the CPU queue.
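A sketch of the hashed I/O queues, under the same assumptions as before; cda_t, IO_QUEUE_LG2, io_queue_index, wait_for_page and pagein_done are illustrative names, and the real hash is not given here.

    typedef unsigned long cda_t;                    /* coded disk address */

    #define IO_QUEUE_LG2 6                          /* tunable: 2^6 = 64 queues */
    #define IO_QUEUES    (1u << IO_QUEUE_LG2)

    static struct node io_queue_heads[IO_QUEUES];

    static unsigned io_queue_index(cda_t cda)
    {
        return (unsigned)(cda ^ (cda >> 16)) & (IO_QUEUES - 1);  /* any reasonable hash will do */
    }

    /* Stall a domain on the I/O queue for this CDA while the page-in is pending. */
    static void wait_for_page(struct node *domain_root, cda_t cda)
    {
        stall_on(domain_root, &io_queue_heads[io_queue_index(cda)]);
    }

    /* When the page-in for this CDA completes, move every member of that
       queue to the CPU queue. */
    static void pagein_done(cda_t cda)
    {
        struct node *head = &io_queue_heads[io_queue_index(cda)];

        while (head->backchain) {
            struct key *h = head->backchain;
            head->backchain = h->fwd;
            if (h->fwd)
                h->fwd->back = 0;
            h->designee = &cpu_head;
            push_hook(h, &cpu_head);
        }
    }

Since several CDAs can hash to the same queue, such a wakeup presumably rouses some domains that were waiting for a different page; they merely fault again and go back on a queue.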
Each device accessed via a key has its own device queue. It is probably really bad form for that queue to have two members, but it is not the kernel’s fault. A queue member is waiting for the device to finish something.
Each KWait kernel object has its own queue of waiters.
There is a queue of domains that have asked to await the next checkpoint.
There are a number of statically allocated resources in the kernel and a queue for each kind. This scheme is used only for kernel services that are prompt.
I am sure that I have forgotten a dozen others.