Introduction
Gate Jump Times
Nodes are kept on disk in page-sized blocks called "node pots". When a node is required, the kernel determines whether the containing node pot is already in core. If so, a few hundred instructions move the node to where it can play its role as a node.
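A sketch of that check, in C for readability (the names pot_in_core, copy_node_from_pot, and start_pot_read are invented stand-ins for kernel internals, not actual kernel routines):

    /* Hypothetical outline of obtaining a node given its disk address.
     * If the page-sized node pot holding it is already in core, a short in-core
     * copy suffices; otherwise the pot must first be read from disk. */
    struct node;
    struct nodepot;

    struct nodepot *pot_in_core(unsigned long cda);
    struct node *copy_node_from_pot(struct nodepot *pot, unsigned long cda);
    void start_pot_read(unsigned long cda);

    struct node *get_node(unsigned long cda)
    {
        struct nodepot *pot = pot_in_core(cda);   /* is the containing pot in core? */
        if (pot != NULL)
            return copy_node_from_pot(pot, cda);  /* "a few hundred instructions"   */
        start_pot_read(cda);                      /* otherwise fetch the pot        */
        return NULL;                              /* caller waits for the I/O       */
    }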
Segment tables and page tables are reclaimed after long disuse.
As a direct consequence of the 370 architecture and the fact that we use 64K segments, the length of the segment table required by a domain is determined by the highest address used by the domain. Each 2**16-byte segment needs one 4-byte segment table entry, so the size in bytes of the required segment table is (largest virtual address)/2**14.
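As a worked example of the formula (a sketch only, not kernel code):

    /* Segment table size for a given highest virtual address, with 64K (2**16)
     * segments and 4-byte segment table entries. */
    unsigned long seg_table_bytes(unsigned long largest_vaddr)
    {
        unsigned long segments = (largest_vaddr >> 16) + 1;   /* one entry per 64K segment     */
        return segments * 4;                                  /* roughly largest_vaddr / 2**14 */
    }
    /* e.g. a highest address just under 1 megabyte covers 16 segments and
     * needs a 64-byte segment table. */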
The current implementation of the kernel fails to distinguish the modification of one slot of a segment node from the modification of the entire node. We may change this to limit the damage to the translation tables to the memory described by the altered slot.
Many domains hold a node key to the top node of their memory tree. It is convenient to use slots of this node as auxiliary key storage. Each such use severs the domain from its segment table. The page tables will most probably remain intact. The segment table must then be rebuilt, one entry per fault.
This effect can be severe if a domain accesses a page alternately via two addresses, one of which provides read-only access and the other read-write access. It is more likely to strike when the page is shared by two domains with different authority to the page.
There are kernel tables with entries for each currently mapped obscurity. When these tables become full, old obscure segment and page table entries are sacrificed. There are counters to monitor such sacrifices, and the size of these tables can be changed by reassembling the kernel.
the domain is long unused,
another domain under a common superior meter that is nearly empty requires the cached resource,
or some superior meter is examined or modified with a node key, or is used as something other than a meter.
The pass-through of interrupts destined for .debugger adds 1 gate jump, compared to the case where there is no .ossim domain.
The creation of a .ossim domain keeper requires the creation of 2 supernodes (how much space each?) plus 4 nodes and 1 page for .ossim, and 1 node for the meter of .osprogram.
Explain the ICM trick here!
If the storage is unlikely to be required for a while and the values in the storage are no longer required, then a release-page operation will immediately free real storage and also save the cost of writing the pages and the allocation of a disk frame.
The first advantage may be had if we modify the kernel to notice when a page fault is caused by an MVCL that is destined to zero the page, mark the page as virtual zero, and resume the MVCL at the next page. The programmer would zero his large data areas just before using them and obtain the desired effect. This requires no changes to the external kernel specs.
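A rough sketch of such a check, in C for readability (the kernel itself is assembler; only the MVCL encoding below is real, the rest is illustrative):

    /* Recognize a page fault caused by an MVCL that is zeroing storage.
     * MVCL is opcode 0x0E; register R2+1 of the source pair holds the pad byte
     * (bits 0-7) and the source length (bits 8-31). A zero source length with a
     * pad byte of zero means "fill the destination with zeros". */
    #define MVCL_OPCODE 0x0E

    int fault_is_zeroing_mvcl(const unsigned char *inst, const unsigned long *gpr)
    {
        if (inst[0] != MVCL_OPCODE)
            return 0;
        int r2 = inst[1] & 0x0F;                            /* source register pair */
        unsigned long srclen = gpr[r2 + 1] & 0xFFFFFF;      /* bits 8-31            */
        unsigned long pad    = (gpr[r2 + 1] >> 24) & 0xFF;  /* bits 0-7             */
        return srclen == 0 && pad == 0;
    }
    /* On such a fault the kernel could mark the faulted page as virtual zero and
     * resume the MVCL at the next page instead of reading old contents from disk. */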
We will discuss the 3380 here to be definite, although we now support only the 3330.
To achieve many disk accesses per second, many independent disk head access assemblies must be active at once.
To move a page requires the attention of an assembly for 25 ms and the attention of a channel (and controller) for about 2 ms.
If we assume 12 assemblies {6 3380's} per channel and 16 channels, we come to an I/O performance figure of 8000 page reads per second. This is merely the hardware limit. Such a configuration holds (2.4*10**11 = .94*2**38 = 3.75*16**9) bytes, which is the capacity of a large 3850 {mass store}.
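The arithmetic behind those figures, as a sketch (the 25 ms and 2 ms figures are from the text; the roughly 2.5*10**9 bytes per 3380 box is my assumption for illustration):

    #include <stdio.h>

    int main(void)
    {
        int channels   = 16;
        int assemblies = 12 * channels;                     /* 12 assemblies {6 3380's} per channel */

        double by_assembly = assemblies * (1000.0 / 25.0);  /* 25 ms per move: ~7680 pages/s */
        double by_channel  = channels   * (1000.0 / 2.0);   /* 2 ms per move:   8000 pages/s */
        double bytes       = (assemblies / 2) * 2.5e9;      /* assumed ~2.5e9 bytes per box: ~2.4e11 */

        printf("assembly-limited rate: %.0f pages/s\n", by_assembly);
        printf("channel-limited rate:  %.0f pages/s\n", by_channel);
        printf("total storage:         %.2e bytes\n", bytes);
        return 0;
    }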
Assume that we have three domains per head assembly. Each of these domains will execute the code that accesses the faulted page. This is 576 domains.
What is the nature of the memory tree of these domains? We assume first that the data is collected into a single segment.
We might allocate a domain to a subsection of the segment. The size of such a subsection would still be about 2**29 bytes (2**38 bytes divided among 576 domains), much too large to map without windows or XA.
We might give the domain access to the entire segment and devote such a domain to a terminal session.
Our current kernel code places a limitation on itemspace size: the number of core frames plus 19 times the number of node frames must be less than 2**16. We can thus reasonably expect about 2000 nodes in itemspace. Many more are quickly available in node pots.
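To make the constraint concrete (a sketch; the 2000-node-frame figure is the one quoted above, and 4096-byte core frames are assumed):

    #include <stdio.h>

    int main(void)
    {
        long limit       = 1L << 16;                   /* 2**16 item numbers       */
        long node_frames = 2000;                       /* figure from the text     */
        long core_frames = limit - 19 * node_frames;   /* 27536 core frames remain */

        printf("core frames: %ld (%ld bytes of core)\n",
               core_frames, core_frames * 4096L);      /* about 112 million bytes  */
        return 0;
    }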
These observations were typically made by a PL/I program calling the routine BINTIME, which returns the TOD clock value. These values are subtracted to measure elapsed times. To compensate for CPU interrupts, the minimum of several such times is normally reported. When times exceed 200 ms there is the likelihood that time slicing has taken its toll.
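The measurement discipline, sketched in C rather than PL/I (clock_gettime stands in for BINTIME; the workload is a placeholder):

    /* Minimum-of-N timing: repeat the measurement and report the smallest
     * elapsed time to discount interrupts and time slicing. */
    #include <time.h>

    static double elapsed_ms(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1000.0 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    double time_min_ms(void (*work)(void), int trials)
    {
        double best = 1e30;
        for (int i = 0; i < trials; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            work();                               /* the operation being timed */
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ms = elapsed_ms(t0, t1);
            if (ms < best)
                best = ms;                        /* keep the minimum observation */
        }
        return best;                              /* above ~200 ms, suspect time slicing */
    }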