Time slicing

The first scheme for kernel scheduling was to do the following each 100 ms: if a domain is running, move it to the end of the ready queue and dispatch the domain at the head of the queue.
Sometimes this policy led to the following poor situation: one domain did heavy I/O on expensive channels but needed little CPU, while about ten compute-bound domains kept the ready queue full. The result was that the domain doing I/O had to wait about one second for each I/O as it progressed thru the ready queue, leaving the expensive I/O channels largely idle despite that domain's substantial I/O needs.
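
For concreteness, here is a minimal single-CPU sketch of that round-robin policy. The names (struct domain, timeslice_tick, and so on) are hypothetical and the code is only an illustration of the idea, not code from any real kernel.

```c
/* Minimal sketch of the 100 ms round-robin policy (hypothetical names). */
#include <stddef.h>

struct domain {
    struct domain *next;        /* link in the ready queue */
    /* saved registers, address space, etc. would go here */
};

static struct domain *ready_head, *ready_tail;  /* single FIFO ready queue */
static struct domain *running;                  /* domain now on the CPU */

static void enqueue(struct domain *d)
{
    d->next = NULL;
    if (ready_tail) ready_tail->next = d; else ready_head = d;
    ready_tail = d;
}

static struct domain *dequeue(void)
{
    struct domain *d = ready_head;
    if (d) {
        ready_head = d->next;
        if (ready_head == NULL) ready_tail = NULL;
    }
    return d;
}

/* Invoked by the clock interrupt every 100 ms. */
void timeslice_tick(void)
{
    if (running)
        enqueue(running);   /* the preempted domain goes behind everyone else */
    running = dequeue();    /* the head of the queue gets the next 100 ms */
}
```

A domain that blocks for I/O re-enters at the tail of the same queue when its I/O finishes, which is exactly why it waits behind all the compute-bound domains.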

I don’t know who first reported this situation; I think that the observation is many years old. Here is a simple solution:

Establish two ready queues in the kernel, X and Y. X has higher priority than Y so that members of Y are selected only when X is empty. When I/O finishes and that unblocks a domain, put that domain on the X queue. After 2 ms of execution, a domain from queue X is moved to the end of the Y queue. At that time, if some CPU is running a domain from the Y queue, usurp that CPU (putting its domain on the Y queue) so that the X queue will be served promptly.
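
The following is a minimal single-CPU sketch of the two-queue scheme. The identifiers (io_done, x_slice_expired, and so on) are hypothetical; a real multiprocessor kernel would also need the cross-CPU usurpation and locking implied above.

```c
/* Minimal sketch of the two-queue scheme (hypothetical names). */
#include <stdbool.h>
#include <stddef.h>

struct domain { struct domain *next; };

struct queue  { struct domain *head, *tail; };

static struct queue qx;            /* X: domains whose I/O just finished */
static struct queue qy;            /* Y: domains that used up their X slice */
static struct domain *running;     /* domain currently on the CPU */
static bool running_from_y;        /* was it dispatched from Y? */

static void push(struct queue *q, struct domain *d)
{
    d->next = NULL;
    if (q->tail) q->tail->next = d; else q->head = d;
    q->tail = d;
}

static struct domain *pop(struct queue *q)
{
    struct domain *d = q->head;
    if (d) { q->head = d->next; if (!q->head) q->tail = NULL; }
    return d;
}

/* Pick the next domain: X is served strictly before Y. */
static void dispatch(void)
{
    struct domain *d = pop(&qx);
    running_from_y = (d == NULL);
    if (d == NULL) d = pop(&qy);
    running = d;
}

/* I/O completion unblocks a domain: it joins X, and a CPU running a
 * Y domain is usurped so that X is served promptly. */
void io_done(struct domain *d)
{
    push(&qx, d);
    if (running == NULL) {
        dispatch();                 /* CPU was idle */
    } else if (running_from_y) {
        push(&qy, running);         /* usurp: the Y domain goes back on Y */
        dispatch();
    }
}

/* Called when a domain dispatched from X has run for 2 ms: it drops to Y. */
void x_slice_expired(void)
{
    push(&qy, running);
    dispatch();
}
```

The essential point is that a freshly unblocked domain buys prompt service with only a 2 ms slice; if it keeps computing it falls to Y and competes with everyone else.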

In the above scenario the domain needing I/O runs at least 10 times as fast while impacting the other domains very little. The expensive I/O system and its client are efficiently served. Total thruput is increased.

The other kernel scheduling mechanism is meters, which provide more complex scheduling policies. Other kernel hooks may be necessary for activities that require special allocation policies for the hardware that the kernel allocates.