There are several important quantitative figures of merit for a disk system: reliability, bandwidth, capacity, and accesses per unit time. RAID may be configured to provide some combination of improvements in reliability and bandwidth at the expense of some combination of capacity and accesses per unit time.
These days it is hard to find applications that cannot keep all of their data in RAM, but there are a few. This note explains some of the choices we made when we wrote the code in 1976.
The Keykos disk code could, and almost always did, store every page on two disks, so that the system could survive the total loss of a disk without stopping. It could also recover, again without stopping, when the failed disk was repaired or replaced. This cost us a factor of two in write performance but gained a smaller fraction in read performance. Since our workload was heavily biased toward reads, we probably gained overall. (Writes were also low priority, a fact we exploited as well.)
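To make the trade concrete, here is a minimal sketch of duplexed pages. This is purely illustrative Python, not the Keykos code; Disk, MirroredStore, and queue_depth are invented names. Every write pays for two disks, while a read can be served by whichever copy is less busy, which is where the read-side gain comes from.

```python
class Disk:
    """Toy in-memory stand-in for one spindle (hypothetical)."""
    def __init__(self):
        self.pages = {}
        self.alive = True
        self.queue_depth = 0   # pending I/Os; a real system would track this

    def write(self, addr, page):
        self.pages[addr] = page

    def read(self, addr):
        return self.pages[addr]


class MirroredStore:
    """Every page lives on two disks, surviving total loss of either."""
    def __init__(self, a, b):
        self.disks = [a, b]

    def write_page(self, addr, page):
        # Both copies are written: the 2x write cost described above.
        # A real system would queue updates missed by a dead disk and
        # replay them when it is repaired or replaced, without stopping.
        for d in self.disks:
            if d.alive:
                d.write(addr, page)

    def read_page(self, addr):
        # Either copy will do; pick the less busy arm (the read gain).
        live = [d for d in self.disks if d.alive]
        return min(live, key=lambda d: d.queue_depth).read(addr)
```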
One configuration that we considered and rejected was a Hamming-like code, in which one failed disk could be ignored just as Hamming codes can protect against a dead memory chip. This would require choosing a "bits per word" parameter, where the "bits" were page fragments. A page write would first divide the page into pagelets, compute the redundant Hamming pagelets, and write all of these nearly simultaneously. This cuts the accesses per unit time by the parameter. It also requires formatting the disks so that very much more of the space goes to "record gaps". The scheme would thus have hurt us greatly on two dimensions. For the 1976 numbers the optimal Hamming parameter was 2, which is equivalent to what we did.
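For contrast, a sketch of the rejected pagelet idea. A true Hamming code is more elaborate, but because the identity of the dead disk is known (an erasure rather than an unlocated error), a single XOR parity pagelet already suffices to reconstruct the missing fragment, so this simplified variant conveys the shape of the scheme; split_page, parity, and reconstruct are invented names, and the assumption that simple parity stands in for the Hamming check is mine.

```python
import os

def split_page(page: bytes, k: int) -> list[bytes]:
    """Divide a page into k equal pagelets, one per data disk."""
    frag = len(page) // k
    return [page[i * frag:(i + 1) * frag] for i in range(k)]

def parity(pagelets: list[bytes]) -> bytes:
    """The redundant pagelet: bytewise XOR of its inputs."""
    out = bytearray(len(pagelets[0]))
    for p in pagelets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def reconstruct(surviving: list[bytes], check: bytes) -> bytes:
    """Recover the one pagelet lost with the known-dead disk."""
    return parity(surviving + [check])

# A page write touches all k+1 disks nearly simultaneously, which is
# the accesses-per-unit-time penalty by the parameter noted above.
page = os.urandom(4096)
data = split_page(page, k=4)
check = parity(data)
recovered = reconstruct(data[:2] + data[3:], check)  # disk 2 died
assert recovered == data[2]
```

On this simplified reading, a parameter of 2 means one data pagelet plus one check pagelet, and the check pagelet is then just a second copy of the data, which is consistent with the note's observation that the optimal 1976 parameter was equivalent to the duplexing we actually did.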