I turn on my mental tape recorder here in the form of this page.
I wonder about the quality necessary in a random bit source for various crypto purposes.
I share a widespread notion of a perfectly random bit stream.
What is in question is the suitability of various ways to provide some approximation to such a stream.
For purposes of a one-time pad, suppose that we have a bit stream that actually has 51% 1’s and we treat it as if it were perfect.
Suppose further that the cryptanalyst knows this statistical property.
That knowledge would very slightly improve his algorithms for finding ‘depths’, places where the same key stream has been used twice, in which case the cipher is no longer a “one-time pad”.
If the format of the plain text is known and the analyst is interested in some particular bit, then the cipher-text shifts his guess for that bit from 50-50 to 51-49: an advantage of 0.02 in probability, though only about 0.0003 bits of Shannon information.
If this bit, or its complement, is repeated many times in the text, or in the text of other transmissions, then the problem becomes very real.
If the cryptanalyst suspects that the plaintext is one of a small number of known plain-texts, then he can probably guess which.
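The 51% case can be checked numerically. A minimal sketch, assuming the ciphertext bit is the plaintext bit XORed with a key bit having P(1) = 0.51: the information leaked per ciphertext bit is one minus the binary entropy of 0.51.

```python
from math import log2

def h2(p):
    """Binary entropy of a coin with P(1) = p, in bits."""
    return -(p * log2(p) + (1 - p) * log2(1 - p))

# Ciphertext bit = plaintext bit XOR key bit, with P(key bit = 1) = 0.51.
# Seeing the ciphertext shifts the analyst's guess about the plaintext
# bit from 50-50 to 51-49; the Shannon information gained is 1 - H(0.51).
p = 0.51
leak = 1 - h2(p)
print(f"information leaked per ciphertext bit: {leak:.6f} bits")
```

Tiny per bit, but as noted above, a bit repeated across many transmissions multiplies the leak.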

These considerations lead one to spend some effort guarding against poor random number generators.

### Other Randomness Failures

If a random source of 64-bit numbers excluded ¾ of the possible values, and this exclusion set were itself sensibly random, it would be expensive to discover this by examining the numbers.
After reading n such values, sorting them to discover repeats, and counting the repeats, the count would probably be higher than the n^{2}2^{−65} collisions expected of an unbiased source; the ¾ exclusion quadruples the expectation to n^{2}2^{−63}.
If one collects 2^{32} samples and sorts them, about ½ a collision would be expected without the ¾ exclusion bias, and about 2 with it.
This would not convict but would suggest further testing.
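The collision arithmetic above can be sketched in a few lines. Sorting 2^{32} real samples is impractical in a demo, so the Monte Carlo check here uses a scaled-down space (2^{8} samples from 2^{16} values, and from the quarter-sized 2^{14} space) chosen to keep the same ratios; those scaled parameters are illustrative assumptions, not from the original.

```python
import random

def expected_collisions(n, space):
    """Birthday approximation: expected colliding pairs among n
    uniform draws from `space` possible values."""
    return n * (n - 1) / (2 * space)

# 2^32 samples of 64-bit values: about half a collision expected;
# exclude 3/4 of the space (2^62 values remain) and it quadruples.
n = 2 ** 32
half = expected_collisions(n, 2 ** 64)   # ~0.5
two = expected_collisions(n, 2 ** 62)    # ~2.0

def count_repeats(samples):
    """Sort and count adjacent duplicates, as the text describes."""
    s = sorted(samples)
    return sum(1 for a, b in zip(s, s[1:]) if a == b)

# Scaled-down check: same n^2/space ratios as the 64-bit case above.
random.seed(1)
trials = 5000
unbiased = sum(count_repeats([random.randrange(2 ** 16) for _ in range(2 ** 8)])
               for _ in range(trials)) / trials
biased = sum(count_repeats([random.randrange(2 ** 14) for _ in range(2 ** 8)])
             for _ in range(trials)) / trials
print(unbiased, biased)  # roughly 0.5 and 2.0
```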
A ¾ exclusion set known to the cryptanalyst gains him only a factor of 4 in a brute force attack, since the key space shrinks from 2^{64} to 2^{62}.
There is no end to other sorts of biases.