It is often possible to allow distinct representations for the same abstract value. Such redundant formats typically require more complex algorithms, but may be substantially faster. Often the extra space is minimal. The ideas I mention here never exceed a factor of two in space cost. The same techniques apply to hardware designs where the advantage may be reduced latency rather than reduced cost.
The balanced binary tree is the most famous and prestigious redundant format that comes to mind. See this neat demo. Adaptive formats are in this category. Caching schemes may generally be viewed as adding complexity to achieve performance. They add state that is hopefully hidden within some abstraction.
The main point of this note is redundant formats for numbers. I have heard rumors that the Illiac II used many of these schemes.
Modern computers, like the early big computers, multiply numbers at about 12 multiplier bits per cycle, or faster. The significant portion of the magic required here is to add many numbers per cycle without waiting for intermediate carry assimilations. The routine trick takes three binary numbers and returns two with the same sum, and it requires very little depth of logic. (One of those two must be shifted left by one.) This is called carry save add. It is more economical of hardware and logic levels than the extreme measures required to reduce the time taken for carry assimilation in isolated additions.
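Here is a minimal sketch of the trick in Python, my own illustration rather than any particular machine's circuit, with machine words modeled as unbounded integers:

    def carry_save_add(a, b, c):
        # Reduce three addends to two words with the same total.
        # Each output bit depends on only three input bits, so the
        # logic depth is constant: no carry ripples across the word.
        partial = a ^ b ^ c                          # sum bit per position
        carry = ((a & b) | (a & c) | (b & c)) << 1   # majority, shifted left
        return partial, carry

    a, b, c = 0b1011, 0b1101, 0b0111
    s, t = carry_save_add(a, b, c)
    assert s + t == a + b + c   # one ordinary carry-assimilating add remains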
I use similar tricks in my modular multiply routine. I store 30 bits per 32 bit word and thus avoid the several extra instructions per multiply that would otherwise be required to attend individually to the carries generated by the 60 bit products.
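I don't reproduce that routine here, but this Python sketch shows the idea, with numbers held as little-endian arrays of 30 bit limbs (the function name and layout are my own choices):

    def multiply(x, y):
        # x, y: little-endian lists of 30-bit limbs.
        # Each limb product fits in 60 bits, so a 64-bit column
        # accumulator (unbounded here in Python) can absorb up to 16
        # products before any carry would need attention.
        MASK = (1 << 30) - 1
        acc = [0] * (len(x) + len(y))
        for i, xi in enumerate(x):
            for j, yj in enumerate(y):
                acc[i + j] += xi * yj    # no per-product carry handling
        carry = 0
        for k in range(len(acc)):        # one final carry assimilation
            carry += acc[k]
            acc[k] = carry & MASK
            carry >>= 30
        return acc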
I believe that some or most modern processors exploit the fact that they have separate register banks for floating point numbers to adopt redundant representations in those registers. The real registers hold more bits than the form stored in memory, which is usually the one called out in the IEEE standard. This strategy avoids the extra time needed to allow for denormal inputs to the arithmetic operations and other exceptions to the normal flow.
With redundant representation we are not blocked. We can home in on the value of x² without ever committing ourselves to whether x² is more than two. With redundant representation we are allowed to express a value, still to be determined, as a sum of signed digits to any of which we can commit in bounded time. In this scheme a number is represented as ∑ t_j 2^−j, with an effective procedure for computing any t_j, which may take on any value in {−1, 0, 1}. Two procedures providing different such values may represent the same real. Indeed this is even the case in ordinary binary representation, for 1.000... and 0.111... both denote 1. Given such procedures for two numbers it is possible to provide a procedure for their sum. Knowing the first n+1 t's for each of two numbers lets one calculate the first n t's of their sum. This is the carry save add trick. It avoids infinite latency by the same trick by which hardware carry save adders avoid large but finite latency!
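Here is a sketch of such a sum procedure in Python, with details of my own choosing: a number in [−1, 1] is a function from digit index 1, 2, 3, ... to a digit in {−1, 0, 1}, and since the sum of two such numbers can reach 2, the routine delivers the digits of the average (a+b)/2, i.e. the sum shifted right one place. The one-digit lookahead in the transfer rule is what realizes the n+1-digits-in, n-digits-out property.

    def average(a, b):
        # Digits of (a+b)/2, where a and b map index j = 1, 2, ... to a
        # digit in {-1, 0, 1}.  Digit n of the result depends only on
        # digits 1..n+1 of the inputs: bounded latency, no carry ripple.
        def e(k):
            # (a+b)/2 = sum over k >= 2 of e(k)*2^-k, with e(k) in {-2..2}
            return a(k - 1) + b(k - 1) if k >= 2 else 0
        def t(k):
            # Transfer in {-1, 0, 1} out of position k into position k-1,
            # chosen with one digit of lookahead so that the leftover
            # e(k) - 2*t(k) plus the incoming t(k+1) stays in {-1, 0, 1}.
            if e(k) >= 2 or (e(k) == 1 and e(k + 1) >= 0):
                return 1
            if e(k) <= -2 or (e(k) == -1 and e(k + 1) <= 0):
                return -1
            return 0
        def digit(n):
            return e(n) - 2 * t(n) + t(n + 1)
        return digit

    # 1/2 is 0.1; -1/4 is 0.0(-1); their average should be 1/8
    half = lambda j: 1 if j == 1 else 0
    minus_quarter = lambda j: -1 if j == 2 else 0
    d = average(half, minus_quarter)
    assert abs(sum(d(j) * 2.0**-j for j in range(1, 20)) - 0.125) < 2**-18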
Dedekind defined a real as the set of rationals less than that real. This set is unique (not redundant), but deciding whether a given rational is in the set for the sum of two such numbers is not effective. If such sets are given by enumerations, however, an enumeration for the sum is possible.
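A sketch of that last remark in Python, using generators as the enumerations (the framing is mine):

    def below_sum(xs, ys):
        # xs, ys: generators enumerating all rationals below x and below y.
        # Every rational r < x + y splits as p + q with p < x and q < y,
        # so dovetailing over all pairwise sums enumerates the cut of
        # x + y, even though membership in that cut is not decidable.
        seen_x, seen_y = [], []
        while True:
            seen_x.append(next(xs))
            seen_y.append(next(ys))
            n = len(seen_x) - 1
            for i in range(n + 1):           # emit only the new pairs
                yield seen_x[i] + seen_y[n]
                if i < n:
                    yield seen_x[n] + seen_y[i]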