Many of us remember learning about numbers in school or before. We learned names, “one”, “two”, ... “nine”, “ten”, and another sort of name, 1, 2, ... 9, 10 (called ‘numerals’ here). The rules for making new numerals were simpler than the rules for making new names. Maybe it seemed that English names were necessary for a number to be real. It always seemed easy to make another numeral, bigger and different from all those that came before. If 999999999...999 was as big as you could fit on your piece of paper, you could get a bigger piece of paper, or certainly someone could. It thus seemed inappropriate to proclaim the end of numbers for lack of paper.

At this point we had bought into the concept of infinity. The Greeks bought it too and wrote about it. It is not as if we had a proof that there was no end to the numerals, and thus to the numbers; rather, we could think of no impediment to going on, especially because such big numbers could be written in such a small space.

When it first occurred to someone (Frege?) to make axioms for numbers, he had no trouble. His induction axiom, ensuring enough numbers, did require quantifying over sets, which he took as a primitive concept.
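For concreteness, here is the induction axiom in its usual second-order form, quantifying over sets S of numbers (this is the standard textbook statement, not necessarily the original formulation):

  \[ \forall S\,\bigl[\, 0 \in S \;\wedge\; \forall n\,( n \in S \rightarrow n+1 \in S ) \;\rightarrow\; \forall n\,( n \in S ) \,\bigr] \]

The set variable S is what makes this second order; first-order arithmetic replaces it with an axiom schema, one instance for each formula.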

It seems clear to me that we accept such an axiom because of our feeling that we could always write down a numeral one bigger than any given numeral—we jump to a conclusion!

To jump ahead, we believe Gödel’s incompleteness theorem because we can ‘see how the self-referential proposition can be read as stating the impossibility of its own proof’. Gödel didn’t pull a fast one; he used familiar mathematical proof methods. We see a pattern which seems to lack a limit, and we accept it as unlimited.
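In the usual notation, the self-referential proposition G of a formal system with provability predicate Prov satisfies (a standard rendering, not a quotation from Gödel):

  \[ G \;\leftrightarrow\; \neg\,\mathrm{Prov}(\ulcorner G \urcorner) \]

Reading the right-hand side as “G is not provable” is the seeing referred to above: if the system is consistent, G is not provable in it, and so, read at face value, G is true.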

When Penrose says that a computer cannot see this, he presumes, I suppose, that the computer is limited to reasoning by some fixed set of logical principles, such as those of an axiom system. People don’t limit themselves that way, and pattern-recognizing computers need not either. People and such computers will both make mistakes.

