I am reading “What Intelligence Tests Miss” by Keith Stanovich. I got up to page 24 before I found something important. There he draws attention to the familiar and remarkable activity of the brain in simulating possible worlds while not confusing the simulations with the perceived state of the real world. I had been thinking about this ability for several years, and what I find most striking is that it seems to have been remarked upon only recently. I think that the ‘discovery’ of this ability is a byproduct of trying to explain the workings of the brain from a functional perspective, and also from an evolutionary one. I suppose that thinking about possible worlds previously seemed so natural as to escape notice. I began to think about it only in conjunction with what it takes to create software that does the same. As Stanovich notes, children become adept at avoiding this confusion at an early age. It is, I am sure, integral to play, which is in turn integral to their assimilating information about the world. I wonder whether there are historical precursors to this observation.
On page 30 Stanovich comes perilously close to positing a Cartesian dualist framework in his distinction between the ‘algorithmic’ and ‘reflective’ minds. It seems to me that what he calls reflective is also algorithmic; it is simply a case where we want to understand the algorithm. Stanovich speaks as if algorithms cannot have goals. All chess programs have goals, and a good many other programs do as well. Certainly you can treat a chess program as merely obeying the CPU semantics, but you can equally treat ‘reflective thinking’ as mere neural activity. Perhaps this is merely a harmless level confusion.
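The point that algorithms can have goals is easy to make concrete. Here is a minimal sketch of my own (the state space and names are invented for illustration, not taken from Stanovich): a breadth-first search in which the goal is an explicit, first-class part of the algorithm's description, consulted as data on every step.

```python
from collections import deque

def search(start, goal, neighbors):
    """Breadth-first search from start until the goal predicate holds.

    Pure mechanism, yet 'goal' is an explicit parameter -- the algorithm
    is naturally described as pursuing it, just as a chess program is
    naturally described as trying to win.
    """
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if goal(path[-1]):  # the goal is data the algorithm consults
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy state space: from 0, each move adds 1 or 2; the goal is to reach 5.
path = search(0, lambda s: s == 5, lambda s: [s + 1, s + 2])
```

One can of course re-describe the search as mere pointer manipulation, which is exactly the level confusion mentioned above: the goal-talk and the mechanism-talk are both accurate, at different levels.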
The last paragraph to begin on page 31 is quite interesting. There he enumerates several brain activities that we can think about but that form an unconscious component of conscious thought processes. They are all shades of salience. By this I mean that these activities are to some degree specially wired in and necessary for, but not part of, ‘logical thought’. We may be unable to learn them. I suspect that AI will have to make the same compromises, and that this was not foreseen in the early hard-AI programs.
The distinction between the reflective mind and the ‘algorithmic mind’ is very nearly the old distinction between ‘wisdom’ and knowledge. I object to the terminology, but not to the concept.
Popper’s and Bartley’s PCR (pancritical rationalism) bears on this. A collection of beliefs cannot rest on logic alone. Whether the methodology of adjusting those beliefs can be entirely logical is sometimes debated.
It is very difficult to find useful meaning in the following quotation from page 36:
For example, researchers have studied situations where people display a particular type of irrational judgement—they are overly influenced by vivid but unrepresentative personal and testimonial evidence and are under-influenced by more representative and diagnostic statistical evidence. We have studied a variety of such situations in my own laboratory and consistently found that dispositions toward actively open-minded thinking are consistently associated with reliance on the statistical evidence rather than the testimonial evidence. Furthermore, this association remains even after intelligence has been statistically controlled for. Similar results have been obtained for a variety of other rational thinking tendencies that we have studied.

First I must agree that the subject matter is important, if obscure. I seize up on undefined terms such as “open-minded thinking”, which seem to be at the crux of the statement. If the tested individual grew up in an environment where statistics were regularly falsified, and there are such environments, then personal testimony may be the better source. A good friend once remarked to me, “A small unbiased sample is better than a large biased sample.” To understand these results I need to know how credibly the statistics were presented. I would be pleased to see how these issues could be dealt with in the lab. I am convinced, however, that these are indeed important aspects of ‘rationality’.
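My friend's remark is easy to demonstrate. The following toy simulation is my own construction, not anything from Stanovich: the “biased” sampler stands in, hypothetically, for an environment of falsified statistics by systematically discarding low values, while the unbiased sampler draws only a handful of honest observations.

```python
import random

random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

# Population with true mean 0.
population = [random.gauss(0, 1) for _ in range(100_000)]

# A small unbiased sample versus a large sample with a selection effect:
# the biased source only reports values above -0.5.
small_unbiased = random.sample(population, 30)
large_biased = [x for x in population if x > -0.5][:10_000]

# The small unbiased estimate typically lands near the true mean of 0;
# the large biased one is pulled well above it, no matter its size.
err_small = abs(mean(small_unbiased))
err_large = abs(mean(large_biased))
```

More data does not cure the bias here; it only makes the wrong answer more precise, which is precisely why credible presentation of the statistics matters.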
The world model is an important notion. In more detail it is not a whole model but a world representation: a short list of propositions about the world, which may contradict the organism’s belief system. This is a familiar computer science notion, and I assume the brain does something similar, for I can imagine no other efficient mechanism. It may or may not be the phylogeny of propositions in the brain. Some of these lists serve as lingering hypotheses. Others serve as brief stages in reasoning. They are the stuff of PCR. They serve to depict and instantiate goals.
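The idea can be sketched in a few lines. This is my own illustration with invented propositions, not a claim about neural implementation: a possible world is a short proposition list, held apart from the standing beliefs, free to contradict them, and reasoned over without ever being confused with them.

```python
# Standing beliefs of the agent: the 'real world' model.
beliefs = {"door_locked": True, "key_in_pocket": True}

def contradicts(hypothesis, beliefs):
    """A proposition list contradicts the beliefs if any shared
    proposition is assigned a different truth value."""
    return any(k in beliefs and beliefs[k] != v for k, v in hypothesis.items())

def simulate(hypothesis, beliefs):
    """Overlay the hypothesis on the beliefs to get a possible world,
    without mutating the beliefs -- the simulation never leaks back
    into the perceived state of the real world."""
    return {**beliefs, **hypothesis}

# A lingering hypothesis the agent does not believe.
hypothetical = {"door_locked": False, "burglar_inside": True}

possible_world = simulate(hypothetical, beliefs)
```

The short lists make the mechanism cheap: only the propositions that differ need be carried, and discarding a hypothesis costs nothing, which is what brief stages in reasoning require.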
On the conclusion of the chapter “Cutting intelligence down to size” I must note that I would have given “intelligence” a broader scope than “rationality”. I respect Stanovich’s survey, however, and agree with his proposed separation. I do find “dysrationalia” non-euphonic.