McCrone makes many pregnant statements that provoke me to comment—thus the length of this note. I often disagree but I seldom find the claims irrelevant.
He reviews the roots of the early 20th century avoidance of consciousness in brain science. He claims that brain scans resurrected the subject and perhaps he is right. I agree with McCrone that to know where some mental activity occurs is not to understand it, and may indeed not contribute much to understanding. It may, however, serve to impugn some theories. While we agree that location of a faculty does not constitute understanding, we are far apart on what does.
McCrone seems skeptical of reductionism. I gather that he thinks that reductionists feel that they are finished when they have described the stuff at the bottom. Certainly this is insufficient for the brain scientist, the computer scientist, or the physicist, for that matter. I have always presumed that the reductionist merely requires, or aspires to, an understanding of the various abstractions and how they are constructed upon simpler abstractions, down to ‘the bottom’. (See my note.)
I find it necessary to doubt some of the premises that emerge at the end of page 12. (I intend this as a compliment to the book for identifying primitive notions.) I suspect that categories are evolutionarily prior to symbols. Animals have categories, but symbols are necessary only for communication and are thus concomitant with language. Perhaps symbols are useful in some sorts of consciousness. Indeed it may be wrong to characterize computers as being always symbolic. I think that a computer programmed to be an autopilot is not symbolic. Perhaps this is only quibbling with words. Early computers dealt only with digits, not letters.
“computer technology had been born out of an idealization of human thought patterns”. A wonderful claim. Perhaps by idealization he means those parts of our thought that we have evolved to be aware of. I think that would be about right.
McCrone compares computers and brains in many ways and places. It is seldom clear whether by ‘computer’ he means computers with programs or without. Many of his characterizations would be correct if he specified ‘computers as typically programmed’. Some characterizations seem to deny any program whatsoever. This of course broaches the question of whether his brains have ‘programs’ in any of several possible senses. He seems to select his analogies to show the differences rather than the suggestive similarities. All of these questions bear on what is meant by reductionism.
On page 53 McCrone provides a very good description of the chemistry of neurons—better than I have seen in other popular works. I cannot vouch for the accuracy but I can for the clarity and relevance. Still there are topological issues I have never heard posed anywhere: Is the axon unidirectional? If so, how does chemistry achieve this?
The book is good because it stimulates many questions. He describes the complex neuronal chemistry and hints that that complexity may be germane to the brain’s higher functions. He is careful so far to avoid claiming this. Clearly there are some blood hormones that affect brain function adaptively. Whether there are a few more than we know, or thousands more, will bear on the issue of building AIs that function as we do. I optimistically suspect that there are only a few.
Whether nature has been able to exploit the chemical complexity found in neurons is a subtext so far in the book. Unidirectional axons might require complex chemistry. Computer engineers solve these problems in other complex ways. I speculate that, like the lens of the eye, boolean logic is the only good design, and that nature can at best approximate it. I grant that this is wishful thinking. It seems clear that the brain is not what computer engineers would call ‘clocked’. Neither are some computers. But even unclocked computers are not a good model of the brain at the next higher level of abstraction.
On page 54 McCrone suggests of neurons that “They appear to thrive on being fluid.” I can imagine no advantage of innate variability over a crude random number generator. Indeed I see few advantages whatsoever. There are computer strategies sometimes called ‘annealing’ where something like variability is used to get unstuck from local maxima. In computers random number generators provide this variability for annealing, Monte Carlo and a few other computer strategies. I don’t suggest that the brain uses random number generators, merely that such generators fully provide whatever advantage the brain’s inherent non-determinism may confer.
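To make the annealing point concrete, here is a minimal sketch in C; the energy landscape and the cooling schedule are placeholders of my own invention, not anything from the book. The standard library’s crude rand() supplies all the variability needed to shake the search out of local traps.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* A made-up 1-D energy landscape with many local minima. */
static double energy(double x) { return x * x + 3.0 * cos(5.0 * x); }

int main(void) {
    double x = 4.0;                                 /* arbitrary start */
    for (double T = 2.0; T > 0.001; T *= 0.999) {   /* cooling schedule */
        double x2 = x + ((double)rand() / RAND_MAX - 0.5);
        double dE = energy(x2) - energy(x);
        /* Downhill moves are always taken; uphill moves with
           probability exp(-dE/T). The occasional uphill step is
           exactly what frees the search from a local trap. */
        if (dE < 0 || (double)rand() / RAND_MAX < exp(-dE / T))
            x = x2;
    }
    printf("settled near x = %f, energy %f\n", x, energy(x));
    return 0;
}
```

The generator’s determinism is invisible here; any source of well-mixed bits serves.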
This article seems apropos, where they speak of a spectrum of how ‘focused’ the brain is. Both ends of the spectrum are useful. When carrying out a complex detailed plan one needs focus and insulation from distraction. Other times call for free association and ‘wild ideas’, which would interfere with following a detailed plan. This in turn fits with Kanerva’s ideas. McCrone gets to such issues on page 175.
Page 58: “Being inherently predictable, a computer can only pretend to be basing its calculations on unpredictable or continuously varying processes.” Arrgh. I don’t know what ‘pretend’ means here. Computers and brains do nothing but pretend! A simulation of a storm is not a storm but a simulation of a computation of π actually computes π. In the latter case the simulation is in the same category as the thing simulated. Brains and computers are of this latter category. See this. This is not to deny that the evolution of the brain has retained some of the analog nature of its origins; it certainly has, and in some cases may have found a better solution than any purely digital version. Navigating to Mars is a strictly analog problem which digital computers do just fine! I would challenge any natural or man-made analog instrument to do as well.
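To illustrate the π claim with a toy of my own (the Leibniz series; any convergent series would do): the program below simulates nothing. The digits it prints are, to the accuracy of the arithmetic, digits of π itself.

```c
#include <stdio.h>

/* Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... */
int main(void) {
    double sum = 0.0;
    for (long k = 0; k < 10000000; k++)
        sum += (k % 2 == 0 ? 1.0 : -1.0) / (2 * k + 1);
    printf("pi ~= %.6f\n", 4.0 * sum);  /* pi's digits, not a picture of them */
    return 0;
}
```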
I was around, kibitzing with a few who were doing early weather codes. Edward Teller made the comment, no later than 1957, that either weather was chaotic, in which case it was controllable, or not chaotic, in which case it was predictable. Not many knew of Poincaré’s speculations; certainly I didn’t. Poincaré had begun to suspect that even the planets, over exceedingly long periods of time, might have chaotic orbits. No one who knew of Poincaré’s earlier speculations would have been surprised that some analytic processes were so sensitive to initial inputs that predictions would be good for only limited time intervals; it was a bit of a shock, however, to hear crisp definitions and theorems to that effect. Chaos had been lore with no name and little expectation of becoming a discipline.
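A minimal illustration of that sensitivity, using the logistic map (my choice of example, not one from the book): two trajectories whose initial conditions differ by one part in ten billion agree for a while and then share nothing.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double a = 0.4, b = 0.4 + 1e-10;   /* nearly identical initial states */
    for (int t = 0; t <= 60; t++) {
        if (t % 10 == 0)
            printf("t=%2d  a=%.8f  b=%.8f  gap=%.2e\n", t, a, b, fabs(a - b));
        a = 3.9 * a * (1.0 - a);        /* logistic map, chaotic regime */
        b = 3.9 * b * (1.0 - b);
    }
    return 0;
}
```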
I think that the question McCrone is trying to ask, or should be asking, is whether the differences between neurons and computer gates allow brains to do things that computers can’t do. By “do” here I mean actually finish performing soon enough to be useful. On this criterion the notions delineated by universal Turing machines are irrelevant!
I am not impressed with the significance of chaos theory on understanding the brain.
I am not sure that McCrone is either; he gave it a good try.
I can’t imagine that people held any useful notion of brain determinism that was dislodged by chaos theory.
Even clockwork mechanisms that must respond to variable, unforeseeable input cannot be said to be predictable in practice.
Nothing of value, then, was lost to the legitimate insights of chaos.
(I think that x(t+1) = SHA(x(t)) is a deterministic chaotic function of t.)
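Here is a sketch of that parenthetical, with splitmix64 standing in for SHA since any avalanche-quality mixing function makes the same point: the iteration is perfectly deterministic and replayable, yet adjacent seeds yield utterly unrelated trajectories.

```c
#include <stdio.h>
#include <stdint.h>

/* splitmix64, standing in here for SHA: a deterministic bit-mixer. */
static uint64_t mix(uint64_t x) {
    x += 0x9e3779b97f4a7c15ULL;
    x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
    x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
    return x ^ (x >> 31);
}

int main(void) {
    uint64_t a = 1, b = 2;                    /* adjacent seeds */
    for (int t = 0; t < 5; t++) {
        printf("t=%d  a=%016llx  b=%016llx\n", t,
               (unsigned long long)a, (unsigned long long)b);
        a = mix(a);                            /* x(t+1) = hash(x(t)) */
        b = mix(b);
    }
    return 0;
}
```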
There he goes again on page 106. He claims that computers are not plastic because, I suppose, the factory wiring does not change after manufacture. On the other hand, when most C programs call malloc they are dynamically allocating real hardware (in RAM) to some function. RAM is fungible, even more so than brain cells! In this case his computer analogs have no programs. He was careful to note the inappropriateness of studying anesthetized animals to understand brains; a computer stripped of its program is similarly anesthetized. Give the computers ordinary programs and they are more nearly like the brain. Computer designers occasionally consider incorporating fragments of program behavior into hardware. Such changes hardly modify what it is to be a computer. Would McCrone and friends think a computer with hardwired malloc to be more brain-like? I hope not. Perhaps even more compelling is to note that an ordinary electronic gate routes signals to different destinations, depending on other signals. Is this not plasticity in every sense?
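A minimal demonstration of that fungibility; the ‘retina’ and ‘sentence’ roles are of course my own contrivance:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* The same physical RAM first serves as a table of visual samples... */
    double *retina = malloc(64 * sizeof *retina);
    for (int i = 0; i < 64; i++) retina[i] = i * 0.5;
    printf("retina[10] = %.1f at %p\n", retina[10], (void *)retina);
    free(retina);

    /* ...and a moment later may hold text for an unrelated task; the
       allocator redeploys the cells with no factory rewiring. */
    char *sentence = malloc(64);
    strcpy(sentence, "the same bytes, reassigned");
    printf("\"%s\" at %p\n", sentence, (void *)sentence);
    free(sentence);
    return 0;
}
```

(Whether the second allocation lands on the very same bytes is up to the allocator, but it frequently does.)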
Page 131: on delayed consciousness: “We only feel we are there as events happen.” How else could we possibly feel? That’s how you feel as you watch a 20-year-old movie. I think I am agreeing with McCrone.
On page 147 McCrone tries to describe the conundrum of when neural things happen. I think some recent computer engineering might illuminate here. Most modern machines provide the ‘illusion’ to the program that one instruction finishes before the next starts. This has been only an illusion in some machines since 1960. By illusion here I mean that the semantics of the hardware are as if execution were indeed serial; the theory that relates a program to the hardware is in terms of this illusion. The real activities within the machine carry extra information, a ‘timestamp’ if you will, relating them to the other real activities. If such an illusion in the brain were adaptive I have no doubt that evolution could provide it.
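A toy sketch of that illusion, assuming nothing about any particular machine: operations complete out of order, each carrying its program-order tag, and the visible effects are forced back into serial order.

```c
#include <stdio.h>

/* Toy reorder buffer: each operation carries its program-order tag.
   Completion happens out of order; retirement (the visible effect)
   is forced back into tag order, preserving the serial illusion. */
struct op { int tag; int done; const char *what; };

int main(void) {
    struct op rob[4] = {
        {0, 0, "load"}, {1, 0, "multiply"}, {2, 0, "add"}, {3, 0, "store"}
    };
    int completion[4] = {2, 0, 3, 1};    /* hardware finishes in this order */
    int next_retire = 0;
    for (int i = 0; i < 4; i++) {
        rob[completion[i]].done = 1;
        printf("completed out of order: %s\n", rob[completion[i]].what);
        /* retire strictly in program order, as far as completions allow */
        while (next_retire < 4 && rob[next_retire].done)
            printf("  retired in order:   %s\n", rob[next_retire++].what);
    }
    return 0;
}
```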
On page 148 McCrone invokes the programmer for the first time. He correctly notes that planning ahead is unnatural. There was a decade when it was highly strategic and much practiced for production programs. He seems to want to say that needing a programmer to arrange this makes it ‘unnatural’ for the computer. What is the analog of the programmer for the brain? The whole analogical exercise is to compare an evolved instrument with an engineered one. Need we exclude the programmer in this case? A conundrum. It may be that ‘overlapped execution’ serves the brain and the computer analogously: it ameliorates slow circuitry.
Page 153: “The computer model suggests that the brain is an inert lump of circuits awaiting input.” Where did that come from? I am astounded by the notion of the computer that he has, or ascribes to neurologists.
Talk of an unstimulated brain, as in sensory deprivation, needs to be squared with realizations that occur unrelated to any current stimulus.
I am unhappy with this chapter. Here he first tries to bridge the gap between neuroscience and psychology. It reads more like poetry, trying to evoke a feeling of what it is like to be conscious using vague words with occasional anatomical terms thrown in. The poetry does not do it for me. In this section there is little mention of evidence for such associations. Perhaps previous chapters are meant to provide such evidence. Such a bridge would be satisfying but I suspect that it is yet impossible. Perhaps intermediary concepts would help, but I am skeptical. This and most other writing on the brain use anatomical terms, seemingly assuming that the reader has an image of how these areas relate geometrically. It might help to have a brain atlas that conveys proximity and nerve pathways. The many 2D drawings of the brain that I have studied have not left me with an adequate mental 3D image of the brain. My lack of such a mental atlas may account for my lack of a feeling of coherence in the anatomical detail given in the book.
There comes a time to throw up your hands and declare an illusion. Illusions, indeed hallucinations, are real and require an explanation, as distinct from trying to explain some sense in which the hallucination is true. Until you decide that an illusion is a hallucination it is a delusion, even if it is the illusion of consciousness.
Page 206: McCrone speaks of forming long-term memories. This account rings true, not thru short-term subjective experience but thru contemplation of my own long-term memories formed shortly after learning of some momentous event. Speaking for myself, I note that these memories are images captured in the minutes following a revelation, not the seconds following. The formation may only follow the realization of the significance of the event.
This chapter seems highly familiar subjectively and the references to the thalamus plausible. It is not necessary for a correct brain theory to feel right subjectively, but it lends credibility. The description of interruptions sounds very much like the computer facility of the same name. The computer has been carefully designed (at least since about 1960) so as to be able to move the state of a computation to ‘long term storage’ in a way that the computation can be reliably resumed. It appears that the brain is unable to do this but may be able to switch consciousness to other concerns temporarily. Resuming the interrupted activity is hit or miss. From the perspective of a computer kernel designer these issues make the brain sound like a multiprogrammed computer.
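A minimal sketch of the computer’s trick, with a toy two-word ‘context’ standing in for the full process state a real kernel would save:

```c
#include <stdio.h>

/* Toy 'process context': everything needed to resume a computation. */
struct context { long counter; long accumulator; };

/* Run a few steps of a computation, stoppable and resumable. */
static void run(struct context *c, int steps) {
    while (steps-- > 0) { c->accumulator += c->counter; c->counter++; }
}

int main(void) {
    struct context job = {1, 0}, saved;
    run(&job, 5);                          /* interrupted mid-computation */
    saved = job;                           /* kernel saves the complete state */
    job.counter = 0; job.accumulator = 0;  /* the interrupting work clobbers it */
    job = saved;                           /* restore: resumption is exact */
    run(&job, 5);
    printf("sum of 1..10 = %ld\n", job.accumulator);  /* 55, as if never interrupted */
    return 0;
}
```

The brain, by contrast, seems to lack any such snapshot; resumption depends on reconstructing the state from cues.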
Page 261: “It was also becoming agreed that to find its way to the best view of the moment, the brain had to be able to handle both plans and interruptions.” Upon interruption some part of the brain must be reallocated from following the plan to whatever reaction the interrupt requires. Just what part is this? It is the part capable of carrying out plans. A plan is mightily like a program. This sounds very much like a conventional computer, both as to plans, which are highly analogous to repetitively executed instructions, and as to interrupts, where such instruction sequences are terminated. The computer is special here because it does a better job of resuming the plan!
I think that how the brain handles the hypothetical is related to the subject of this chapter and that it is an important brain faculty. It is surely somewhat like the forking problem and probably shares some brain mechanism.
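Unix happens to provide a mechanism for exactly this kind of exploration; the sketch below, my own illustration and not McCrone’s, forks a cheap copy of the state, lets the copy work through the hypothetical, and reports a verdict without disturbing the original.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int world_state = 42;
    pid_t pid = fork();                  /* duplicate the whole state cheaply */
    if (pid == 0) {
        /* child: explore the hypothetical, freely mutating its copy */
        world_state += 100;
        printf("hypothetically, state would become %d\n", world_state);
        exit(world_state > 100 ? 0 : 1); /* report the verdict */
    }
    int status;
    waitpid(pid, &status, 0);
    printf("actual state untouched: %d; the hypothesis %s\n",
           world_state, WEXITSTATUS(status) == 0 ? "holds" : "fails");
    return 0;
}
```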
The paragraph beginning at the end of page 271 describes a dilemma like that of running weather-prediction code. Such code must run several times as fast as real time to be of any predictive value. Because of errors in initial data and incomplete coverage, as well as chaotic behavior of the equations, the simulation will depart from the observed weather. What should be done with new weather reports that arrive while the simulation is in progress?
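A crude sketch of one answer; the toy model and the blending weight are my inventions, standing in for real data assimilation: nudge the running state toward the late-arriving observation rather than discard hours of computation.

```c
#include <stdio.h>

/* Crude stand-in for a forecast model: one state variable stepped forward. */
static double step(double state) { return state + 0.1 * (20.0 - state); }

int main(void) {
    double forecast = 15.0;               /* model state, hours ahead of real time */
    for (int hour = 1; hour <= 12; hour++) {
        forecast = step(forecast);
        if (hour == 6) {
            /* a fresh observation arrives mid-run; blend it into the state
               rather than discarding six hours of computation */
            double observation = 14.2;
            forecast = 0.7 * forecast + 0.3 * observation;
            printf("hour %2d: assimilated observation, state now %.2f\n", hour, forecast);
        } else {
            printf("hour %2d: forecast %.2f\n", hour, forecast);
        }
    }
    return 0;
}
```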
On page 293: “A computer memory is made up of discrete bits that can be picked up and shuffled about, but in the brain the memory is embedded.”
McCrone compares bottom-level machine semantics with near-top-level brain semantics.
No wonder he misses the analogies.
Indeed in what precise sense are the memories of the brain embedded?
His psychological notions of language seem good, but his account of how they come about seems entirely muddled.
On the same page: “ - it [an animal] has no mechanism to fetch and replay arbitrary chunks of data.”
See this note about a rat learning to navigate a maze which is just such a series.
Here is my interpretation.
In the next paragraph: “Words, however, allow us to treat our brains as digital warehouses. We cannot shift the data — that always has to stay in place — but we can use words to trick the brain into making a shift in its point of view, to open up an angle into an area of experience.” I think that this is accurate. I think that linkage by words is a secondary mechanism, evolved perhaps along with consciousness and language. When I hum a tune I do not use words to recall what comes next; ‘I just know’ by some unconscious link from the current musical phrase to the next. As I walk home my turns are triggered by clues which I have no name for and could not tell to a friend. This looser linkage is clearly more primitive and probably more efficient. It is also entirely private, where words are not. Words connect cultural memes together in a somewhat shared structure.
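A toy contrast between the two linkages, with phrases and word keys that are my own stand-ins: direct successor links are private and need no names, while word keys are shared and open arbitrary entry points.

```c
#include <stdio.h>
#include <string.h>

/* Two ways to reach a phrase of a tune. */
struct phrase { const char *notes; int next; };

int main(void) {
    /* Private, wordless linkage: each phrase carries a direct link
       to its successor, like humming one's way forward. */
    struct phrase tune[3] = {
        {"c d e", 1}, {"e f g", 2}, {"g f e c", -1}
    };
    for (int i = 0; i != -1; i = tune[i].next)
        printf("hummed: %s\n", tune[i].notes);

    /* Word-indexed access: a shared, public key opens an arbitrary
       point of view into the same material. */
    const char *keys[3] = {"opening", "middle", "close"};
    const char *want = "middle";
    for (int i = 0; i < 3; i++)
        if (strcmp(keys[i], want) == 0)
            printf("recalled by word '%s': %s\n", want, tune[i].notes);
    return 0;
}
```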
I agree with McCrone that language is old and has co-evolved with many of our other unique faculties.
I am disappointed that McCrone missed the opportunity to explore grammar further. There is more to grammar than subject-verb-object order. Chomsky’s insight was that languages have a complex grammar which is consistent at some level across languages. Surely the future and past tenses are related to our escape from the present. Our subjunctive form speaks of what might be, now, in the past, or perhaps the future. Counterfactuals are the root of planning to collectively change the world. How did they evolve?
I now suppose that McCrone’s notion of a computer is approximately the notion built up by the occasional computer user of the 1950s thru the 1980s, as computers were commonly programmed then. This is reasonable, for the book is about neurological workers who were mostly computer users in those days. I had hoped that he was concerned with the question “is 21st-century digital circuitry, as done in Silicon Valley today, suitable as a substrate for brain stuff?”. My occasionally hostile tone in this long note is perhaps due to this misunderstanding. I am aware of philosophical dispute on whether boolean logic, which underpins modern computer theory, is suitable for brain-like behavior. This question is not McCrone’s focus.
This article mocks the ‘science’ of synapse mechanics and locating brain function as a means to understanding the brain. It favors a more ‘psychological’ style of theory. I largely agree. McCrone stands midway between the synapse stance and the psychological stance and tries to bridge them. I think he is closing in but far from done.
Ultimately I am still a reductionist but probably not in McCrone’s sense. It is clear that the brain challenges reductionistic methodologies.
I think that McCrone abandons my sense of ‘understanding the brain’, which I suppose he would consider ‘reductionist’. I don’t feel that I understand an information process until I can express it as a computer program. I have heard claims that duplicating the brain with code is impossible; McCrone claims no such thing, but he does seem to advocate turning away from that endeavor. Ramachandran, by contrast, tries to understand the brain in a way that seems to me to lead towards implementing it digitally.
Despite all of this I think that the book has material that bears on understanding the brain, even in my sense. McCrone provides evidence that is new to me and that reductionists must consider.
Making a brain is not a project that you can run thru today’s industrial software assembly line. Making a brain may well require somewhat more hardware than we currently assemble, but not inconceivably more. I suspect that current hardware directions will suffice, but perhaps not. I see no evidence that clocked boolean logic is insufficient.