John McCrone’s “Going Inside”

There is no introduction or preface to this book, at least by name. Such sections frequently bore me, so the omission is perhaps strategic. In the first section the author identifies himself as a science reporter. As such he has a degree of freedom to speculate that official scientists often feel they lack.

McCrone makes many pregnant statements that provoke me to comment—thus the length of this note. I often disagree but I seldom find the claims irrelevant.

He reviews the roots of the early 20th century avoidance of consciousness in brain science. He claims that brain scans resurrected the subject and perhaps he is right. I agree with McCrone that to know where some mental activity occurs is not to understand it, and may indeed not contribute much to understanding. It may, however, serve to falsify some theories. While we agree that location of a faculty does not constitute understanding, we are far apart on what does.

McCrone seems skeptical of reductionism. I gather that he thinks that reductionists feel they are finished when they have described the stuff at the bottom. Certainly that is insufficient for the brain scientist, the computer scientist, or the physicist, for that matter. I have always presumed that the reductionist merely requires, or aspires to, an understanding of various abstractions and how they are constructed from simpler abstractions, down to ‘the bottom’. (See my note.)

Terminology

McCrone uses some words which I find murky. “Bottleneck” is a pejorative term in computer design describing some feature of the hardware that limits the capacity of the system. McCrone’s bottlenecks seem more like what the computer scientist would call an abstraction, which is a good thing: it amplifies the sorts of things a system can do and also aids understanding of that system. I still have little feel for what he means by “dynamic”. “Map” and “mapping” are often used in an unfamiliar way.

Running Commentary

On page 11 McCrone has Friston say that the computer model of the brain has it that states of consciousness are each the result of some stimulus—as if the computer had no previous state that bore on the new state. Computers seldom work like that. The computer’s state evolves with successive stimuli just as the brain’s state does. This evolving state is largely memory, a subject too often ignored by brain theories.

I find it necessary to doubt some of the premisses that emerge at the end of page 12. (I intend this as a compliment to the book for identifying primitive notions.) I suspect that categories are evolutionarily prior to symbols. Animals have categories, but symbols are necessary only for communication and are thus concomitant with language. Perhaps symbols are useful in some sorts of consciousness. Indeed it may be wrong to characterize computers as being always symbolic. I think that a computer programmed to be an autopilot is not symbolic. Perhaps this is only quibbling with words. Early computers dealt only with digits, not letters.

“computer technology had been born out of an idealization of human thought patterns”. A wonderful claim. Perhaps by idealization he means those parts of our thought that we have evolved to be aware of. I think that would be about right.

Brain—Computer contrast

On page 50 he begins to contrast computers and brains. He makes distinctions that seem irrelevant to me and omits others that I think are vital.
Determinacy:
I know of no disadvantage that a deterministic computer with a pseudo-random number generator has relative to an indeterminate digital system. I see no disadvantage for such a computer in making a living and surviving in some universe.
Turing Completeness
has to do with what you can compute in unbounded space and time. That is neither necessary nor sufficient for what either brains or real computers can do.
Analog vs. Digital
First, there are at least two sorts of distinction to be made here: continuous time, and continuous signal levels. The observed rhythms of the brain are probably not clocks as computer designers define them. It seems as if the brain is unclocked. Some engineers build computers without clocks and aspire for them to behave as clocked systems, only faster. Signal levels in almost all computers are indeed 0 or 1 at the bottom. The next abstraction level is very often a binary number of at least several bits. Computers do indeed deal in gray.
More than once I have heard of someone reporting analog behavior that transcends digital behavior. Each time it turns out that the ‘analog system’ was actually simulated on an ordinary digital computer.

McCrone compares computers and brains in many ways and places. It is seldom clear whether by ‘computer’ he means computers with programs or without. Many of his characterizations would be correct if he specified ‘computers as typically programmed’. Some characterizations seem to deny any program whatsoever. This of course broaches the question of whether his brains have ‘programs’ in any of several possible senses. He seems to select his analogies to show the differences rather than the suggestive similarities. All of these questions bear on what is meant by reductionism.

On page 53 McCrone provides a very good description of the chemistry of neurons—better than I have seen in other popular works. I cannot vouch for the accuracy but I can for the clarity and relevance. Still there are topological issues I have never heard posed anywhere: Is the axon unidirectional? If so, how does chemistry achieve this?

The book is good because it stimulates many questions. He describes the complex neuronal chemistry and hints that that complexity may be germane to the brain’s higher functions. He is careful so far to avoid claiming this. Clearly there are some blood hormones that affect brain function adaptively. Whether there are a few more than we know, or thousands more, will bear on the issue of building AIs that function as we do. I optimistically suspect that there are only a few.

Whether nature has been able to exploit the chemical complexity found in neurons is a subtext so far in the book. Unidirectional axons might require complex chemistry. Computer engineers solve these problems in other complex ways. I speculate that like the lens of the eye, boolean logic is the only good design and that nature can at best approximate this. I grant that this is wishful thinking. It seems clear that the brain is not what computer engineers would call ‘clocked’. Neither are some computers. But even unclocked computers are not a good model of the brain at the next higher level of abstraction.

On page 54 McCrone suggests of neurons that “They appear to thrive on being fluid.” I can imagine no advantage of innate variability over a crude random number generator. Indeed I see few advantages whatsoever. There are computer strategies, sometimes called ‘annealing’, where something like variability is used to get unstuck from local maxima. In computers, random number generators provide this variability for annealing, Monte Carlo, and a few other strategies. I don’t suggest that the brain uses random number generators, merely that such generators completely overcome any advantage of the brain’s inherent non-determinism.
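A minimal sketch of the annealing idea, with an objective function and parameters invented purely for illustration. A seeded pseudo-random generator supplies all of the ‘variability’ needed to escape a local optimum, so no genuine non-determinism is required:

```python
import math
import random

def anneal(f, x0, steps=20000, t0=2.0, seed=1):
    """Simulated annealing on one variable. A *seeded* PRNG supplies
    all the variability, so every run is fully deterministic."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for i in range(steps):
        t = t0 * (1.0 - i / steps) + 1e-9      # cooling schedule
        cand = x + rng.gauss(0.0, 0.5)         # random perturbation
        fc = f(cand)
        # Always accept improvements; accept uphill moves with a
        # Boltzmann probability -- this is what gets the search unstuck.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# Local minimum near x = -1 (f about +0.5); global near x = +1 (f about -0.5).
def f(x):
    return (x - 1.0) ** 2 * (x + 1.0) ** 2 - 0.5 * x

x, v = anneal(f, x0=-1.0)   # start in the *wrong* basin
```

Starting at the local minimum, the random perturbations should carry the search over the barrier near x = 0, so the best value found is the global one near x = +1.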

This article seems apropos where they speak of a spectrum of how ‘focused’ the brain is. Both ends of the spectrum are useful. If carrying out a complex detailed plan one needs focus and to be insulated from distraction. Other times require free association ‘wild ideas’ which would interfere with following a detailed plan. This in turn fits with Kanerva’s ideas. McCrone gets to such issues on page 175.

Page 58: “Being inherently predictable, a computer can only pretend to be basing its calculations on unpredictable or continuously varying processes.” Arrgh. I don’t know what ‘pretend’ means here. Computers and brains do nothing but pretend! A simulation of a storm is not a storm, but a simulation of a computation of π actually computes π. In the latter case the simulation is in the same category as the thing simulated. Brains and computers are of this latter category. See this. This is not to deny that the evolution of the brain has retained some of the analog nature of its origins; it certainly has, and in some cases may have found a better solution than any purely digital version. Navigating to Mars was a strictly analog problem which digital computers handled just fine! I would challenge any natural or man-made analog instrument to do as well.

Ugly Questions about Chaos

McCrone’s history of Chaos is interesting and accurate as far as I know. Poincaré, in the late 19th century, discovered analytic systems and deduced properties of them that modern theory would characterize as chaotic. McCrone, and many others, speak as if Chaos theory were an empirical result from the laboratory. Instead it is a rigorous mathematical result based on the form of the physical laws that physics has used at least since Newton. The mathematical game of differential equations was logically found to lead to chaos in circumstances that included most of the equations that physicists had found to describe the world.

I was around, kibitzing with a few who were doing early weather codes. Edward Teller commented, no later than 1957, that either weather was chaotic, in which case it was controllable, or not chaotic, in which case it was predictable. Not many knew of Poincaré’s speculations; certainly I didn’t. Poincaré had begun to suspect that even the planets, over exceedingly long periods of time, might have chaotic orbits. No one who knew of Poincaré’s earlier speculations would have been surprised that some analytic processes were so sensitive to initial inputs that predictions would be good for only limited time intervals; it was a bit of a shock, however, to hear crisp definitions and theorems to that effect. Chaos had been lore with no name and little expectation of becoming a discipline.
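The sensitivity in question is easy to exhibit with the logistic map, a standard textbook example (my illustration, not the book’s): two deterministic trajectories that begin within 10^-10 of one another agree for a while and then decorrelate completely.

```python
def logistic_orbit(x0, n, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), a standard
    deterministic system that is chaotic at r = 4."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 100)
b = logistic_orbit(0.2 + 1e-10, 100)
diffs = [abs(x - y) for x, y in zip(a, b)]
# Early on the orbits agree to many digits; the gap roughly doubles
# each step, so prediction is good only for a limited interval.
```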

I think that the question McCrone is trying to ask, or should be asking, is whether differences between neurons and computer gates allow brains to do things that computers can’t do. By “do” here I mean actually finish performing soon enough to be useful. On this criterion, notions delineated by universal Turing machines are irrelevant!

I am not impressed with the significance of chaos theory for understanding the brain. I am not sure that McCrone is either; he gave it a good try. I can’t imagine that people held any useful idea of brain determinism that was dislodged by chaos theory. Even clock-work mechanisms required to respond to variable, unforeseeable input cannot be said to be predictable in practice. I conclude that nothing of value was lost to the legitimate insights of chaos.
(I think that x_{t+1} = SHA(x_t) defines a deterministic chaotic function of t.)
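That conjecture is easy to play with, here using SHA-256 as the hash. The iteration is perfectly deterministic, the same seed always yielding the same trajectory, yet a small change in the seed scrambles everything, much like sensitive dependence on initial conditions:

```python
import hashlib

def sha_orbit(seed: bytes, n: int) -> bytes:
    """Iterate x_{t+1} = SHA256(x_t) starting from seed."""
    x = seed
    for _ in range(n):
        x = hashlib.sha256(x).digest()
    return x

# Deterministic: the same seed always yields the same state.
same = sha_orbit(b"seed", 10) == sha_orbit(b"seed", 10)
# 'Sensitive': a one-character change in the seed leaves nothing
# recognizable; on average about half of the 256 output bits differ.
a = sha_orbit(b"seed-a", 10)
b = sha_orbit(b"seed-b", 10)
bits_differing = bin(int.from_bytes(a, "big") ^ int.from_bytes(b, "big")).count("1")
```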

A Dynamic Computation

In marveling over the brain’s plasticity McCrone seems to miss a vital insight, or at least a conjecture—morphogenesis and plasticity both require solving the same problem of allocating nerves to function. This is a very hard problem and warrants respect. Supposing that evolution solved it twice seems unnecessary. The notion that the DNA provides a complete detailed map violates both information theory and the canons of any detailed morphogenetic mechanism. Thinking about the wiring of the optic nerve is instructive. A continuous map from the retina to the cortex is practically necessary, since edge detection works only when nearby cortex cells process nearby retina data. Just about any mechanism that gets that map right should easily provide plasticity. (He backs into these ideas on page 187.)

There he goes again on page 106. He claims that computers are not plastic because, I suppose, the factory wiring does not change after manufacture. On the other hand, when most C programs call malloc they are dynamically allocating real hardware (in RAM) to some function. RAM is fungible, even more so than brain cells! In this case his computer analogs have no programs. He was careful to note the inappropriate use of anesthetized animals in understanding brains. Give the computers ordinary programs and they are more nearly like the brain. Computer designers occasionally consider incorporating fragments of program behavior into hardware. Such changes hardly modify what it is to be a computer. Would McCrone and friends think a computer with hardwired malloc more brain-like? I hope not. Perhaps even more compelling is to note that an ordinary electronic gate routes signals to different destinations, depending on other signals. Is this not plasticity in every sense?

Page 131: on delayed consciousness: “We only feel we are there as events happen.” How else could we possibly feel? That’s how you feel as you watch a 20-year-old movie. I think I am agreeing with McCrone.

On page 147 McCrone tries to describe the conundrum of when neural things happen. I think some recent computer engineering might illuminate here. Most modern machines provide the ‘illusion’ to the program that one instruction finishes before the next starts. This has been only an illusion in some machines since 1960. By illusion here I mean that the semantics of the hardware are as if execution were indeed serial; the theory that relates a program to the hardware is in terms of this illusion. The real activities within the machine carry extra information, a ‘timestamp’ if you will, relating them to the other real activities. If such an illusion in the brain were adaptive I have no doubt that evolution could provide it.
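A toy sketch of that illusion (an invented example, modeled on no particular machine): operations complete out of program order according to their latencies, yet they ‘retire’, becoming architecturally visible, strictly in program order, which is the only order the program can observe.

```python
def execute(program):
    """program: list of (name, latency). Ops issue one per cycle.
    Returns the order ops actually complete in, and the in-order
    retirement times the program 'sees'."""
    finish = {i: i + lat for i, (_, lat) in enumerate(program)}
    # Real completion order: whoever finishes first, finishes first.
    complete_order = [program[i][0] for i in sorted(finish, key=finish.get)]
    # Architectural order: op i may not retire before ops 0..i-1,
    # preserving the illusion of strictly serial execution.
    retired, t = [], 0
    for i, (name, _) in enumerate(program):
        t = max(t, finish[i])        # wait for all predecessors
        retired.append((name, t))
    return complete_order, retired

prog = [("load", 5), ("add", 1), ("store", 1)]
complete_order, retired = execute(prog)
# add and store finish before the slow load does, but retirement
# holds them back until the load retires.
```

The timestamps on the retired list are the ‘extra information’ relating real activities to one another; the program itself sees only the serial fiction.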

On page 148 McCrone invokes the programmer for the first time. He correctly notes that planning ahead is unnatural. There was a decade when it was highly strategic and much practiced for production programs. He seems to want to say that needing a programmer to arrange this makes it ‘unnatural’ for the computer. What is the analog of the programmer for the brain? The whole analogical exercise is to compare an evolved instrument with an engineered one. Need we exclude the programmer in this case? A conundrum. It may be that the advantages of ‘overlapped execution’ in the brain and the computer are highly analogous—in both cases it ameliorates slow circuitry.

Page 153: “The computer model suggests that the brain is an inert lump of circuits awaiting input.” Where did that come from? I am astounded by the notion of the computer that he has, or ascribes to neurologists.

A Moment of Anticipation

Page 156: What McCrone describes as ‘Anticipation’ is referred to sometimes as ‘speculative execution’ in hardware or software. Recent emphasis on energy and heat has dampened moves towards speculative execution. The energy required to perform a computation decreases if you do it more slowly. Perhaps the brain does this as time and resources permit. Perhaps computers should too.

The Needs that Shape the Brain

Page 165: McCrone considers the suggestion that consciousness will be explained sometime with an Einstein-like breakthrough and then “Suddenly we will all be able to see what turned inanimate matter into thinking, experiencing flesh.” Einstein’s breakthrough didn’t enable very many to immediately see the new truth. It warranted its daunting reputation, and it was a generation before many physicists laid claim to the insights it offered. Likewise I fear that the brain breakthrough may not immediately enlighten all of us either.

Talk of an unstimulated brain, as in sensory deprivation, needs to be squared with realizations that occur that are unrelated to current stimuli.

Consciousness’s Twin Peaks

I am unhappy with this chapter. Here he first tries to bridge the gap between neuroscience and psychology. It reads more like poetry, trying to evoke a feeling of what it is like to be conscious using vague words with occasional anatomical terms thrown in. The poetry does not do it for me. In this section there is little mention of evidence for such associations. Perhaps previous chapters are meant to provide such evidence. Such a bridge would be satisfying but I suspect that it is yet impossible. Perhaps intermediary concepts would help, but I am skeptical. This and most other writing on the brain use anatomical terms seemingly assuming that the reader has an image of how these areas relate geometrically. It might help to have a brain atlas that conveys proximity and nerve pathways. The many 2D drawings of the brain that I have studied have not left me with an adequate mental 3D image of the brain. My lack of such a mental atlas may account for my lack of a feeling of coherence in the anatomical detail given in the book.

There comes a time to throw up your hands and declare an illusion. Illusions, indeed hallucinations, are real and require an explanation, in distinction to trying to explain some sense in which the hallucination is true. Until you decide that an illusion is a hallucination it is a delusion, even if it is the illusion of consciousness.

Page 206: McCrone speaks of forming long term memories. This account rings true, not thru short term subjective experience but thru contemplation of my own long term memories formed shortly after learning of some momentous event. Speaking for myself, I note that these memories are images captured in the minutes following a revelation, not the seconds following. The formation may only follow the realization of the significance of the event.

Of Sub-Cortical Bottlenecks

On page 232 McCrone suggests that noradrenaline is produced by a few thousand cells deep in the brainstem and is delivered on demand, within a few hundred milliseconds, thruout the brain via axons. This radically violates my hydrodynamic intuitions about stuff flowing thru micron-size pipes. The inside of an axon must surely be protected from swift currents lest its signal transmission be compromised.

The Brain’s Forking Pathway

This chapter seems highly familiar subjectively, and the references to the thalamus plausible. It is not necessary for a correct brain theory to feel right subjectively, but it lends credibility. The description of interruptions sounds very much like the computer facility of the same name. The computer has been carefully designed (at least since about 1960) so as to be able to move the state of a computation to ‘long term storage’ in a way that allows the computation to be reliably resumed. It appears that the brain is unable to do this but may be able to switch consciousness to other concerns temporarily. Resuming the interrupted activity is a hit or miss thing. From the perspective of a computer kernel designer these issues make the brain sound like a multiprogrammed computer.
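The computer side of this is easy to make concrete. A Python generator, standing in here as a deliberately simplified analog of a suspended process, carries its complete state, locals and resume position alike, so an interrupted computation resumes exactly where it left off; this is just what the brain seems unable to guarantee.

```python
def long_task():
    """A computation with explicit suspension points; all of its
    state (locals and resume position) lives inside the generator."""
    total = 0
    for i in range(10):
        total += i
        yield total    # suspension point

task = long_task()
before = [next(task) for _ in range(4)]   # run for a while...
# ... an 'interrupt' arrives: attend to something else entirely ...
unrelated = sum(range(1000))
# ... then resume: nothing about the suspended state was lost.
after = list(task)
```

A kernel does the same thing for whole processes, saving registers and memory maps instead of generator frames; the reliability of the resumption is the point of the analogy.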

Page 261: “It was also becoming agreed that to find its way to the best view of the moment, the brain had to be able to handle both plans and interruptions.” Upon interruption some part of the brain must be reallocated from following the plan to whatever reaction the interrupt requires. Just what part is this? It is the part capable of carrying out plans. A plan is mightily like a program. This sounds very much like a conventional computer, both regarding plans, which are highly analogous to repetitively executed instructions, and regarding interrupts, where such instruction sequences are terminated. The computer is special here because it does a better job of resuming the plan!

I think that how the brain handles the hypothetical is related to the subject of this chapter and that it is an important brain faculty. It is surely somewhat like the forking problem and probably shares some brain mechanism.

Getting It Backwards

Page 268: McCrone has Freeman say that the cognitive guys say that it’s just impossible to keep throwing everything you’ve got into the computation every time. It seems that McCrone has not done a good job of describing the paradigm whose overthrow he reports; the quote ascribed above perhaps does better in this direction. Hawkins brought to my attention the significance of anticipation here. Hawkins quoted evidence, largely repeated in this book, that we are frequently conscious of the departures of experience from the anticipated. McCrone weaves this phenomenon into an adaptive model of what the brain does in a way that relates indirectly to consciousness. No description of the mechanisms that produce this anticipation is given. One simple mechanism stems from Kanerva’s ideas, which easily reproduce sequences from experience, indeed easily produce contingent sequences. In the currently proposed environment several concurrent sequences are needed for the various areas we need to anticipate. A hardware designer would allocate a timeshared Kanerva box and several individual states, each tailored to some realm of experience. An alternative design would be for each realm to have its own segregated memory; this would predict that we cannot pay attention to two melodies at once since we have only one melody module. This suggests fMRI experiments on people listening to music and a conversation at the same time, with instructions to note grammatical or musical errors. See Rats, Cheese and memories.
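A toy stand-in for such a sequence mechanism (my hash-table caricature, not Kanerva’s actual sparse distributed memory, which uses high-dimensional vectors): a memory keyed on the last k items suffices to replay a learned sequence from a short cue.

```python
class SequenceMemory:
    """Learns which item follows each context of k previous items,
    then replays a learned sequence from a cue. The context makes
    continuation contingent: the same item in different contexts
    can lead to different successors."""
    def __init__(self, k=2):
        self.k = k
        self.successor = {}

    def learn(self, seq):
        for i in range(len(seq) - self.k):
            self.successor[tuple(seq[i:i + self.k])] = seq[i + self.k]

    def replay(self, cue, length):
        out = list(cue)
        while len(out) < length and tuple(out[-self.k:]) in self.successor:
            out.append(self.successor[tuple(out[-self.k:])])
        return out

mem = SequenceMemory(k=2)
mem.learn("CCGGAAG")                 # opening notes of a nursery tune
melody = "".join(mem.replay("CC", 7))  # cue with the first two notes
```

Several such memories running concurrently, or one timeshared with per-realm state, would correspond to the two designs contrasted above.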

The paragraph beginning at the end of page 271 describes a dilemma which is like that of running weather prediction code. Such code must run several times as fast as real time to be of any predictive value. Because of errors in initial data and incomplete coverage, as well as chaotic behavior of the equations, the simulation will depart from the observed weather. What should be done with new weather reports that arrive while the simulation is in progress?

There were attempts to incorporate new weather observations into simulations already underway. Such plans seek to modify the conventional ‘Cauchy initial value’ formulation of partial differential equations as they are used to make predictions. I do not know whether such is current practice. Weather prediction with incorporation of new data would be like McCrone’s ‘dynamic mechanisms’.

Chapter Summary

This chapter presents a vague but useful model. It is again novel (to me) and subjectively consilient. I presume that McCrone is correct in his relating it to famous empirical psychological results. It is vague in its predictions and thus a very incomplete theory. Some good theories start this way. I think that any good brain theory must include some of the notions of this chapter in some form.

Post Script

I add this paragraph a couple of months after the rest of these notes. I find that my subjective view of my own mental activities has been modified by the ideas from this book. I see such activities now more in terms of abstract patterns swarming in my head—patterns without names and combining sometimes to form new patterns. I was distressed at first by this demeaning description of my mental activities. Then I realized that such descriptions also applied to very productive periods as in writing difficult computer programs.

The Ape that Spoke

On page 292 McCrone seems to deprecate Chomsky’s contributions but as best I can tell what he describes in this chapter sounds entirely Chomskian to me. I am impressed neither by Chomsky’s nor McCrone’s evolutionary story but McCrone tries harder.

On page 293: “A computer memory is made up of discrete bits that can be picked up and shuffled about, but in the brain the memory is embedded.”
McCrone compares the bottom level machine semantics with near top level brain semantics. No wonder he misses the analogies. Indeed, in what precise sense are the memories of the brain embedded? His psychological notions of language seem good, but his account of how they come about seems entirely muddled. On the same page: “ - it [an animal] has no mechanism to fetch and replay arbitrary chunks of data.” See this note about a rat learning to navigate a maze, which is just such a series. Here is my interpretation.

In the next paragraph: “Words, however, allow us to treat our brains as digital warehouses. We cannot shift the data — that always has to stay in place — but we can use words to trick the brain into making a shift in its point of view, to open up an angle into an area of experience.” I think that this is accurate. I think that linkage by words is a secondary such mechanism, evolved perhaps along with consciousness and language. When I hum a tune I do not use words to recall what comes next; ‘I just know’ by some unconscious link from the current musical phrase to the next. As I walk home my turns are triggered by clues which I have no name for and could not tell to a friend. This looser linkage is clearly more primitive and probably more efficient. Such links are also entirely private where words are not. Words connect cultural memes together in a somewhat shared structure.

I agree with McCrone that language is old and has co-evolved with many of our other unique faculties.

I am disappointed that McCrone missed the opportunity to explore grammar further. There is more to grammar than subject-verb-object order. Chomsky’s insight was that languages have a complex grammar which is consistent at some level across languages. Surely future and past tense are related to our escape from the present. Our subjunctive form speaks of what might be, now, in the past, or perhaps the future. Counterfactuals are the root of planning to collectively change the world. How did they evolve?

The Hard Question

Page 303: McCrone paraphrases Chalmers: “… to answer the hard question of why red feels red, and not blue.” I would answer: perhaps it does indeed feel blue to some of us, but those people learned to call it “red”. The whole notion of ‘illusion’ assumes a phenomenon of physical sense stimuli which, via the senses, imparts knowledge to the mind that is false. This spans the yet mysterious bridge between the neuronal and the psychological. Qualia, some of which are illusions, span the same bridge.

Summary

There is worthwhile new information in this book for me. McCrone tries to bridge the gap between the neurological and psychological views of the brain. For me he narrows the gap but does not bridge it.

I now suppose that McCrone’s notion of a computer is approximately the notion built by the occasional computer user of the 1950s thru 1980s, as commonly programmed then. This is reasonable, for the book is about neurological workers who mostly were computer users in those days. I had hoped that he was concerned with the question “Is 21st century digital circuitry, as done in Silicon Valley today, suitable as a substrate for brain stuff?” My occasionally hostile tone in this long note is perhaps due to this misunderstanding. I am aware of philosophical dispute on whether boolean logic, which underpins modern computer theory, is suitable for brain-like behavior. This question is not McCrone’s focus.

This article mocks the ‘science’ of synapse mechanics and locating brain function as a means to understanding the brain. It favors a more ‘psychological’ style of theory. I largely agree. McCrone is midway between the synapse stance and the psychological stance and tries to bridge them. I think he is closing in but far from done.

Ultimately I am still a reductionist but probably not in McCrone’s sense. It is clear that the brain challenges reductionistic methodologies.

I think that McCrone abandons my sense of ‘understanding the brain’, which I suppose he would consider ‘reductionist’. I don’t feel that I understand an information process until I can express it as a computer program. I have heard claims that duplicating the brain with code is impossible, but McCrone claims no such thing; he does, however, seem to advocate turning away from that endeavor. Ramachandran, by contrast, tries to understand the brain in a way that seems to me to lead towards implementing it digitally.

Despite all of this I think that the book has material that bears on understanding the brain, even in my sense. McCrone provides evidence that is new to me and that reductionists must consider.

Making a brain is not a project that you can run thru today’s industrial software assembly line. Making a brain may well require somewhat more hardware than we currently assemble, but not inconceivably more. I suspect that current hardware directions will suffice, but perhaps not. I see no evidence that clocked boolean logic is insufficient.


Here are my notions on consciousness influenced more by Ramachandran.