Thomas Metzinger’s “The Ego Tunnel”

Metzinger takes an entirely subjective perspective. I have no objection since there should eventually be a subjective story, or at least an objective explanation of the subjective experience.

[162] Quote:

I want to add that it is also a small fraction of what actually exists inside.

[188] Quote:

Not all behavior involves the ego. Many animals have behavior but no ego. An air-conditioning AI needs no ego to behave well. Metzinger’s description covers what I recount here.

[439] Regarding unity of experience, or illusions thereof. I think that the simplest explanation of this is merely that, like a conventional computer of the year 2000, the head physically contains just one ‘executive’ unit. Just because something can be logically polyinstantiated doesn’t mean that it is. Note that we have just one head.

[500] I get the dour impression of metaphor piled on metaphor. Metaphors are useful when they enlighten by suggesting parallel relations between the familiar and the unfamiliar. I notice no such relations yet.

Philosophers and neuroscientists both warn against the Cartesian theater, wherein some homunculus watches a movie of what transpires. I want to suggest why the theater seems real to many of us. I propose that conscious experience flows through short-term memory and that it is only as the flow emerges that we become ‘aware of it’. I put scare quotes here, as is often necessary when a definition is proposed. Such a definition proposes a different layering of entities: what is composed upon what? The meanings of the quoted terms are thus necessarily affected.

[573] “The Now Problem” Quote:

I disagree. Scientists aspire to complete stories spanning time just as novelists do. They do not achieve much more success. Scientists sometimes find formulae which relate a few nearby moments of time, rather like our subjective experience of time. It is true that a good formula is oblivious to the current time.

I am occasionally reminded that the philosophical conundrums that philosophers like to ascribe to some mysterious brain property apply equally to the design of a computer program meant to solve some practical dynamic problem. I suspect the philosopher would find the problem beneath contempt once transcribed into questions of how to design such a program. In short, the ‘now problem’ is merely the ‘brain problem’.
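
To make the point concrete, here is a minimal sketch, in Python, of the shape such a program takes; all the names here are my own invention, not anything from the book. Any controller of a dynamic process keeps a privileged current state, its ‘now’, distinct from its record of the past and its predictions of the future.

    import time

    def control_loop(read_sensor, predict, actuate, steps=100, horizon=5, dt=0.1):
        """Generic control skeleton: a 'now', a past, and a predicted future."""
        history = []                                   # the program's past
        for _ in range(steps):
            now = read_sensor()                        # the program's 'now'
            future = [predict(now, k * dt) for k in range(1, horizon + 1)]
            actuate(now, future)                       # act in the present
            history.append(now)                        # the 'now' becomes past
            time.sleep(dt)
        return history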

[599] Quote:

I think there is some terminological nonsense here, but also some truth. I think also that the organization he speaks of here is the organization of how computers work as they run software written in the last 60 years. I speak of what programming theorists call imperative programming: do this, and when you are done do that… Probably we program our computers to work that way because we work that way too; it is the way we understand to get things done; it is what we observe ourselves consciously doing! With multicore computers arriving, there is great consternation in the attempt to get many things going at once. The problems that are solved by sequentiality, which Metzinger refers to, still need to be solved, and our programs have seldom transcended how our brains work. This bodes poorly for the computer industry: we can’t rely on personal experience, and we can’t even be sure that there is a solution.

When the problem is enough like nature, as in simulating nature, we can be parallel just the way nature is.
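
A toy contrast in Python may make this concrete; the update rule and the names are mine, purely illustrative. The first function is imperative and sequential, one thing after another; the second farms the same work out to several processes at once, the way a simulation of nature can.

    from concurrent.futures import ProcessPoolExecutor

    def step(cell):
        return cell * 0.99 + 0.01          # an arbitrary local update rule

    def simulate_sequential(cells, steps):
        # do this, and when you are done do that ...
        for _ in range(steps):
            cells = [step(c) for c in cells]
        return cells

    def simulate_parallel(cells, steps):
        # every cell updated 'at once', as nature does it
        with ProcessPoolExecutor() as pool:
            for _ in range(steps):
                cells = list(pool.map(step, cells))
        return cells

    if __name__ == "__main__":
        print(simulate_sequential([1.0] * 8, 10))
        print(simulate_parallel([1.0] * 8, 10))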

Metzinger misses one aspect of sequentiality: language is sequential. That may be the most important sequentiality!

[667] Quote:

I claim that computers are another example, as indeed are many earlier ‘clockwork’ mechanisms designed to step through contingent phases to some end. You might say that the Jacquard loom had a Now, toward the end of producing a product from which the artifact of Now had been removed. That loom was contingent unless you included the cards.

Do computers imagine the future? There was once a commercial plotting device called the “Calcomp Plotter”. (They are still in business 45 years later!) It mechanically moved a pen over the paper, and the software controlled the precise timing. There were limits on the acceleration that the pen could undergo, and when the pen came to a sharp corner it was necessary to slow down, for largely the same reason that a car must slow down to negotiate a sharp curve. This is one of the few cases I am familiar with where the computer had to plan the future. There was an incentive to go fast and adverse consequences if you went too fast. I did not find a good algorithm.
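
Here is a hedged sketch, in Python, of the kind of lookahead that problem demands; it is my reconstruction of the general technique, not whatever Calcomp’s software actually did. Speeds are capped at each corner according to how sharp it is, and a backward pass guarantees the pen can always brake in time.

    import math

    A_MAX = 50.0       # assumed acceleration limit (units/s^2)
    V_MAX = 100.0      # assumed top speed (units/s)

    def corner_speed(p_prev, p, p_next):
        """Cap the speed through the corner at p; sharper turns go slower."""
        ax, ay = p[0] - p_prev[0], p[1] - p_prev[1]
        bx, by = p_next[0] - p[0], p_next[1] - p[1]
        na, nb = math.hypot(ax, ay), math.hypot(bx, by)
        if na == 0 or nb == 0:
            return 0.0
        cos_turn = (ax * bx + ay * by) / (na * nb)
        angle = math.acos(max(-1.0, min(1.0, cos_turn)))
        return V_MAX * max(0.0, 1.0 - angle / math.pi)   # a crude heuristic

    def plan_speeds(points):
        """Plan the future: each segment's speed allows braking for what lies ahead."""
        n = len(points)
        v = [V_MAX] * n
        v[-1] = 0.0                                      # stop at the end
        for i in range(1, n - 1):
            v[i] = min(v[i], corner_speed(points[i - 1], points[i], points[i + 1]))
        for i in range(n - 2, -1, -1):                   # backward braking pass
            d = math.hypot(points[i + 1][0] - points[i][0],
                           points[i + 1][1] - points[i][1])
            v[i] = min(v[i], math.sqrt(v[i + 1] ** 2 + 2 * A_MAX * d))
        return v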

[793]: Quote:

I think a plainer and easier explanation is that the transparent sort logically had to come first. There was no need of awareness of awareness until there was awareness. Consciousness and qualia are the beginnings of non-transparency and are indeed evolving ‘as fast as they can’, subject, of course, to cost-benefit tradeoffs.

I think that this chapter may be the first that says something true and relevant.

I think the following should be considered here. When we are perhaps 7 years old we learn about optical illusions. We see a picture of two line segments adorned differently (the Müller-Lyer illusion). One segment is obviously longer than the other, until we measure it with something like a ruler. Is it cultural that we accept the ruler’s verdict? I think this is the point where we cease being ‘naïve realists’, and that the shift is innate and not cultural.

[774] Quote:

Claiming that it is an invention seems to imply that earlier it was not transparent. I think that is backwards.

[841] Discrimination: I have a friend who can distinguish between pure pitches of 440 and 442 Hz when sounded separately, and similarly for other midrange pitches. He can report midrange frequencies to within about 1% without reference to a standard frequency.
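
For scale, a quick computation (my arithmetic, not from the book): 440 and 442 Hz are under half a percent apart, so telling them apart is considerably finer than the 1% absolute reporting.

    import math

    ratio = 442 / 440
    print(f"difference: {(ratio - 1) * 100:.2f}%")           # ~0.45%
    print(f"interval: {1200 * math.log2(ratio):.1f} cents")  # ~7.9 cents
    print(f"1% is {1200 * math.log2(1.01):.1f} cents")       # ~17.2 cents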

[918] My only attachment to qualia is the observation that in blindsight some react to the visual shape of an object while they are unaware that they see it. By ‘qualia’ I mean only the awareness.

The chapter on evolution is good.

[947] I like Metzinger’s description of Baars’s ideas. One metaphor that captures much of the chapter is that consciousness is a switchboard. Switchboards were invented so that it was unnecessary to connect every phone directly with every other. Switchboards also introduce bottlenecks, and a notion of what is going on ‘Now’. Another metaphor is the stage. A very few items occupy the stage and are exhibited for all [parts of the brain] to consider. In this metaphor nature has not yet invested in more than one stage per brain; it is probably expensive.
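
A minimal sketch of the stage metaphor in Python, as I read Baars’s global-workspace idea; the class and method names are mine, not the book’s. Many specialists bid for the single stage; the winner’s content is broadcast to all.

    class Module:
        def __init__(self, name, weight):
            self.name, self.weight = name, weight
        def bid(self, inputs):
            # salience of what this specialist has to offer
            return (self.weight * inputs.get(self.name, 0.0), self.name)
        def receive(self, stage_content):
            pass   # each specialist reacts privately to the broadcast

    class Workspace:
        def __init__(self, modules):
            self.modules = modules
            self.stage = None                       # one stage per brain
        def cycle(self, inputs):
            salience, content = max(m.bid(inputs) for m in self.modules)
            self.stage = content                    # competition for the stage
            for m in self.modules:                  # the switchboard: broadcast
                m.receive(self.stage)
            return self.stage

    ws = Workspace([Module("vision", 1.0), Module("hearing", 0.8)])
    print(ws.cycle({"vision": 0.5, "hearing": 0.9}))   # 'hearing' wins the stage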

The conversation with Wolf Singer is intriguing, but too many undefined terms arise. What is “binding”? In all I was somewhat disappointed by Singer’s comments. I would have liked a more detailed description of the experiments that he refers to.

[1333] I am pleased at Metzinger’s description of his philosophical education. I was going to make some remarks on ‘German philosophy’ but Metzinger’s are better and certainly better informed.

[1901] Quote:

This selection is interesting on several counts. I suppose that the extremes to which he refers are Berkeley’s immaterialism, in which all is soul (God’s, when nothing else is available), and materialism, in which all is physical and consciousness is merely an emergent phenomenon. Few find either of these extremes comfortable, but mixing them leads to ‘dualist conundrums’ such as Metzinger’s above.

I am a materialist. I have come out of my biological shell, at least on occasion, looked back, and said “That is all material, including me.” Parts of my brain still deal with nouns referring to personalities, and that is adaptive. Those parts have no interest in the conundrum and might be said to take an immaterialist stance. Other parts are materialistic. Perhaps a student who has learned different subjects in different languages must translate in order to assimilate his knowledge. Metzinger’s stance might lead to such a translation task. I fear that Metzinger’s philosophy is getting in the way of his science.

Metzinger reverts to a materialistic stance in the next paragraph and all is well.

Perhaps Metzinger inserted that ‘dualist conundrum’ to appeal to the dualists among his readers. Perhaps he did it rhetorically, to dissuade them from their dualism.

This is my view on ‘choosing’. That description is meant to cover animals with no semblance of consciousness, as well as some of our own unconscious actions, and ‘premeditated actions’ too.

[1982] Quote:

This is an interesting observation that could surprise only a dualist or immaterialist.

[2003] The experiments by Wegner and Wheatley show something that is counterintuitive but more or less expected: it is possible to induce the feeling that you decided to act when you did not. This opens the possibility that the choice of focus is sometimes, or usually, subconscious, even when the focus is conscious. ‘Sense of agency’ is the feeling that it was your conscious choice. But how is the sense of agency adaptive? It usually makes you feel good, but that is not good enough. There seems to be some mechanism that minimizes conflict within the head, and this would support a sense of ‘self’. The confabulator is implicated here.

[2034] I now think that Metzinger is a pure materialist and that his subjective and philosophical terminology is designed to bring along the dualists among his readers.

[2054] The ‘phenomenal narrative’ is how we perceive our world, including ourselves.

[2056] I like Metzinger’s explanation of our sense of having made a decision. I think he is right and that this is a piece of the puzzle that it is nice to have in place. It may be like other possible explanations of illusions.

[2065] Quote: “Often the brain is blind to its own workings, as it were.” So is the computer, and even most AI software, most of the time.

[2162] False Awakenings: The movie “eXistenZ” is a fun exploration of computer games. During the movie there are several ‘false awakenings’. The fidelity of such depictions is of less relevance here than the fact that the movie audience finds them entertaining. The relevance goes well beyond dream theory; it bears on the whole idea of hypothetical world models.

[2478] I like Allan Hobson’s ideas on dreams!

[2687] Of Mirror Neurons Quote: “They are activated when another agent is observed using objects in a purposeful way.”
This surprises me. It requires a notion of how the observer detects purposefulness. The ‘vocabulary’ take on mirror neurons is consonant with the notion that they exist to communicate.

[3010] I think Metzinger goes off track starting here.

[3025] If I understand what Metzinger is saying here, then most hard AI systems are conscious. The conditions that he lays out do not require self-awareness; or perhaps when he says ‘world model’ he means to include the self.

[3039] Quote: “It will believe in itself”. I think this is the first use of “believe”.

[3148] I wonder about the ‘ethics’ of a machine that does good mathematics and enjoys it. Would that satisfy Nozick?

[3148] Quote:

I don’t recall any arguments that an artificial ego would be anything like our psychological structure. Indeed that would greatly surprise me. We might ultimately achieve that if we tried.

[3160] “Evolution as such is not a process to be glorified: …” I disagree. I think that the ‘joy’ of humans may have no correlate in the machines we build. Evolution equipped us with it for reasons Metzinger describes well. The reason that evolution endowed us with it does not bear on the designs of the machines we build. I have not decided whether we should have the mathematicians that we create enjoy math. Perhaps we can turn that feature on and off.

[3186:3257] I enjoy Metzinger’s conversation with a far-future AI. He goes almost far enough to make my point that the AIs we make will not be like us. A central question, I think, is how much of the AI’s brain function will be accessible to the AI. I suspect rather more than is accessible to us, but I also expect limits. This alone will make them much different from us.

In all, I think Metzinger’s scenario is as plausible as any other I have heard, which means not very plausible. I hope it turns out as well as he suggests. I imagine that in his world there would be ‘antiquarian’ AIs whose hobby is to talk with humans.

Metzinger seems to wish away Darwinian selection in his postbiotic world. I am skeptical.

[3258] Quote: “The Ego is a tool—one that evolved for controlling and predicting your behavior and understanding the behavior of others.” I think influencing the behavior of others is also a significant adaptive element of the Ego.

[3428] Here begins Metzinger’s worries and warnings. The dangers he describes sound real, but to me not different in kind from the altered states that civilization has already learned to deal with, if imperfectly. I am not convinced that what is coming is worse. If worse comes to worst we will fall back on Darwin’s means of weeding out the dumb; that would be unfortunate.

[3514] Quote: “Some will argue that a system like the human brain, which has been optimized over millions of years, cannot be further optimized without losing a degree of its stability.” I think this is mostly true, but the question is what the optimization was for. As to “Why should we be neurophenomenological Luddites”, there is the conundrum: “How do we choose what to want?”


Metzinger uses the term “Eliminative Materialism” several times. This is a good article on Eliminative Materialism. I don’t understand whether it teaches that there is no such state as desiring food; I don’t know what that would mean. I can vaguely imagine a description of the world that posits some peculiar mechanism that implants in our memories a pattern that causes us to say that we were hungry. That seems to me to fall badly afoul of Occam.

In this area at least I hew to Feynman’s dictum: a theory is no better than its predictions. Ontology is part of theory. I would be surprised to see an explanation of claims of hunger simpler than the naïve one. Perhaps the eliminativists assume that any phenomenon in the head must be tightly localized. Knowing that phenomena in computers are not localized, I am not sympathetic to this limitation.


I pause here [2697] to note a problem and a vague proposal of a solution. In natural and mathematical languages we have predicates that take several arguments. In natural languages we have modality words such as ‘would’, ‘should’, and ‘might’, and tense constructions, to refer to actions and facts that are either hypothetical or not current. I think that consciousness holds some of these constant for short periods of time. When we speak, the tense goes on every verb, at least in the Western languages that I know anything about.
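
A speculative sketch in Python of what ‘holding them constant’ might look like; this is entirely my own construction, not Metzinger’s. A context pins tense and modality for a stretch, so the individual predicates need not each carry them.

    from dataclasses import dataclass

    @dataclass
    class Context:
        tense: str       # e.g. 'past', 'present', 'future'
        modality: str    # e.g. 'actual', 'hypothetical', 'obligatory'

    def assert_fact(ctx, predicate, *args):
        # every predicate uttered inherits the pinned tense and modality
        return (ctx.modality, ctx.tense, predicate, args)

    ctx = Context(tense="past", modality="hypothetical")
    print(assert_fact(ctx, "eat", "I", "breakfast"))   # 'I would have eaten breakfast'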

I am surprised that Metzinger does not ask the question: “Need we include self-models in the AIs we build?” I largely agree with Metzinger’s notions of why they arose, but that does not imply that we must or should include such a function. Indeed, the reasons that Metzinger suggests for the adaptivity of the Ego do not apply to AI.

Metzinger writes as a philosopher but seems to address the naïve realist. This seems to me an unusual combination. He usually explains the special meanings that philosophers attach to certain words. He misses a few.

Ultimately Metzinger, like Ramachandran, tries to explain as a computer scientist would. His use of philosophical terminology will put many off. I suppose that his efforts will tend to bring neurology and philosophy together somewhat.

Metzinger gives relatively few experimental results to support his ideas. He quotes the work of others but I suspect that he relies too much on plausibility. I find many of his ideas plausible.