[162] Quote:
[188] Quote:
[439] Regarding unity of experience, or illusions thereof: I think that the simplest explanation of this is merely that, like a conventional computer of the year 2000, there is physically in the head just one ‘executive’ unit. Just because something can be logically polyinstantiated doesn’t mean that it is. Note that we have just one head.
[500] I get the dour impression of metaphor piled on metaphor. Metaphors are useful when they enlighten by suggesting parallel relations between the familiar and unfamiliar. I notice no relations yet.
Philosophers and neuroscientists both warn against the Cartesian theater wherein some homunculus watches a movie of what transpires. I want to suggest why the theater seems real to many of us. I propose that conscious experience flows through short-term memory and that it is only as the flow emerges that we become ‘aware of it’. I put scare quotes here, as is often necessary when a definition is proposed. Such a definition proposes a different layering of entities (what is composed upon what?), and the meanings of the quoted terms are thus necessarily affected.
[573] “The Now Problem” Quote:
I am occasionally reminded that the philosophical conundrums that philosophers like to ascribe to some mysterious brain property apply equally to the design of a computer program meant to solve some practical dynamic problem. I suspect the philosopher would consider the problem beneath contempt once it is recast as a question of how to design such a program. In short, the ‘now problem’ is merely the ‘brain problem’.
[599] Quote:
When the problem is enough like nature, as in simulating nature, we can be parallel just the way nature is.
Metzinger misses one aspect of sequentiality: language is sequential. That may be the most important sequentiality!
[667] Quote:
Do computers imagine the future? There was once a commercial plotting device called the “Calcomp Plotter”. (They are still in business 45 years later!) It mechanically moved a pen over the paper, and the software controlled the precise timing. There were limits on the acceleration that the pen could undergo, and when the pen came to a sharp corner it was necessary to slow down, for largely the same reason that a car must slow down to negotiate a sharp curve. This is one of the few cases I am familiar with where the computer had to plan the future. There was an incentive to go fast and adverse consequences if you went too fast. I did not find a good algorithm.
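A minimal sketch, in Python, of the kind of lookahead such a controller needs (my own illustration, not anything Calcomp did): bound the speed at each corner by how sharply the path turns there, then sweep backward and forward so that the pen never has to change speed faster than its acceleration limit allows. The constants and the corner heuristic are assumptions chosen for illustration.

import math

# Toy lookahead planner for a pen plotter.  The pen's acceleration is
# bounded, so it must slow down for sharp corners, much as a car must
# slow down for a sharp curve.  All constants are illustrative.
A_MAX = 500.0   # maximum pen acceleration, mm/s^2 (assumed)
V_MAX = 100.0   # maximum pen speed, mm/s (assumed)

def corner_speed(p0, p1, p2):
    """Speed limit at corner p1, from how sharply the path turns there."""
    ax, ay = p1[0] - p0[0], p1[1] - p0[1]
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    la, lb = math.hypot(ax, ay), math.hypot(bx, by)
    if la == 0.0 or lb == 0.0:
        return 0.0
    cos_turn = (ax * bx + ay * by) / (la * lb)  # 1 = straight, -1 = reversal
    # Heuristic: full speed on a straight line, a dead stop on a reversal.
    return V_MAX * max(0.0, (1.0 + cos_turn) / 2.0)

def plan_speeds(points):
    """Target speed at each vertex such that acceleration stays within A_MAX."""
    n = len(points)
    v = [V_MAX] * n
    v[0] = v[-1] = 0.0                      # start and finish at rest
    for i in range(1, n - 1):
        v[i] = corner_speed(points[i - 1], points[i], points[i + 1])
    # Backward pass: make sure the pen can decelerate in time for each corner.
    for i in range(n - 2, -1, -1):
        d = math.dist(points[i], points[i + 1])
        v[i] = min(v[i], math.sqrt(v[i + 1] ** 2 + 2.0 * A_MAX * d))
    # Forward pass: make sure the pen can actually reach each planned speed.
    for i in range(1, n):
        d = math.dist(points[i - 1], points[i])
        v[i] = min(v[i], math.sqrt(v[i - 1] ** 2 + 2.0 * A_MAX * d))
    return v

# Right-angle corners force the pen down to half speed under this heuristic.
print(plan_speeds([(0, 0), (50, 0), (50, 50), (100, 50)]))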
[793] Quote:
I think that this chapter may be the first that says something true and relevant.
I think the following should be considered here. When we are perhaps 7 years old we learn about optical illusions. We see a picture of two line segments adorned differently. One segment is obviously longer than the other, until we measure them with something like a ruler. Is it cultural that we accept the ruler’s verdict? I think this is the point where we cease being ‘naïve realists’, and that the shift is innate rather than cultural.
[774] Quote:
[841] Discrimination: I have a friend who can distinguish between pure pitches of 440 Hz and 442 Hz when sounded separately, and similarly for other midrange pitches. He can report midrange frequencies within about 1% without reference to a standard frequency.
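For scale (my own arithmetic, not from the notes): the step from 440 Hz to 442 Hz is a frequency ratio of 442/440 ≈ 1.0045, about 0.45%, or 1200·log2(442/440) ≈ 7.9 cents, which is less than a tenth of a semitone.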
[918] My only attachment to qualia is the observation in blindsight that some subjects react to the visual shape of an object while they are unaware that they see it. I mean by ‘qualia’ only the awareness.
The chapter on evolution is good.
[947] I like Metzinger’s description of Baars’s ideas. One metaphor that captures much of the chapter is that consciousness is a switchboard. Switchboards were invented so that it was unnecessary to directly connect every phone with every other. Switchboards also introduce bottlenecks and a notion of what is going on ‘Now’. Another metaphor is the stage: very few items occupy the stage at once, and they are exhibited for all [parts of the brain] to consider. In this metaphor nature has not yet invested in more than one stage per brain; it is probably expensive.
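The arithmetic behind the switchboard metaphor (my own gloss, not from the book): directly connecting every phone to every other requires n(n-1)/2 lines, while a switchboard needs only n, one per phone; the price is a shared bottleneck through which all of the current traffic must pass.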
The conversation with Wolf Singer is intriguing, but too many undefined terms arise. What is “binding”? All in all I was somewhat disappointed by Singer’s comments. I would have liked a more detailed description of the experiments that he refers to.
[1333] I am pleased at Metzinger’s description of his philosophical education. I was going to make some remarks on ‘German philosophy’ but Metzinger’s are better and certainly better informed.
[1901] Quote:
I am a materialist. I have come out of my biological shell, at least on occasion, looked back, and said “That is all material, including me.” Parts of my brain still deal with nouns referring to personalities, and that is adaptive. Those parts have no interest in the conundrum and might be said to take an immaterialist stance. Other parts are materialistic. Perhaps a student who has learned different subjects in different languages must translate in order to assimilate his knowledge. Metzinger’s stance might lead to such a translation task. I fear that Metzinger’s philosophy is getting in the way of his science.
Metzinger reverts to a materialistic stance in the next paragraph and all is well.
Perhaps Metzinger inserted that ‘dualist conundrum’ to appeal to the dualists among his readers. Perhaps he did it rhetorically, to disabuse them of their dualism.
This is my view on ‘choosing’. That description is meant to cover animals with no semblance of consciousness, as well as some of our own unconscious actions, and ‘premeditated actions’ too.
[1982] Quote:
[2003] The experiments by Wegner and Wheatley show something that is counterintuitive but more or less expected: it is possible to induce the feeling that you decided to act when you did not. This opens the possibility that the choice of focus is sometimes, or usually, subconscious, even when the focus is conscious. ‘Sense of agency’ is the feeling that it was your conscious choice. But how is the sense of agency adaptive? It usually makes you feel good, but that is not good enough. There seems to be some mechanism that minimizes conflict within the head, and this would support a sense of ‘self’. The confabulator is implicated here.
[2034] I now think that Metzinger is a pure materialist and that his subjective and philosophical terminology is designed to bring along the dualists among his readers.
[2054] The ‘phenomenal narrative’ is how we perceive our world, including ourselves.
[2056] I like Metzinger’s explanation of our sense of having made a decision. I think he is right and that this is a piece of the puzzle that it is nice to have in place. It may be like other possible explanations of illusions.
[2065] Quote: “Often the brain is blind to its own workings, as it were.” So is the computer, and so is most AI software, most of the time.
[2162] False Awakenings: The movie “eXistenZ” is a fun exploration of computer games. During the movie there are several ‘false awakenings’. The fidelity of such depictions is of less relevance here than the fact that the movie audience finds them entertaining. The relevance goes well beyond dream theory; it bears on the whole idea of hypothetical world models.
[2478] I like Allan Hobson’s ideas on dreams!
[2687] Of Mirror Neurons Quote: “They are activated when another agent is observed using objects in a purposeful way.”
This surprises me.
It requires a notion of how the observer detects purposefulness.
The ‘vocabulary’ take on mirror neurons is consonant with the notion that they exist to communicate.
[3010] I think Metzinger goes off track starting here.
[3025] If I understand what Metzinger is saying here, then most hard AI systems are conscious. The conditions that he lays out here do not require self-awareness; or perhaps when he says ‘world model’ he means to include the self.
[3039] Quote: “It will believe in itself”. I think this is the first use of “believe”.
[3148] I wonder about the ‘ethics’ of a machine that does good mathematics and enjoys that. Would that satisfy Nozick?
[3148] Quote:
[3160] “Evolution as such is not a process to be glorified: …” I disagree. I think that the ‘joy’ of humans may have no correlate in the machines we build. Evolution equipped us with it for reasons Metzinger describes well, but those reasons do not bear on the design of the machines we build. I have not decided whether we should have the mathematicians that we create enjoy math. Perhaps we can turn that feature on and off.
[3186:3257] I enjoy Metzinger’s conversation with a far-future AI. He goes almost far enough to make my point that the AIs we make will not be like us. A central question, I think, is how much of the AI’s brain function will be accessible to the AI itself. I suspect rather more than ours is to us, but I also expect limits. This alone will make them much different from us.
In all I think that Metzinger’s scenario is as plausible as any other I have heard, which means not very plausible. I hope it turns out as well as he suggests. I imagine that in his world there would be ‘antiquarian’ AIs whose hobby was to talk with humans.
Metzinger seems to wish away Darwinian selection in his postbiotic world. I am skeptical.
[3258] Quote: “The Ego is a tool—one that evolved for controlling and predicting your behavior and understanding the behavior of others.” I think influencing the behavior of others is also a significant adaptive element of the Ego.
[3428] Here begin Metzinger’s worries and warnings. The dangers he describes sound real, but to me not different in kind from the altered states that civilization has already learned to deal with, if imperfectly. I am not convinced that what is coming is worse. If worse comes to worst we will fall back on Darwin’s means of weeding out the dumb; that would be unfortunate.
[3514] Quote: “Some will argue that a system like the human brain, which has been optimized over millions of years, cannot be further optimized without losing a degree of its stability.” I think this is mostly true, but the question is what the optimization was for. As to “Why should we be neurophenomenological Luddites?”, there remains the conundrum: “How do we choose what to want?”
In this area at least I hew to Feynman’s dictum: a theory is no better than its predictions. Ontology is part of theory. I would be surprised to see an explanation of claims of hunger simpler than the naïve one. Perhaps the eliminativists assume that any phenomenon in the head must be tightly localized. Knowing that phenomena in computers are not localized, I am not sympathetic to this limitation.
Ultimately Metzinger, like Ramachandran, tries to explain things as a computer scientist would. His use of philosophical terminology will put many off. I suppose that his efforts will tend to bring neurology and philosophy somewhat closer together.
Metzinger gives relatively few experimental results to support his ideas. He quotes the work of others but I suspect that he relies too much on plausibility. I find many of his ideas plausible.