A Universe Of Consciousness
How Matter Becomes Imagination
Edelman, Gerald; Tononi, Giulio (2008-08-01)

Some of the following quotes and reactions thereto sound like I am resisting the authors. Sometimes I am, but sometimes I may merely be anticipating them, anxious to get to the point or describing the situation from another perspective.

At one third thru the book I must report that I do not trust many of the authors’ conclusions, but I do find useful ideas and proposals. The authors seem to have some notion of how computers and their programs work that is alien to me, a computer professional.

I think that it is perfectly permissible to ‘explain away’ consciousness except for one pragmatic consideration: we do need to explain our subjective impressions of how parts of our subjective worlds work, just as we need to explain our subjective impressions of how parts of the physical world work. I think that there is a simple answer here. The human is asked to express the answer in a natural language, and indeed learns of the task via a natural language. I suspect that a person could acquire reflexes to unconsciously push a button upon seeing a light. Just try asking the photodiode to do something else instead. You programmed the person thru language. How did you program the photodiode?

L 615: The authors discuss what I would call the bandwidth of consciousness. I think rather what they describe is more like the bandwidth of qualia. Perhaps they identify the two. I use ‘consciousness’ as closely connected to qualia, but distinct.

I think that the authors confuse what is on call by (available to) consciousness, with what is consciously transpiring.

As of the beginning of Part Two I think that we do not seriously disagree except on terminology. I use consciousness in a more restricted way, excluding qualia and memory, both of which are real and relevant to consciousness.

That’s news to me and quite surprising. I had assumed that only synapses were polar. I think that nature missed a trick. Circuit designers would feel limited by this restriction.

In support of the notion that the brain is not like a computer:

The signals from the ‘eyes’ of a self-driving car are delivered as data to a normal computer, which thereby drives pretty well. I believe that there are large parts of the brain, probably a substantial majority, with no computer hardware counterparts, but that the rest of the brain supports profitable analogies with a computer. In particular, a CPU pays ‘attention’ to some situation much as consciousness pays attention to a continuing set of stimuli. Neither is distracted without consequence.

Today’s computers have a hard time with salience, and programs have a difficult time estimating it. Nature has some tricks that we need to learn. I think that they are much older than consciousness. Consciousness evolved in the context of knowing what was salient. New hardware may not be necessary, yet it may be cost effective. Today, hardware interrupts are a partial solution.
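A minimal sketch of the interrupt analogy, in Python rather than real hardware: routine work proceeds until a handler preempts it for a ‘salient’ event. The event source here (SIGALRM, Unix-only) is my own stand-in, not anything from the book.

```python
import signal
import time

# A toy 'salience interrupt': the handler preempts the routine loop.
def on_salient_event(signum, frame):
    print("salient event! attention diverted")

signal.signal(signal.SIGALRM, on_salient_event)  # register the handler
signal.alarm(2)  # schedule a 'salient' event two seconds from now

for step in range(5):
    print(f"routine processing, step {step}")
    time.sleep(1)  # the handler preempts this; the loop then resumes
```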

‘Reentry’ seems to refer to an overall circular stream of information thru the brain. I agree that there are no instructions as detailed as those in current computers, but there are frames, drawn from experience and perhaps from our DNA, that serve somewhat the same rôle. Frames depict patterns which the circulating data may or may not fit; upon fitting, the frames suggest actions. (The computer language Prolog is along these lines; a small sketch follows below.) This pattern-fitting process is unconscious but the results are usually accessible. We may become aware of competition between frames.

Patterns described by these frames are processed by general frame hardware which can deal with very few frames at a time. Competition for this hardware limits us to the serial attention that is a hallmark of consciousness. Older, more important patterns got their own dedicated hardware, either specified in our DNA or allocated on demand. Such special hardware runs mainly outside our consciousness, and runs concurrently, just like dedicated computer hardware.

Ambiguity is in the eye of the beholder. Try “systolic array”. OK, that is not quite enough, but such hardware is often configured in loops for special problems. In current hardware these designs are highly specialized, whereas the brain is wildly flexible. We have much to learn from the brain. I like the string quartet metaphor.
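To make the frame idea concrete, here is a minimal Python sketch. The names (Frame, propose) and the sample frames are my own illustrations, not the authors’ terminology: frames are predicates over the circulating data, every frame that fits proposes an action, and competition falls out when more than one fits.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Frame:
    name: str
    matches: Callable[[dict], bool]  # does the circulating data fit?
    action: str                      # what the frame suggests on a fit

FRAMES = [
    Frame("threat", lambda d: d.get("looming", False), "duck"),
    Frame("food",   lambda d: d.get("smell") == "bread", "approach"),
]

def propose(data: dict) -> list:
    """Unconscious pattern fitting: every frame is tried; only the
    suggestions of the frames that fit become accessible."""
    return [f.action for f in FRAMES if f.matches(data)]

# Two frames fit at once: competition between suggested actions.
print(propose({"looming": True, "smell": "bread"}))  # ['duck', 'approach']
```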

That there is a circular flow is news to me and suggests that nature needed several faculties to achieve the purposes of consciousness.

These ideas should be contrasted to reflexes which may have been an evolutionary precursor.

At this point in the book I do not know whether the authors expect the reader to have memorized the names of the parts of the brain and what they do. There has been much such specialized information, but it does not stick in my head. The few images are small and unclear.

The authors make good points in this section, points that are obvious in themselves but whose application to consciousness theory is not. We consciously build reflexes as we learn tasks, whereupon they become unconscious. I would have invoked memory here, where the sequence of acts was coded somehow. The two scenarios should be distinguishable experimentally. As I play a familiar piece of music I am unable to carry out other activities that require keeping rhythm.

So far the authors have been speaking of what I would relegate to attention, which is part of consciousness.

It seems to me that perception without transfer to memory might just as well be called null and void. I am concerned that there is no talk about forming even short-term memories. The authors seem to treat memory as an epiphenomenon. This measurement is fraught with all sorts of latency illusions. One such problem is latency thru the visual system. Since we evolved to compensate for these latencies, we are unlikely to be able to report such events accurately.

“PART THREE” promises to follow the evolutionary epistemology path. We shall see. I am disappointed that memory has been mentioned so little so far.

Excellent: a notion, well expressed, which I have been pushing for several years. I think that forming memories is a large part of the transformation of the information in our heads into ‘knowledge’ about which we can think and speak. There are some emotions that I can sort of remember, but mainly I remember much better having had those emotions, which is a stage removed from remembering the emotions themselves. It is not variation in brain structure between individuals that is evolutionarily important; it is the genetic variation, which accounts for just a part of the individual variation. Where do the authors get the idea that computer codes are fixed? It is true that the variation between two computers from the same assembly line is extraordinarily small, but that is a poor argument for avoiding comparisons. The content of a computer’s memory is highly varied.

On later re-reading I notice that the quotation can be parsed to claim that the brain is not like a computer with fixed code but might be like a computer with variable code. I doubt that this latter was the intended meaning.

The whole section at the beginning of chapter seven (titled “Selectionism”) seems devoted to warning against using anything you know about computers to reason about the brain. Several of the claims are clearly incompatible with the fact that computers drive cars today. The first computers we built were patterned after our conscious rational mind, largely because those patterns were subjectively available to us and the problems that we understood how to get the computers to solve involved long deterministic sequences of instructions. Driving a car is not in that category, yet it was not necessary to change the computer hardware, merely the programs we provided. I think that small changes to the hardware of the computer may be warranted when self-driving cars become more common. The new hardware will merely do what conventional computers do, but more efficiently. Brain science may be germane to such improvements.

I don’t know whether the authors are referring to brain variation due to variations in DNA, in development, or in experience. I adhere to the central dogma that only the first leads to speciation. The second and third are very useful to a species with language in the support of memes that spread thru the population. Finally, at L 1440, the authors begin to get specific. They speak of “natural selection” for the DNA sort, and “somatic selection” for the last category of experience, or perhaps even a fourth category which plays out during the solution to some problem.

I accuse the authors of excluding from “reentry” any signal back-flow that is understood or invented by people. When nature invents an analog of the simple steam engine governor (which it certainly has), is it then no longer reentrant? When we learn to explain the ‘meaning’ of a back signal, is it no longer reentrant? (Driving cars, playing chess, proving theorems.) This reminds me of the rhetoric about AI which deems goals, once considered part of AI, to be no longer AI when achieved. It feels like an attitude of worshiping the incomprehensible; it is demoted when comprehended. I will grant only that we may be able to make useful theories about back signals without providing semantics for such signals. We have understood enough such natural signals, and invented many of our own, to often gloss over the details.
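To make the governor point concrete, here is a minimal Python sketch of such a back signal: a proportional feedback loop with illustrative constants of my own choosing. The measured speed flows back to adjust the throttle, and the loop settles near its target with no ‘semantics’ attached to the signal.

```python
# A governor-style back signal as proportional feedback.
# TARGET, GAIN, and the plant dynamics are all illustrative.

TARGET = 100.0   # desired speed
GAIN = 0.1       # how strongly the back signal corrects the throttle

speed, throttle = 0.0, 1.0
for tick in range(200):
    speed += throttle * 5.0 - speed * 0.1         # crude plant dynamics
    throttle += GAIN * (TARGET - speed) / TARGET  # the reentrant back signal

print(round(speed, 1))  # settles near TARGET
```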

William Calvin reports some notions of evolution within the brain on time scales of solving a problem. I am sceptical of such notions.

The figure at L 1571 has a legend describing a computer solving a ‘value problem’, but not with a “conventional computer program”.

“CHAPTER EIGHT” The authors keep using evolutionary terms such as “selectionism” even as they emphasize that they are invoking evolution at two different levels. Some occurrences of these terms fit only one of the two invocations, and I am often confused about which sort is being used.

Have they never heard of Siri or computer face recognition?

Figure 8.1: It seems that “map” means a bundle of one-directional nerves. This is a peculiar usage, but not without connection to my usage.

I am just about to give up on this book. If the authors had spent some time reviewing how computer science addresses these problems, and also Kanerva’s contributions, I think they would have written many fewer passages to the effect that the brain is not like a computer. Have they understood the semantics of data structures with pointers? It sounds as if they have not been near a computer since about 1960, and then not at the forefront.


I think the authors have never heard of Bloom filters; a small sketch follows below.
It is as if they never heard of self-driving cars.
They seem ignorant of the difference between code and data.
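
For the record, a minimal Bloom filter sketch in Python, with illustrative parameters of my own: membership queries over hashed bits, no pointers and no stored items, with false positives possible but false negatives impossible.

```python
import hashlib

M, K = 1024, 3            # bit-array size and number of hash functions
bits = bytearray(M // 8)  # all bits start cleared

def _positions(item: str):
    # K pseudo-independent bit positions derived from one hash family.
    for i in range(K):
        h = hashlib.sha256(f"{i}:{item}".encode()).digest()
        yield int.from_bytes(h[:4], "big") % M

def add(item: str):
    for p in _positions(item):
        bits[p // 8] |= 1 << (p % 8)

def maybe_contains(item: str) -> bool:
    # False means definitely absent; True means probably present.
    return all(bits[p // 8] & (1 << (p % 8)) for p in _positions(item))

add("governor")
print(maybe_contains("governor"), maybe_contains("reentry"))  # True False
```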