A Universe Of Consciousness
How Matter Becomes Imagination
Edelman, Gerald; Tononi, Giulio (2008-08-01). Basic Books, Kindle Edition.
Some of the following quotes and reactions thereto sound like I am resisting the authors.
Sometimes I am but sometimes I may be merely anticipating them—anxious to get to the point or describing the situation from another perspective.
One third of the way thru the book I must report that I do not trust many of the authors’ conclusions, but I do find useful ideas and proposals.
The authors seem to have some notion of how computers and their programs work that is alien to me, a computer professional.
[L 189] Other materialistic positions insist that although consciousness is generated by physical events in the brain, it is not reduced to them but, rather, emerges from them, just as the properties of water emerge from the chemical combination of two hydrogens and one oxygen but are not directly reducible to the properties of hydrogen or oxygen alone.
I think that it is perfectly permissible to ‘explain away’ consciousness except for one pragmatic consideration; we do need to explain our subjective impressions of how parts of our subjective worlds work, just as we need to explain our subjective impressions of how parts of the physical world work.
[L 403] Why should the simple differentiation between light and dark performed by the human being be associated with and, indeed, require conscious experience, while that performed by the photodiode presumably does not?
I think that there is a simple answer here.
The human is asked to express the answer in a natural language—indeed to learn of the task via a natural language.
I suspect that a person could acquire reflexes to unconsciously push a button upon seeing light.
Just try asking the photodiode to do something else instead.
You programmed the person thru language.
How did you program the photodiode?
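To make the contrast concrete, here is a minimal Python sketch of all the ‘program’ a photodiode amounts to; the threshold value is made up for illustration:

```python
# A photodiode "differentiates light from dark" by one fixed rule,
# wired in at build time. There is no channel for asking the device
# to do anything else.

LIGHT_THRESHOLD = 0.5  # illustrative value, not from the book

def photodiode(intensity: float) -> bool:
    """Return True for 'light', False for 'dark'."""
    return intensity > LIGHT_THRESHOLD

# A person, by contrast, is reprogrammed by a sentence: "push the
# button when the light comes on" installs a new task thru language,
# with no change to the underlying hardware.
```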
[L 615] The authors discuss what I would call the bandwidth of consciousness.
I think rather what they describe is more like the bandwidth of qualia.
Perhaps they identify the two.
I use ‘consciousness’ as closely connected to qualia, but distinct.
[L 647] The enormous variety of discriminable states available to a conscious human being is clearly many orders of magnitude larger than those available to anything we have built.
I think that the authors confuse what is on call by (available to) consciousness with what is consciously transpiring.
As of the beginning of Part Two I think that we do not seriously disagree except on terminology.
I use consciousness in a more restricted way, excluding qualia and memory, both of which are real and relevant to consciousness.
[L 749] Neurons come in two flavors, excitatory and inhibitory, and at the microscopic level, their synapses have different and characteristic structures.
That’s news to me and quite surprising.
I had assumed that only synapses were polar.
I think that nature missed a trick.
Circuit designers would feel limited by this restriction.
In support of the notion that the brain is not like a computer:
[L 878] First, the world certainly is not presented to the brain like a piece of computer tape containing an unambiguous series of signals.
Nonetheless, the brain enables an animal to sense the environment, categorize patterns out of a multiplicity of variable signals, and initiate movement.
The signals from the ‘eyes’ of a self-driving car are delivered as data to a normal computer, which thereby drives pretty well.
I believe that there are large parts of the brain, probably a substantial majority, with no computer hardware counterparts, but that the rest of the brain supports profitable analogies with a computer.
In particular, a CPU pays ‘attention’ to some situation much as consciousness pays attention to a continuing set of stimuli.
Neither is distracted without consequence.
Today’s computers have a hard time with salience; programs have a difficult time estimating it.
Nature has some tricks that we need to learn.
I think that they are much older than consciousness.
Consciousness evolved in the context of knowing what was salient.
New hardware may not be necessary, yet it may be cost-effective.
Today, hardware interrupts are a partial solution.
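As a crude illustration of an interrupt standing in for salience, here is a sketch in Python; a Unix alarm signal plays the part of the hardware interrupt line, and the details are mine, not the authors’:

```python
import signal
import time

# An interrupt is a crude salience mechanism: the processor attends to
# its current task until an external event forces itself upon it.
# A Unix alarm signal stands in for the interrupt line (POSIX only).

def on_interrupt(signum, frame):
    # The "salient" event preempts whatever the main loop was doing.
    print("interrupt: something salient happened")

signal.signal(signal.SIGALRM, on_interrupt)
signal.alarm(2)  # schedule an "external event" two seconds from now

# The main loop attends to its own work, much as consciousness attends
# to a continuing set of stimuli, until preempted.
for step in range(5):
    print(f"attending to step {step}")
    time.sleep(1)
```

The partial-solution complaint stands: the interrupt announces that something happened, but estimating how salient it is remains the program’s problem.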
[L 900] … reentry allows for a unity of perception and behavior that would otherwise be impossible, given the absence in the brain of a unique, computerlike central processor with detailed instructions or of algorithmic calculations for the coordination of functionally segregated areas.
‘Reentry’ seems to refer to an overall circular stream of information thru the brain.
I agree that there are no instructions detailed as they are in current computers, but there are frames drawn from experience and perhaps from our DNA that serve somewhat the same rôle.
Frames depict patterns which the circulating data may or may not fit.
Upon fitting these frames suggest actions.
(The computer language Prolog is along these lines.)
This pattern fitting process is unconscious but the results are usually accessible.
We may become aware of competition between frames.
Patterns described by these frames are processed by a general frame hardware which can deal with very few frames at a time.
Competition for this hardware limits us to the serial attention that is a hallmark of consciousness.
Older, more important patterns got their own dedicated hardware, either specified in our DNA or allocated on demand.
Such special hardware runs mainly outside our consciousness and runs concurrently, just like dedicated computer hardware.
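Here is a sketch, in Python, of how I imagine the frame machinery; the frames and the budget of the general matcher are invented for illustration:

```python
from typing import Callable

# Each frame pairs a pattern test with a suggested action. Dedicated
# frames always run (unconscious, concurrent in real hardware); the
# general frame hardware can try only a few frames per cycle, which
# is what forces serial attention.

Frame = tuple[Callable[[dict], bool], str]

dedicated_frames: list[Frame] = [
    (lambda d: d.get("object_closing_fast", False), "flinch"),
]

general_frames: list[Frame] = [
    (lambda d: d.get("sound") == "siren", "pull over"),
    (lambda d: d.get("light") == "red", "brake"),
]

def process(data: dict, budget: int = 1) -> list[str]:
    # Dedicated hardware: every frame runs, outside attention.
    actions = [a for test, a in dedicated_frames if test(data)]
    # General hardware: only `budget` frames win the competition
    # this cycle.
    actions += [a for test, a in general_frames[:budget] if test(data)]
    return actions

scene = {"light": "red", "object_closing_fast": True}
print(process(scene, budget=1))  # ['flinch'] -- the red light lost this cycle
print(process(scene, budget=2))  # ['flinch', 'brake']
```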
[L 906] In any event, communication nets are unlike brains, in that they deal with previously coded and, for the most part, unambiguous signals.
Ambiguity is in the eye of the beholder.
[L 909] It is not easy to provide a metaphor that captures all the properties of reentry.
Try “systolic array”.
OK, that is not quite enough, but such hardware is often configured in loops for special problems.
In current hardware these designs are highly specialized whereas the brain is wildly flexible.
We have much to learn from the brain.
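For the flavor of a systolic loop, a minimal Python sketch; real systolic arrays do this in lockstep hardware, and the stages here are arbitrary:

```python
# Data circulates thru a fixed ring of processing elements, each doing
# one small operation and handing the result to its neighbor.

stages = [
    lambda x: x + 1,
    lambda x: x * 2,
    lambda x: x - 3,
]

def run_ring(value: float, laps: int) -> float:
    for _ in range(laps):        # the circular flow
        for stage in stages:     # one trip around the loop
            value = stage(value)
    return value

print(run_ring(1.0, laps=2))  # 1 -> 2 -> 4 -> 1 -> 2 -> 4 -> 1
```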
I like the string quartet metaphor.
That there is a circular flow is news to me and suggests that nature needed several faculties to achieve the purposes of consciousness.
These ideas should be contrasted to reflexes which may have been an evolutionary precursor.
At this point in the book I do not know whether the authors expect the reader to have memorized the names of the parts of the brain and what they do.
The book supplies such specialized information, but it does not stick in my head.
The few images are small and unclear.
[L 1035] A Lesson from Practice: Conscious versus Automatic Performance
The authors make good points in this section that are obvious but with unobvious application to consciousness theory.
We consciously build reflexes as we learn tasks whereupon they become unconscious.
[L 1120] It is as if, at first, an initially distributed and large set of cortical specialists meets to try to address a task.
Soon they reach a consensus about who among them is best qualified to deal with it, and a task force is chosen.
Subsequently, the task force recruits the help of a local, smaller group to perform the task rapidly and flawlessly.
I would have invoked memory here where the sequence of acts was coded somehow.
The two scenarios should be distinguishable experimentally.
As I play a familiar piece of music I am unable to carry out other activities that require keeping rhythm.
So far the authors have been speaking of what I would relegate to attention, which is part of consciousness.
[L 1249] All these results suggest that ongoing reentrant interactions between multiple brain areas are required for a stimulus to be consciously perceived.
It seems to me that perception without transfer to memory might just as well be called null and void.
I am concerned that there is no talk about forming even short term memories.
The authors seem to treat memory as an epiphenomenon.
[L 1256] He found that the onset of the readiness potential invariably preceded such awareness by an average of about 350 milliseconds and by a minimum of about 150 milliseconds.
This measurement is fraught with all sorts of latency illusions.
One such problem is latency thru the visual system.
Since we evolved to compensate for these latencies we are unlikely to be able to report such events accurately.
“PART THREE” promises to follow the evolutionary epistemology path.
We shall see.
I am disappointed that memory has been mentioned so little so far.
[L 1365] In other words, this integrated mental scene is a “remembered present.”
Excellent—a notion, well expressed, which I have been pushing for several years.
I think that forming memories is a large part of the transformation of the information in our head into ‘knowledge’ about which we can think and speak.
There are some emotions that I can sort of remember, but mainly I remember much better having had those emotions, which is a stage removed from remembering the emotions themselves.
[L 1399] As we have discussed, no two brains are alike, and each individual’s brain is continually changing.
Variations extend over all levels of brain organization, from biochemistry to gross morphology, and the strengths of myriad individual synapses are constantly altered by experience.
It is not variation in brain structure between individuals that is evolutionarily important; it is variation in genetics, which accounts for only a part of the individual variation.
[L 1401] The extent of this enormous variability argues strongly against the notion that the brain is organized like a computer with fixed codes and registers.
Where do the authors get the idea that computer codes are fixed?
It is true that the variation between two computers from the same assembly line is extraordinarily small, but that is a poor argument for avoiding comparisons.
The content of the computer’s memory is highly varied.
On later re-reading I notice that the quotation can be parsed to claim that the brain is not like a computer with fixed code but might be like a computer with variable code.
I doubt that this latter was the intended meaning.
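For the record, nothing about a stored-program machine’s ‘codes’ is fixed: code is data in memory, and a program can build and run new code at runtime. A toy Python demonstration (the classifier is my invention):

```python
# A program can construct, compile, and run new code at runtime;
# nothing about this code existed when the machine left the
# assembly line.

source = """
def classify(x):
    return 'light' if x > 0.7 else 'dark'
"""

namespace = {}
exec(source, namespace)            # install freshly built code
print(namespace["classify"](0.9))  # light
```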
The whole section at the beginning of chapter seven (titled “Selectionism”) seems devoted to warning against using anything you know about computers to reason about the brain.
Several of the claims are clearly incompatible with the fact that computers drive cars today.
The first computers we built were patterned after our conscious rational mind, largely because those patterns were subjectively available to us and because the problems that we understood how to get computers to solve involved long deterministic sequences of instructions.
Driving a car is not in that category, yet it was not necessary to change the computer hardware, merely the programs we provided.
I think that small changes to the hardware of the computer may be warranted when self-driving cars become more common.
The new hardware will merely do what conventional computers do, but more efficiently.
Brain science may be germane to such improvements.
[L 1417] Population thinking centers on the idea that variations among individuals of a species provide the basis for natural selection in the struggle for existence that eventually leads to the origin of other species.
I don’t know whether the authors are referring to brain variation due to variations in
- DNA,
- noise during morphogenesis, or
- different life experiences.
I adhere to the central dogma that only the first leads to speciation.
The second and third are very useful to a species with language in the support of memes that spread thru the population.
Finally at L 1440 the authors begin to get specific.
They speak of “natural selection” for the DNA sort, and “somatic selection” for the last category, experience, or perhaps even a fourth category which plays out during the solution to some problem.
[L 1481] It is important to emphasize that reentry is not feedback.
Feedback occurs along a single fixed loop made of reciprocal connections using previous instructionally derived information for control and correction, such as an error signal.
In contrast, reentry occurs in selectional systems across multiple parallel paths where information is not prespecified.
Like feedback, however, reentry can be local (within a map) or global (among maps and whole regions).
I accuse the authors of excluding from “reentry” any signal back-flow that is understood or invented by people.
When nature invents an analog of the simple steam engine governor (which it certainly has), is it then no longer reentrant?
When we learn to explain the ‘meaning’ of a back signal, is it no longer reentrant? (driving cars, playing chess, proving theorems)
This reminds me of the rhetoric about AI, which deems goals once considered part of AI to be no longer AI once achieved.
It feels like an attitude of worshiping the incomprehensible; it is demoted when comprehended.
I will grant only that we may be able to make useful theories about back signals without providing semantics to such signals.
We have understood enough such natural signals, and invented many of our own, to often gloss over the details.
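The governor case is worth pinning down. Here is the pattern in Python: a single fixed loop correcting a process with an error signal, which is exactly what the authors say reentry is not. The setpoint and gain are made up:

```python
# Proportional feedback in the style of a steam engine governor. The
# back signal has one well-understood meaning: too fast or too slow.

setpoint = 100.0   # desired speed (illustrative)
gain = 0.5         # proportional correction factor (illustrative)

speed = 60.0
for _ in range(8):
    error = setpoint - speed   # the feedback signal
    speed += gain * error      # correction along the fixed loop
    print(round(speed, 1))     # 80.0, 90.0, 95.0, ... toward 100
```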
William Calvin reports some notions of evolution within the brain on time scales of solving a problem.
I am sceptical of such notions.
The figure at L 1571 has a legend describing a computer solving a ‘value problem’, but not with a “conventional computer program”.
“CHAPTER EIGHT”
The authors keep using evolutionary terms such as “selectionism” even as they emphasize that they are invoking evolution at two different levels.
Some of the occurrences of these terms fit only one of these two invocations, and I am often confused about which sort is being used.
[L 1620] The problem the brain confronts is that signals from the world do not generally represent a coded input.
Have they never heard of Siri or computer face recognition?
Figure 8.1: It seems that “map” means a bundle of one-directional nerves.
This is a peculiar usage, but not without connection to my usage.
I am just about to give up on this book.
If the authors had spent some time reviewing how computer science has addressed these problems, and also Kanerva’s contributions, I think they would have written many fewer passages claiming that the brain is not like a computer.
Have they understood the semantics of data structures with pointers?
It sounds as if they have not been near a computer since about 1960, and then not at the forefront.
I think the authors have never heard of Bloom filters.
It is as if they never heard of self-driving cars.
They seem ignorant of the difference between code and data.
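For concreteness, here is a minimal Bloom filter in Python; it stores items as overlapping, distributed bit patterns and answers ‘possibly present’ or ‘definitely absent’, nothing like fixed codes and registers. Sizes are illustrative:

```python
import hashlib

SIZE = 256    # bits in the filter (illustrative)
HASHES = 3    # hash positions per item (illustrative)

bits = [False] * SIZE

def positions(item: str):
    # Derive several pseudo-independent bit positions per item.
    for i in range(HASHES):
        digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
        yield int(digest, 16) % SIZE

def add(item: str):
    for p in positions(item):
        bits[p] = True

def maybe_contains(item: str) -> bool:
    # False positives are possible; false negatives are not.
    return all(bits[p] for p in positions(item))

add("red light")
print(maybe_contains("red light"))    # True
print(maybe_contains("green light"))  # almost certainly False
```

Kanerva’s sparse distributed memory takes the same idea much further toward brain-like storage.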