SEPTEMBER 13, 1996



An Attempt to Explore the Hypothesis that the Universe as a Whole might be a Self-Constructing, Coevolving Community of Autonomous Agents that Maximizes the Sustainable Growth in its own Total Effective Dimensionality. Or More Generally, the Universe Follows a Preferred Path Towards Maximum Complexity With Exchange of Mass and Space, Because Maximum Complexity Via Growth Into the Fastest Expanding Adjacent Possible Maximizes Decoherence into Classicity. Maximum Complexity as Attractor Poised Between Universe Expanding and Contracting.


To brazen a leap to cosmology is to be countenanced only with disclaimers. Among them, why not? This is an informal "Investigation." But beyond this, the previous six "lectures" give modestly good grounds to think that a general attractor just might, in fact, govern the coevolution of the molecular and higher order autonomous agents that comprise the biosphere and "econosphere."

Central to the emerging picture is the possibility that the non-equilibrium, non-ergodic behavior of such a coupled system of functional "wholes" will tend to expand the dimensionality of its "work space" as fast as is sustainably possible.

The concepts concerning autonomous agents and functional closure in a space of tasks such that the system is autocatalytic and carries out work cycles are objectively verifiable properties of a physical system. These new concepts underlie the tentative understanding of Agents as loci of propagating work-record that have actually, in the real physical world, built up the vast coordinated diversified complexity of the biosphere and econosphere.

Nothing in the concept of an Agent demands that it be made of organic molecules. Thus, wherever we may find a natural interpretation for Autonomous Agents that coevolve with one another in communities and niches, we may seek to apply the same framework.

Perhaps I may begin with two intuitions:

First, the candidate laws governing the biosphere suggest that the biosphere has, and will, generate some form of high diversity.

But our Universe is also extremely complex. It is not obvious that a Universe governed by fundamental physical laws as currently known should be as complex as ours is. As Lee Smolin argues in his forthcoming book, "The Life of the Cosmos," the chance that the 17 or so constants in the Standard Model of particle physics would be such as to sustain a Universe complex enough to have complex chemistry and stars is on the order of 10 to the -229. Assuming Smolin's arguments are even roughly correct, there seem to be no a priori grounds to suppose that our Universe would be so complex.

Among the attempts to account for the complexity of this Universe are the strong and weak Anthropic principles. The former states that, in some sense, the universe is as it is so that complex life can have come to exist. The more acceptable weak version states that because the Universe is as it is, for example, with the constants of the Standard Model as they are, complex life can have come to exist. Since such life is "necessary" in order for observers to know that they are in a Universe, it follows that the only knowable Universes are those complex enough to have complex life.

Other attempts to account for the complexity of the Universe include Smolin's own tentative candidate: cosmic natural selection. Here baby universes are born in black holes, and inherit slightly modified values of the constants. Cosmic natural selection then acts to select universes that maximize their number of black holes and so maximize the number of offspring universes.

And, of course, the deep hope of many theorists, as Smolin points out, is that some fundamental theory will have no adjustable parameters at all and yield the current universe as its expected outcome.

Lacking such a fundamental theory with no adjustable parameters, any theory will have to account for how such parameters come to have the values that they do in order to explain the complexity of the Universe. Thus, as Smolin again suggests, it is not foolish to ask if some form of "self-organization" in an historically evolving Universe may play a fundamental role.

The brave and perhaps foolish tack I wish to take is to make use of the tentative insights from our candidate laws: If the biosphere and econosphere might be expanding the total dimensionality of their work space as fast as is sustainably possible, might such a candidate "fourth law" govern the universe as a whole? Can we find a sense in which a non-ergodic Universe expands its total dimensionality, or total "work space" in a sustainable way as fast as it can? In short, can we find a sense in which the Universe becomes as complex as it can, as diverse as it can, breaks as many symmetries as it can? Brave, foolish vision. I expand on it below, encouraged in part by the fact that Lee Smolin is exploring similar ideas.

The second beginning intuition also has its basis in Smolin's work. Attempts to quantize general relativity, he says, lead to a recovery of the "spinor networks" first discovered by Penrose. Such networks can be conceived of as occurring at the Planck scale, and forming complex knotted structures. These knotted structures, in turn, can be thought of as comprising space itself.

I will suggest below that such knotted structures are combinatorial objects rather like molecules and symbol strings in grammar models. In principle, such systems can become "collectively autocatalytic," knots acting on knots to create knots in rich coupled cycles not unlike a metabolism.
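To make the combinatorial picture concrete, here is a minimal toy sketch in the spirit of the grammar models: strings stand in for knots, concatenation stands in for knots acting on knots, and a fixed random table (a stand-in for the unknown "laws" of the knot chemistry) decides which strings catalyze which reactions. All names and parameters are hypothetical, chosen only for illustration.

```python
import random

random.seed(0)

P_CATALYSIS = 0.05  # hypothetical chance that a given string catalyzes a given reaction
_table = {}

def catalyzes(c, x, y):
    # Fixed random "law": does string c catalyze the gluing reaction x, y -> x + y?
    key = (c, x, y)
    if key not in _table:
        _table[key] = random.random() < P_CATALYSIS
    return _table[key]

food = {"a", "b", "ab", "ba"}   # sustained "food set" of short strings
actual = set(food)              # the Actual: strings present now

for step in range(5):
    adjacent = set()            # the Adjacent Possible, one reaction step away
    pool = sorted(actual)
    for x in pool:
        for y in pool:
            product = x + y
            if product in actual or len(product) > 6:
                continue        # crude bound on "molecule" size
            if any(catalyzes(c, x, y) for c in pool):
                adjacent.add(product)
    actual |= adjacent
    print(step, len(actual))
```

With a sustained food set, the Actual tends to invade its Adjacent Possible step by step; whether the longer strings form a self-sustaining collectively autocatalytic set is the question at issue.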

If a natural sense can be found, as presumably it can, for energy and work cycles, such knotted structures might, just possibly, be thought of as "Autonomous Agents" coevolving with one another. In such a vision, space and energy density, hence matter, would all be governed not only by familiar concepts, but the Darwinian categories of agency, functional closure, propagation of work-record with modifications, historical contingency.

In fact, spinor networks can include space, and matter in the form of links representing "spinons": electrons, quarks, neutrinos, as well as gauge bosons: photons, gluons, and W bosons. Among the rules of the spinor nets, space can convert to matter, e.g. with pair creation, and matter can convert to space. This suggests the general possibility that the Universe of space and matter coevolves non-ergodically into the adjacent possible such that the growth maximizes the total dimensionality of the Universe - space, plus simple matter, plus combinatorially complex matter - simultaneously.

The connecting concept will be that those pathways into the adjacent possible along which the adjacent possible grows the fastest will simultaneously be the most complex and most readily lead to quantum decoherence, and classicity. If complexity "breeds" classicity, then the Universe may follow a path that maximizes complexity. The dream would be that, with the exchange of space and matter involved in a full theory, maximum complexity would correspond to a Universe poised between expanding and contracting.

The concepts just noted, and discussed below, are being explored with Lee Smolin.

The unfolding of such a Universe, like a biosphere, would be context dependent and build upon an analogue of exaptations: those ways of "crystallizing classicity" that propagate best. There is here a hint of random laws, in Holger Nielsen's hope, or "It from Bit," in John Wheeler's hope.

7.0.2) This seventh lecture is organized as follows:

i. I worry about the concept of entropy, and introduce the idea that the observer, as an autonomous Agent able to have come into, and to be in, existence with the observed, needs to be explored.

ii. I explore the question of whether large quasi-closed thermodynamic systems, such as isolated spiral galaxies, can be chemically supracritical and hence not reach equilibrium for time scales much longer than the lifetime of the Universe. Since "glasses" can take very long times to equilibrate, this is perhaps not such an odd notion. And since life on our planet seems to exhibit this property, and we are members of a spiral galaxy, perhaps this concept is correct.

iii. I sketch Smolin's ideas about general relativity, quantum gravity as spinor nets, maximum variety, and spatial distance based on similarity of the web topology, then describe a toy world which should yield autocatalytic knot systems. Just as species may diversify but, due to cladistic lineages, remain similar at higher taxonomic levels, it may be the case that in such an autocatalytic "ravel" web, diversity grows but shows a historically contingent slowing in the emergence of variety. If so, then the growth of "distance" in space should slow down as the ravel itself grows. This is hoped ultimately to bear on the fact that the expansion of the Universe is slowing at a critical rate, as if the Universe is poised between expansion and ultimate contraction.

iv. I raise the issue of "decoherence" and the emergence of "classicity." Hartle and Gell-Mann seek an account in terms of decohering histories of the Universe. Zurek seeks an account in terms of loss of phase information when a quantum system interacts with a complex environment. I wonder aloud whether two quantum systems that are modestly complex can provide a "complex environment" for one another such that jointly they interact and induce decoherence. Here the core idea is that when complex quantum systems interact, the number of possible outcomes is, in some sense, greater than when simpler quantum systems interact. In addition, the number of possible forerunner situations leading to the interaction is also greater. This parallels the general observation that complex combinatorial entities show an increase in the number of transformations compared to the number of kinds of things as combinatorial complexity increases. If this can be true, then phase information can perhaps be "lost" more readily when complex quantum objects interact than when simple quantum objects interact. If so, then complexity begets classicity, which typically irreversibly "locks-in," and the system propagates from that locked in configuration. Hence complexity begets classicity which locks-in the present complexity and increased complexity is "favored" to decohere and lock-in even more readily for the same "selective" reason. This would create a "selection principle" tending to marshal the initially superposable linear quantum flow toward higher complexity entities.
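The counting claim in the middle of this point - that transformations come to outnumber kinds of things as combinatorial complexity increases - can be checked with a toy calculation over strings standing in for molecules (a sketch of the counting only, not of decoherence itself; the alphabet size and lengths are arbitrary):

```python
s = 2  # alphabet size in the toy combinatorial "chemistry" (hypothetical)
for L in (2, 4, 6, 8):
    kinds = sum(s ** n for n in range(1, L + 1))  # distinct strings of length <= L
    glues = kinds * kinds                         # ordered gluing reactions (x, y) -> x + y
    print(L, kinds, glues, glues // kinds)        # the ratio itself grows with complexity
```

Even counting only one family of transformations (binary gluings), the ratio of transformations to things grows linearly in the number of things, and faster still if more complex reactions are admitted.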

v. I mention Lee Smolin's version of the above vision: Among the quantum histories of the entire Universe that decohere in the Gell-Mann - Hartle sense, those for which the Adjacent Possible explodes the fastest will most readily decohere. Thus, the Universe should tend to "go classical" along trajectories that maximize complexity. Again this hints at a "selection principle" tending to marshal the initially superposable linear quantum flow toward higher complexity entities.

vi. Finally, I wonder whether it might be possible to regard the unfolding Universe as a persistent superposition of all the values of the constants of whatever fundamental model. Just as a linear chemical reaction-diffusion system can amplify a specific fastest growing linear mode, hence pluck a growing pattern, suppose it is possible to picture alternative values of the 17 constants plus relativity as creating different Universes, some of which amplify faster than others. Rather than picturing a Universe with many different "phases," each a pure set of values of the constants, where "we" are in a region that amplified the fastest, it might also be possible to imagine the persistent superposition, but persistent selection for those values of the constants which yielded particles, masses and interactions that allowed the fastest growing, most complex and diversified Universe to emerge. This would require an analogue of functional closure in which slight variations in the values of the constants for interacting particles could progressively "select" better matches among the autocatalytic collectivity of processes, rather like the self-tuning of antibody molecules to one another in anti-idiotypic immune networks. Again, decoherence and classical lock-in of complex systems over simple systems might favor the lock-in of amplifying "modes" of higher rather than lesser complexity.
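The reaction-diffusion analogy can be made concrete with a toy calculation: in a linear system each "mode" grows exponentially at its own rate, and the fastest-growing mode comes to dominate the mixture, "plucking" a pattern regardless of generic initial conditions. The mode names and growth rates below are purely hypothetical.

```python
import math

# Hypothetical growth rates lambda_k for three linear modes, dx_k/dt = lambda_k * x_k.
growth_rates = {"mode_A": 0.5, "mode_B": 1.2, "mode_C": 0.9}
amplitudes = {m: 1e-3 for m in growth_rates}   # small, equal initial seeds

t = 20.0
amplified = {m: a * math.exp(growth_rates[m] * t) for m, a in amplitudes.items()}
total = sum(amplified.values())
shares = {m: amplified[m] / total for m in amplified}
print(shares)   # the fastest-growing mode dominates the mixture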

And perhaps the arrow of time is nothing but the autocatalytic expansion of the ravel, where the poised supracritical behavior breaks the fundamental time or CPT symmetry.


7.1.1) Consider a gas at normal temperature and pressure in a litre box. We construct the familiar 6N dimensional phase space with bounded values for positions and bounded values for momenta. In this compact phase space we define microstates and macrostates. The latter correspond to our chosen coarse graining of the microstates.

i. Note that here we know ahead of time the entire 6N dimensional phase space.

ii. Based on the ergodic hypothesis, we carry out the standard statistical mechanics procedures.

iii. Note, as did Dan Stein, that the system explores its phase space such that, vastly long before the Poincaré recurrence time of the system, it has "sampled" typical regions all over phase space.

This rapid sampling reflects the physical fact that any gas molecule can move from one side of the box to the other side rapidly, and can alter its momentum between extreme values rapidly.

And, as noted above, the fluctuations away from equilibrium soon dissipate and have no long term macroscopic consequences.

iv. Now consider the supracritical expansion of the non-equilibrium biosphere into the Adjacent Possible at the level of complex molecules. -- I return below to suggest that a spiral galaxy such as ours might, nevertheless, be considered a quasi-closed thermodynamic system, and our biosphere occurs in such a galaxy.--

a. We have already suggested, based on grammar models of organic chemistry, that the supracritical explosion into the Adjacent Possible can suffer the Halting Problem and perhaps decidability problems.

1. Explicitly, given the organic molecules on the earth, a specification of an arbitrarily complex organic molecule, X, and full knowledge of quantum chemistry, it may be impossible to decide whether X can be derived from the extant Actual molecules by legitimate organic reactions.

2. But if the supracritical explosion of molecular diversity can suffer the halting and decidability problems, then we cannot state ahead of time what the "accessible" phase space will be. We have to wait and see.

3. If we cannot define the accessible phase space ahead of time, then it is not clear how to define Entropy of the system ahead of time.

--For example, we cannot assign a probability measure for the occurrence of an event whose occurrence suffers the Halting problem.--

4. The difficulty defining entropy in part reexpresses the fact that the Universe is kinetically trapped in an historically contingent subspace, the Actual, of the Possible.

--That is, suppose that one could actually state a 6N dimensional phase space for the entire Universe. Nevertheless, the Universe is non-ergodic within that phase space over many times the current lifetime of the Universe. Thus, we can only talk about the "actual" and the "adjacent possible" as seen along non-ergodic trajectories. The unfolding adjacent possible is just the "accessible" phase space.--

* But the Adjacent Possible seems fully describable - no halting or undecidability problems. Hence one can "normalize" regular probability distributions over the Adjacent Possible.

7.1.2) Need for a new concept based on Agency:

i. The entropy concept - from Carnot to Boltzmann to Shannon - does not distinguish Autonomous Agents, propagating Work-cum-Record in a proliferating, diversifying ecosystem and generating ongoing macroscopic correlations, from ANY OTHER arbitrary odd arrangement of matter.

ii. There is only "configuration" in Entropy, only syntax, only the probability or improbability of the arrangement of atoms in position and momentum spaces. Entropy offers no account of semantics, organization, the propagation of Work-Cum-Record, or the generation thereby of correlation.

iii. We need a new concept uniting matter-energy and information bearing on the creating and propagation of correlations in systems that advance from the Actual to the Adjacent Possible.

Whatever the proper formulation of that concept, it is exemplified by the ecosystems of the Biosphere, propagating Work-Cum-Record in the Autonomous Agents coevolving with one another, playing natural games with one another, increasing in molecular species, organismic species, and niche diversity, creating, generating, and propagating Organized hierarchical complexity.

A natural framework could be based on endogenizing the observer within an expanded thermodynamics. That is, rather than imposing a "coarse graining" exogenously, stated in terms of human observers speaking in natural human language and typically based on our mathematical concepts, we might attempt to base a theory on Coevolving Communities of Autonomous Agents mutually making a world.

Consider a microbial community and let the bacteria and other protists be our autonomous agents mutually making a world with one another and the abiotic environment.

Consider framing the issue of the information exchanged between these agents. Based on our presumptive general attractor, the agents will each be in the dynamical ordered regime, and the community will be at the dynamical phase transition between order and chaos, such that the nearby dynamical trajectories within the community will be nearly parallel, while trajectories within agents will be slightly convergent. Agents will thereby maximize the discriminations they can make and act upon without trembling hands, in order to "play" the complex natural games with one another requisite for the world that they are, thereby, mutually constructing. This categorization by each agent constitutes its endogenous coarse graining of its world. The variables exchanged between agents are what each agent "learns" about its neighbors.

In order for each agent to act reliably, it must "gate" the rate of arrival of exogenous inputs such that its own internal dynamics is not too severely perturbed. In particular, according to our candidate law, each agent must avoid being in the chaotic regime.

Then the agents will have to, and do, tune the "channel capacity" to other agents. There will be a critical surface in the community - as a function of each Agent's internal position on the order-chaos axis, the richness of coupling - channel capacity - with each other agent, and the total number of other agents to which each agent is coupled - such that the surface separates the community's ordered and chaotic regimes.

There will be trade-offs along that surface. If each agent lies deeper in the ordered regime, it can afford higher channel capacity. But then each agent will make fewer delicate discriminations, play less complex games, and that agent may be less fit than one that made finer discriminations, but paid for them by tuning down its channel capacity.

Presumably, some optimum configuration exists on the critical surface, attained by natural selection acting on individual agents. Each agent would be locally doing its best. The community, presumably, would be poised in the self-organized coevolutionary sense, and poised in the self-organized advance into the adjacent possible.

At that poised state, maximizing the sustained advance into the adjacent possible, or expansion of the community's "work space," we can ask what the mean channel capacity between the agents is. That optimum channel capacity now is part of the coarse graining each agent makes of its world, and gives the natural information exchange flow between agents under the self-constructed circumstance that the entire community expands its work space as rapidly as it can manage.
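The order-chaos language above can be illustrated with Kauffman-style random Boolean networks: a toy sketch in which each of N nodes is updated by a random Boolean function of K random inputs, and we watch whether a one-bit "trembling hand" perturbation dies out (ordered regime, low K) or spreads (chaotic regime, high K). All parameters are illustrative only.

```python
import random

def make_net(n, k, rng):
    # Each node gets k random input nodes and a random Boolean lookup table.
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    out = []
    for i in range(len(state)):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | state[j]   # pack the input bits into a table index
        out.append(tables[i][idx])
    return out

def spread(n, k, steps=30, seed=1):
    # Hamming distance between a trajectory and its one-bit-perturbed twin.
    rng = random.Random(seed)
    inputs, tables = make_net(n, k, rng)
    a = [rng.randint(0, 1) for _ in range(n)]
    b = a[:]
    b[0] ^= 1                             # the "trembling hand": flip one node
    for _ in range(steps):
        a = step(a, inputs, tables)
        b = step(b, inputs, tables)
    return sum(x != y for x, y in zip(a, b))

print("K=1 damage:", spread(200, 1), " K=5 damage:", spread(200, 5))
```

In the ordered regime nearby trajectories converge and the perturbation is typically damped; in the chaotic regime it typically propagates to a sizable fraction of the network, which is why agents are conjectured to avoid that regime.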

In order to hope to use such an analysis as a basic building block of a physical theory, we presumably would need to be able to extend the concept of Agents, and coevolutionarily constructable communities of Agents from the biosphere to wide ranges of phenomena in the Universe. The most far-reaching image would be that the analysis could be extended to the Universe as a whole, a vision of the universe as a self-constructing ecology of autonomous agents. Below I broach what I hope are at least possible ways of thinking about such an issue.

7.1.3) A conjecture: In the expansion from Actual to Adjacent Possible, for example, in the biosphere, we may typically occupy an ever smaller portion of an ever expanding phase space. Thus, if the "work space" available at the next step is, on average, increasing, then wherever "we are" it is a smaller fraction of the total "dimensionality" along our specific non-ergodic trajectory.

But we coevolving agents mutually know one another, by embodied knowledge in propagating work-cum-record. We agents are correlated because we literally have coconstructed our world together.

Then, if we occupy an ever smaller portion of an ever expanding phase space, and "we know we are in this world" because we are correlated by virtue of expanding from actual into adjacent possible - WE ARE EVER MORE ASTONISHINGLY LOCALIZED TO THIS UNIQUE HISTORY OF THE UNIVERSE. So with respect to us coevolving Agents, entropy seems to go down.

There are two issues here: First, if the "work space" is expanding then to an outside observer the system is ever more astonishingly localized to a unique history of the universe, hence the entropy of the system seems to go down. Second, there are no outside observers. If it is possible to find a framework that endogenizes the observers who supply the coarse graining as agents who coconstruct the universe, then "we know" because we have constructed the universe together.

If "we know" we occupy this unique history of the Universe due to our lives and historical unfolding, the reduction in uncertainty that we are in THIS particular historical unfolding, characterized by the historically contingent frozen and propagating accidents, is enormous. Is this reduction enough to balance the increase in entropy from other processes in the history of the Universe? It is unclear. For here we occupy an ever smaller region of an ever EXPANDING effective phase space along some trajectory in some total 6N phase space of the Universe.

Given the difficulty defining entropy in the ongoing historically contingent expansion into the adjacent possible, and even ignoring the effects of gravitation on the total entropy of a system where entropy, as standardly understood need not increase (cf. Smolin), how do we know the total entropy of the Universe is increasing?

And perhaps the total change in the entropy of the Universe is zero? Perhaps the Total Entropy of the Universe is Constant?

The Universe is supposed to be running down to its "heat death." But the burgeoning upward in hierarchical complexity carried out by coevolving Autonomous Agents, molecular or otherwise, rushing from the Actual over the real frontier to the Adjacent Possible, appears to be the Universe Running Up.


i. The familiar concept of a thermodynamically closed chemical reaction system considers a fixed number of kinds of chemical species and -- in general -- reversible reactions among them.

Simplest example: A <-> B. Macroscopic equilibrium is the pair of concentrations of A and B such that, on average, forward and reverse reactions occur at the same rate.

In a closed system with a fixed and "modest" number of kinds of chemical species, and under detailed balance, the system goes to equilibrium.
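A minimal numerical sketch of this equilibration, with hypothetical rate constants kf and kr for the forward and reverse reactions:

```python
kf, kr = 2.0, 1.0     # hypothetical forward and reverse rate constants for A <-> B
A, B = 1.0, 0.0       # start far from equilibrium, all mass in A
dt = 0.001

for _ in range(20000):
    flux = kf * A - kr * B   # net forward rate
    A -= flux * dt
    B += flux * dt

# At equilibrium the forward flux kf*A equals the reverse flux kr*B,
# so the concentration ratio B/A approaches kf/kr.
print(A, B)
```

Whatever the starting concentrations, the system relaxes to the state where forward and reverse fluxes balance, which is the sense of "equilibrium" used throughout this section.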

ii. Consider a thermodynamically quasi-closed but supracritical chemical system with a very large total mass and a reasonable diversity of types of atoms.

The closest approximation to such nearly closed systems in the Universe, short of the entire universe, may be isolated galaxies or perhaps clusters of galaxies. A spiral galaxy sustains star formation from the giant cold molecular clouds. (cf. Lee Smolin, The Life of the Cosmos, in press, Oxford University Press.) The molecular clouds each have on the order of ten to a hundred million solar masses. The clouds are chemically complex and rich in carbon-based chemistry. In part the clouds are formed and replenished by supernovae. In turn, shock waves from the supernovae probably precipitate star formation within a nearby cloud, and partially heat and disperse the same cloud, leading to propagating star formation triggered successively in adjacent clouds.

iii. For the moment, ignore the effects of gravity in this large quasi closed thermodynamic reaction system.

iv. Consider the total diversity of kinds of molecules that might be formed from the total mass in the clouds in the galaxy. If the number of atoms in such clouds were, say 10 to the 40th, then we can consider all possible bonded structures that are consistent with quantum chemistry and the temperature and pressure in the closed clouds. Given a lower bound on the rate of a chemical reaction - say a femtosecond - then the clouds would not have time to carry out each of the possible reactions once in the current lifetime of the Universe.

v. To use here an example from above: the number of proteins of length 200 is 20 to the 200 = 10 to the 260. If there are 10 to the 80 particles in the known Universe, and if pairwise interactions can happen on a time scale of femtoseconds - far too fast for chemical reactions - then 10 to the 193 such pairwise interactions can have happened since the Big Bang. This is vastly less than the 10 to the 260 types of proteins of length 200. Thus, even limiting discussion to the complexity of all possible kinds of proteins of a fixed length, the Universe as a whole has not had time to create one of each. A fortiori, the quasi-closed molecular clouds in an isolated spiral galaxy have not had time to form each possible protein once.
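The order-of-magnitude arithmetic in this point can be checked directly, working in log10 throughout (the age of the Universe is taken here as roughly 4.3 times 10 to the 17 seconds, an assumption consistent with the figures in the text):

```python
import math

# All quantities below are log10 of the order-of-magnitude figures in the text.
protein_types = 200 * math.log10(20)     # proteins of length 200: 20^200
particles = 80                            # ~10^80 particles in the known Universe
pairs = 2 * particles                     # ~N^2 ordered pairs of particles
age_in_fs = math.log10(4.3e17 / 1e-15)   # assumed age of the Universe, in femtoseconds
interactions = pairs + age_in_fs          # pairwise femtosecond events since the Big Bang

print(round(protein_types), round(interactions))
```

The two rounded exponents reproduce the 10 to the 260 and 10 to the 193 quoted above, and the gap of nearly seventy orders of magnitude is the point: the Universe is profoundly non-ergodic at this level of complexity.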

vi. Consider, at any moment, the types of molecules actually present in the closed clouds as the "Actual." Consider the chemically Adjacent Possible accessible from the Actual in a single reaction step. Since the substrates are present in finite concentration in the Actual, and the products in the Adjacent Possible are at zero concentration in the closed clouds, the equilibrium across each such reaction couple is displaced "to the left," creating a chemical potential toward the products. Thus, reactions will persistently tend to flow into the adjacent possible.

The statistical tendency to flow into the adjacent possible will remain true even if there is only a single copy of each substrate in the clouds, although the kinetics will depend upon whether the reaction is one substrate - one product, or many substrates are required.

Bonds forming molecules have stabilities that depend upon the environment. Thus, cold molecular clouds support a high chemical diversity that would not endure in the interior of stars. Breakdown of complex molecules may limit the rough maximum size of molecules within the system. Nevertheless, the total potential molecular diversity within that bound may still not be explorable within the lifetime of the Universe.

vii. Thus, such a closed system need not reach equilibrium in terms of the diversity of kinds of molecules within it in the lifetime of the Universe. Indeed, the Earth is a member of such a galaxy and, as argued below, has presumably not reached equilibrium and remains chemically non-ergodic. If this is true of the Earth, then it is necessarily true of the galaxy of which it is a part.

viii. Unlike subcritical closed thermodynamic systems with a modest diversity of kinds of molecular species, which can reach equilibrium rapidly, such a supracritical closed thermodynamic system can remain non-ergodic over the lifetime of the Universe. Therefore, single events, such as the formation of one specific organic molecule rather than another, can have large scale cascading consequences for the kinds of molecules which shall form within the system. (In Origins of Order I called such propagating systems "jets.")

These cascading consequences of single events are "frozen accidents," with macroscopic consequences for the entire system.

This is entirely unlike the square-root-N fluctuations in normal subcritical closed equilibrium systems where, due to dissipation of the fluctuation, no macroscopic consequences propagate.

Thus large mass supracritical chemical systems can be non-ergodic, and - ignoring gravity - can remain non-equilibrium over time scales long with respect to the lifetime of the Universe.

ix. Gravity may enter the behavior of such supracritical closed reaction systems in at least two ways:

a. To be supracritical, and to have a time scale to reach equilibrium that is long on the scale of the lifetime of the Universe, given a lower and upper bound on the rates of reactions, there must be an adequate total mass of diverse atoms so that the combinatorial number of possible kinds of molecules is large enough that the time to equilibrium is long. Such a mass is, if isolated, self-gravitating; hence there is a process, and a characteristic time scale, of gravitational collapse, perhaps into stars, which can destroy most of the chemical complexity.

b. To have a lower bound on the rates of reactions, the atoms must be causally connected. Thus, they must not forever move apart in space faster than they can react. Gravitation can provide a means to keep such atoms interacting with one another over time scales long with respect to the history of the Universe. For example, galaxies can have histories which are on nearly the same scale as the Universe as a whole, and will persist for many times the current lifetime of the Universe.

c. But further, the mere condition that a supracritical closed chemical reaction system be non-ergodic on time scales long with respect to the lifetime of the Universe does not select strongly among such conditions. For example, the atoms might move apart at a rate that remains causally connected, but ever more slowly, such that the resulting supracritical system would remain non-ergodic over many lifetimes of the Universe, but generate a very low diversity of molecular species.

d. Conversely, the character of supracritical systems, as noted just below with respect to the biosphere, is that such a system can generate very high molecular diversity over 4 billion years and that it remains non-ergodic. The atoms on this planet are certainly bound together by gravity.

Thus: If our galaxy can be considered a closed system, then perhaps we should seek a maximizing principle - in some sense a closed system of sufficient mass and atomic diversity seeks an arrangement of matter - energy - organization such that the rate of growth of molecular diversity is, in some averaged sense, maximized. This, of course, may be occurring for the biosphere.

x. This raises the interesting question of what the gravitational force must be, what the mass and chemical diversity of giant molecular clouds must be, and what the constants of the Standard Model must be (cf. Smolin again), such that an isolated galaxy with cold giant molecular clouds can persist and remain non-ergodic over time scales long with respect to the lifetime of the Universe, and perhaps maximize diversity.

As Smolin notes, the constants of the Standard Model must be chosen carefully such that stars and chemistry can exist, and we live in an extremely complex Universe.

xi. We might wonder whether the very fact that cold giant molecular clouds can probably remain supracritical and non-ergodic, and perhaps maximize molecular complexity within the clouds or the planets born of their stars over the lifetime of this Universe, expresses something essential about how the choice of the constants in the Standard Model was made.

xii. Finally, I note that it might be fruitful to ask whether a spiral galaxy can be analyzed as a community of coevolving autonomous agents, where each agent involves a combination of complex chemistry forming cold molecular clouds, star formation, fusion reactions, shock waves from supernovae and other processes, whereby autocatalytic systems capable of work cycles and alternative behaviors, hence alternative actions, can coevolve.


I aim here only to mention some beginning analogies to the work of Lee Smolin. The ideas are very tentative.

i. Smolin points out, first of all, that given the Standard Model, and 17 parameters that must be chosen to fix the masses of particles, certain forces, etc., the chance that a Universe would be complex enough to have carbon chemistry and stars is 10 to the -229.

ii. Either some fundamental theory will fix all these parameters, or we must wonder why we find ourselves in a very complex Universe with structure on all scales.

iii. Thus, perhaps a process of self organization plays a role in the structure of the Universe, including picking the constants.

iv. Among possible hypotheses, Smolin advances the concept that black holes are the loci of birth of baby Universes. He imagines that minor "heritable" changes in the constants of the Standard Model arise at each birth. Thus, cosmological natural selection will favor those Universes that maximize the number of black holes they create.

v. Therefore, there is an historical process that has tuned the constants to approximately optimal values.

vi. Smolin notes several further interesting points. General Relativity requires that each point in space be different from any other. The night sky must look different from each point, since each point needs to have a "location," but space is to be considered in General Relativity as characterized by the relations between masses or energy densities. Thus, he argues, General Relativity seems to demand that the Universe be maximally diverse.

There is at least a loose analogy with the concept that the biosphere is growing into the adjacent possible such that its dimensionality is increasing as fast as possible. And there is at least a loose analogy between the idea that observers at any point in the Universe must see a different night sky and the idea that situated autonomous agents mutually know one another in indefinitely context dependent ways because the agents have mutually created the "world" in which they propagate.

7.3.1) Coevolving Spinor-Agents and a construction of space

There is another possibility of analogy. Smolin and colleagues have found a way to quantize general relativity that leads to spinor networks. He is hoping that such networks can constitute a kind of prespace on the Planck scale. And he is considering allowing spinor graph structures of dots and lines, governed by modestly simple rules, to grow and connect. Such networks will, of course, make all kinds of knot topologies. Then one concept is that nearby locations in "space" are created by similarities in the knot structure seen from different dots or edges.

This seems intriguing because knotted graph structures, like molecules, are combinatorial objects. Just as a sufficient diversity of molecules can cross a phase transition and become collectively autocatalytic, one can conceive of knotted structures acting on one another and becoming collectively autocatalytic.

Indeed, knotted structures created out of spinor nets might constitute coevolving autonomous Agents.

7.3.2) A toy example follows:

Consider loops, simple knots such as trefoils, and more complex knots. Allow two loops to join to form a longer loop. Allow a longer loop to cleave into two shorter loops. Then the transformations among the loops begin to look like chemistry.

For example, a + trefoil might unite with a - trefoil to make a longer, unknotted loop. Knot +A might join knot +B to make knot +C, which then joins with knot +D to make a new knot +E. +E might cleave to yield +A and +F and +G. Here +A has acted like an enzyme, abetting the formation of the +F and +G products from the +B and +D substrates.

As with the combinatorial objects of chemistry, so with the combinatorial knots: one expects that as the complexity and diversity of the knots increase, the number of transformations among them will grow in diversity even more rapidly. Thus, once a system of knots starts to ravel itself into complex knots, the Adjacent Possible of knots explodes supracritically.
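To make the combinatorics concrete, here is a minimal Python sketch of the toy knot chemistry above, with loops abstracted to strings over {+, -} (real knot topology is ignored entirely); the string encoding and function names are my own illustration, not anything from Smolin's formalism. It only shows that the count of possible join and cleave transformations outruns the count of species as the maximum loop length grows.

```python
# Toy "knot chemistry": loops are strings over {+, -} standing for
# + and - primitives. Joining concatenates two loops; cleaving splits
# one loop at any internal position. This is an illustrative sketch,
# not a model of actual knot theory.

from itertools import product

def join_products(a, b):
    # two loops join end-to-end into one longer loop (either order)
    return {a + b, b + a}

def cleave_products(s):
    # a loop of length n can cleave at any of its n-1 internal positions
    return {(s[:i], s[i:]) for i in range(1, len(s))}

def count_species_and_reactions(max_len):
    species = [''.join(p) for n in range(1, max_len + 1)
               for p in product('+-', repeat=n)]
    reactions = 0
    for a in species:
        for b in species:
            # joins counted over ordered pairs (each join seen from both
            # sides), which is fine for a rough growth comparison
            if len(a) + len(b) <= max_len:
                reactions += len(join_products(a, b))
        reactions += len(cleave_products(a))
    return len(species), reactions

for L in (2, 3, 4, 5):
    s, r = count_species_and_reactions(L)
    print(L, s, r)
```

Run as-is, it prints, for each maximum length L, the number of species followed by the larger and faster-growing number of transformations, which is the supracritical flavor of the argument.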

Granted conservation of handedness where + and - trefoils and their generalizations annihilate, one can imagine a spontaneous symmetry breaking. Once a +knot system has fluctuated into existence and become supracritical, a runaway explosion of supracritical behavior to a + ravel should occur.

Thus, this expansion could create a ravel knot structure of very high complexity and diversity.

7.3.3) If there is an interpretation involving work cycles, then this ravel might, like an ecosystem, be composed of autonomous agents with functional organization due to "task closure" and with the persistent know-how to generate and propagate organization. Each point in "space" could be different; each point could "know" about other points in a context dependent way.

If the general "4th law" held, then the community of spinor-agents would expand their work-space as fast as is sustainably possible.

But this expansion is nothing but the increasing diversity of the knotted ravel - for it is the very diversity that creates ever new ways to knot and ravel, hence transformations and niches.

There are several interesting features to this image:

i. If the spinor ravel is a community of autonomous agents on the Planck scale, and granted that one can define surfaces and volumes, perhaps as Smolin is trying to do, then one can attempt to define the entropy within any "Agent-volume" as the information that can optimally be transmitted over its surface such that the "work space" of the community expands as fast as is sustainably possible.

This is at least loosely coupled with the concepts, as Smolin noted to me, of the entropy of a black hole being proportional to its surface, and to concerns that one should attempt to define entropy as the information one region of space can have about another region of space - cf. Smolin.

ii. Similarity of local knot structure is supposed to map into "nearby" in space. Smolin has explored various definitions of "variety" in a graph. Typical definitions are based on asking how far from a given node in the graph one must "look" to uniquely distinguish that node from any other node. Here the "context" of a node is the connection structure to other nodes in the graph. The variety of the graph is, roughly, defined as the reciprocal of the maximum distance away from any node that must be considered to uniquely characterize it.
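One crude way to make this notion of variety computable is a Weisfeiler-Leman-style label refinement, in which after r rounds each node's label summarizes its neighborhood out to radius r; the distinguishing radius is the first round at which every node's label is unique, and the variety is its reciprocal. Both the particular definition and the example graph are my own illustration, not Smolin's; 1-WL refinement is only a proxy, since it cannot separate every pair of structurally distinct nodes.

```python
# Variety of a graph, crudely: refine node labels round by round, where
# each new label combines a node's old label with the multiset of its
# neighbors' labels. After r rounds a label reflects structure out to
# radius r. Variety = 1 / (first round at which all labels are unique).

def variety(adj):
    n = len(adj)
    color = {v: 0 for v in adj}            # start indistinguishable
    for r in range(1, n + 1):
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        # compress signatures back to small integer colors
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        color = {v: palette[sig[v]] for v in adj}
        if len(set(color.values())) == n:  # every node uniquely labeled
            return 1.0 / r
    return 0.0                             # some nodes never separated

# A small asymmetric example: the path 0-1-2-3-4-5 with a chord 1-3.
adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
print(variety(adj))
```

For this graph, degrees alone leave three indistinguishable pairs after one round, and the second round separates everything, so the variety comes out as 1/2.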

Now consider the similarity of species in the biosphere. While total diversity has increased dramatically, species are organized into higher taxa: genera, families, orders, classes, phyla, and kingdoms. Why? Because an historical unfolding process of reproducing entities subject to heritable variation naturally gives rise to such a hierarchy. --See, for example, Origins of Order on the Cambrian explosion.

But then the total variety in the biosphere grows ever more slowly. No new phyla have been established since the Cambrian. Evolution creates massive differences early in the historical process, then settles into fiddling with details.

If there is a mapping to ravel-agents, then the total variety in the ravel will grow ever more slowly.

But this would correspond to a slowing of the rate of expansion of space.

iii. Thus, one might hope that such a theory would ultimately yield an account of the rate of expansion of space. Indeed, the real hope is that it might be part of a deeper fundamental theory such that the Universe constructs itself to the precise phase transition between expansion and contraction.

iv. Therefore, as an exercise in imagination, we may ask, in principle, whether there can be a coupling such that space and mass interconvert. In general relativity, the curvature of space is associated with an energy density, hence implicitly with mass.

7.3.4) If increasing local knottedness can map into increasing local mass, as is sometimes done in field theory, then space and mass become mutually creating. That would be nice. One could dream of a theory in which the ravel is both space and matter, co-created by spinor-agent communities such that the entire ravel increases its "work-space" as rapidly as is sustainable.

In fact, in spinor networks, space and matter can actually interconvert. Such networks include graph elements representing space, graph elements representing fermions (spinors) such as electrons, quarks and neutrinos, and other graph elements representing gauge bosons such as photons, W bosons and gluons. Space and matter do interconvert in these theories.

I will suggest more concretely below that one analogy to a classical autonomous agent propagating via closed work tasks and catalytic closure is the emergence of classicity itself: the freezing in of specific classical structures that in turn helps "freeze in" further specific structures out of the quantum behavior of the underlying system. The freezing in of specific oriented structures abetting the further freezing in of yet further specific oriented structures is at least analogous to propagating work tasks and their closure in autonomous agents. If so, the crystallization of classicity and its propagation is like the crystallization and propagation of autonomous agents building an ecosystem.

In such a vision, the Universe chooses "locally" at each moment how to allocate spinor-agent propagating work-cum-record into space (i.e., geometry) and into simple and complex matter, precisely by how fast the adjacent possible grows. The "expansion of the universe" goes into creating new variety - hence geometry plus the matter in its knotted structure. Hopefully, in such a theory, the total Universe propagates to the vicinity of the transition between expanding forever and collapsing, for maximum diversity and complexity - hence the maximum total dimensionality of the Universe - occur along this boundary.

7.3.5) Even if the underlying graph rules were reversible (P symmetric), a supracritical explosion in one direction might break that symmetry.

In the toy world above, if the system breaks symmetry toward + knots and a symmetry in the generation of knots from a vacuum creates an equal number of the simplest + and -knots, then:

i. either there must be a way to "get rid of" the simple - knots,

ii. or the + knots in raveling must construct a ravel so readily as its complexity increases that construction forever outpaces the unraveling due to interactions with the simplest - knots. This would be aided if the simplest - knots could only interact with a few types of the + knots.

(--Just for fun - only one handedness of neutrinos exists; might they be the simplest - knots, while the simplest + knots are all tied up in the ravel? Neutrinos do not interact with matter enough to stop propagation through the earth.

Further, if any simplest + knot, the other-handed neutrino, were knocked free of the ravel, it would presumably be so readily reknotted to the + ravel that it might not be detectable as a separate entity. More generally, if mass is associated with complex + knots, this seems to predict a difference in the lifetimes of "free" + versus "free" - knots of each topology. The + knots should have a shorter free lifetime. Any evidence with respect to, e.g., P symmetry or elsewhere? --) The expansion of the ravel into the Adjacent Possible might then constitute an Arrow of Time. All events would be unfolding behaviors of the ravel. This is closely related to Smolin's concepts.

7.3.6) One would like a theory in which the constants do not need to be specified, but the values emerge from the dynamics itself. It would be nice were there a way to allow all possible values of the constants to specify all possible Universes, and compete, perhaps not via baby Universes, but either as distinct phases of an early universe, or somehow like competing eigenfunction modes in linear analysis where the fastest growing mode wins. Thus, it would be nice to find a way to have the fastest growing Universe, and the values of the constants that enable it, jointly "win." Job and jobholder jointly cocreating, whether econosphere, biosphere, or cosmosphere.

In order to try to even imagine what such a theory could look like, I want first to ask a different question: Is there a relation between complexity and classicity?


Bohm and Hiley have summarized their work reinterpreting quantum mechanics in "The Undivided Universe." Most physicists seem to agree that the interpretation is consistent with quantum mechanics, but hard to use. I will base the tentative suggestions below on their formulation.

7.4.1) I need to summarize their main points and do so below.

i. Active vs. inactive information carried in Quantum Potential.

ii. Wholes, i.e. the quantum potential lives in the configuration space of all the particles in a many body system, hence is not given by predefined relations between the particles. This is seen as an ontological version of Bohr's "unanalyzable whole."

iii. Barrier penetration due to V + Q interacting in complex ways.

iv. Formation of many different "channels," with gradual accumulation of entire quantum potential into one of these, that now carries both the particle and the active information, as the other "empty" channels scatter off other particles in the environment, so that the capacity to reunite ALL the information in all the channels dwindles to zero. At that point, classicity has emerged.

v. This narrowing down of the wave function due to such information loss is the ontological interpretation of the "collapse" of the wave function, though no collapse has actually occurred. It is also an interpretation of measurement; but measurement induces a correlation between particle and apparatus, so it is not a measurement of the particle alone.

vi. The narrowing down of wave packet and active information into the classical is matched by the spreading out of the wave packet within the active channel.

vii. Bohm and Hiley state that the convergence of wave packet in active channel and the divergence within the channel exactly compensate for one another - (shades of every other "edge of chaos" image above in eukaryotic cells and proposed "law," where community of agents is organized such that dynamical flow in the classical community is nearly parallel, and that invasion of the adjacent possible is also nearly SOC parallel.)

viii. Non-locality - seen even in two particle system such as chemical bond.

7.4.2) Note that classicity is also "independence" of the wave functions of the two uncorrelated entities. So if they are now uncorrelated, then scattering of a coherent quantum system off them induces still further decoherence. Classicity is AUTOCATALYTIC. The more classicity, the more readily the interaction of quantum events, e.g. in a measuring situation, gets converted into classical events by "collapse of the wave function" on von Neumann's interpretation.

i. Thus, DeWitt (Bohm and Hiley -BH) in his version of the many world interpretation, posits COMPLEXITY - entities that are sufficiently complex that, in interaction with quantum events, they lead to the choice of which world happens, i.e. the spin is up or it is down in this universe, not a superposition of both.

ii. For DeWitt, this means that complexity - e.g. complex solids, organic molecules, metals, etc. lead to the choice.

iii. (But as B and H note, DeWitt needs a theory for how complexity tunes itself just so, such that apparent classicity emerges at just the right level away from the fully quantum level where interference is seen.)

iv. But to appeal to complexity may mean, as I am about to say, that if the "complexity" of the "entity" with which the quantum interacts is ever larger, then the "collapse of the wave function" - the choice among the superposition of quantum possibilities - becomes EASIER.

v. Another way of saying this is that, in the amplification process after the "measurement interaction" between the quantum and apparatus wave functions, the more ramifications into alternative BH channels occur, the harder it is to reassemble ALL the inactive information in these channels in order to re-achieve coherence and interference. In short, the more "rapidly" the possibilities ramify in the amplification step, the easier it is to "lose" the inactive information "irreversibly."

Now what we need, presumably, might be something like the following:

i. Classicity is autocatalytic.

ii. Those directions in the Adjacent Possible where a foray creates a maximally larger next Adjacent Possible mean that the system, in choosing ONE of these possibilities, is making an ever more refined choice - i.e. one among an ever larger number of adjacent possibilities. Then, at each such step, the branching into directions of ramification, and hence the irreversible loss of inactive information, becomes ever easier.

iii. Then, if one can get the adjacent possible "work space" to expand in the non-ergodic flow in the total phase space --6N classically-- one will have the result that the fastest direction of expansion is always that direction into the adjacent possible which expands the adjacent possible's range of next choices as fast as possible, for it is in this direction that classicity emerges fastest. Thereafter, the classicity would lock in the current state, and the wave function would propagate further, again "crystallizing" via decoherence in the direction of increased complexity.

iv. To do this, one presumably must show that it makes sense to define an expanding adjacent possible from the actual in the Platonic phase space of all possible quantum histories of the universe, then show that complexity breeds classicity autocatalytically.

v. On a small scale, using organic molecules, it might be possible to exhibit that complexity begets classicity. I state the idea first, then show a crude calculation below: Do quantum interactions among more complex rather than less complex organic molecules enhance the probability of decoherence, hence classicity?

Physicists are comfortable saying that a single moderately complex organic molecule can be in a quantum state, rather than be a classical object.

i. Consider, then, two such organic molecules.

ii. Suppose, in general, that the more complex those organic molecules are, the more different types of reactions the two molecules can undergo. (As a concrete example, two peptides of length N can undergo (N-1) squared transpeptidation reactions.)

iii. Let the two organic molecules have wave functions that interact such that in the resulting "entangled" state, all of the reactions the two might undergo are "simultaneously possible." Here, by simultaneously, I mean that the mixed wave function could, on the BH interpretation, be guided by the total quantum potential into any one of the many alternative reaction pathways.

iv. It follows that as the organic molecules each become more complex, the adjacent possible they locally invade becomes larger, with more alternative branches; thus, on the BH interpretation, more inactive information is lost in the other channels, eventually irreversibly.

This follows because on the BH interpretation of QM, particles have definite positions and behave deterministically with respect to the many body classical potential and many body quantum potential - where the latter cannot be given by predefined relations among the particles. The probability per channel then arises because of differences in initial conditions, and it turns out to be the standard QM probability of "measuring that outcome," the squared amplitude of that coefficient of the total wave function. Hence, the total probabilities must sum to 1.0 over all possible outcomes.

* Thus, on average, as the number of other channels increases, the probability of any given channel must decrease. In turn, this implies that as the number of channels into the adjacent possible increases due to the increasing complexity of the molecules, the ease of emergence of classicity increases. In more detail, consider a chemical reaction soup in which founder molecules are present. Imagine that in any region of space-time two, three, or more organic molecules can interact such that their wave functions become entangled (presumably this is less and less likely as the number of interacting molecules increases). Then consider all the entangled wave functions - ALL THE POSSIBLE REACTIONS, PAIRWISE AND HIGHER ORDER, AMONG THE MOLECULES ARE NOW POSSIBLE. So the total diversity of the Actual governs, via such pairwise and higher order interactions, the number of directions into the adjacent possible.
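As a sanity check on the (N-1) squared claim above, here is a short Python enumeration, using a deliberately simplified "cut once in each chain and swap the C-terminal tails" model of transpeptidation; the encoding of peptides as strings of distinct letters is my own toy illustration.

```python
# Count the transpeptidation channels open to two peptides of length N:
# cut each chain at one of its N-1 internal bonds and re-ligate the
# fragments crosswise. With equal channel weighting, the probability of
# any one channel falls roughly as 1/(N-1)**2 as complexity grows.

def transpeptidation_products(a, b):
    """All product pairs from cutting each peptide once and crossing over."""
    prods = set()
    for i in range(1, len(a)):          # cut site in peptide a
        for j in range(1, len(b)):      # cut site in peptide b
            prods.add((a[:i] + b[j:], b[:j] + a[i:]))
    return prods

for n in (3, 4, 6):
    a = ''.join(chr(ord('A') + k) for k in range(n))   # e.g. "ABC"
    b = ''.join(chr(ord('a') + k) for k in range(n))   # e.g. "abc"
    chans = transpeptidation_products(a, b)
    print(n, len(chans), (n - 1) ** 2)  # channel count matches (N-1)^2
```

With all residues distinct, every (cut, cut) pair yields a distinct product pair, so the channel count is exactly (N-1) squared, and the per-channel probability shrinks accordingly.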


(A crude calculation supporting this is below.)

ii. If so, then there will always be a tendency to propagate locally into the Adjacent Possible that increases the total number of emergent pathways!

iii. But in turn, the classicity will 'lock-in' the complex state, and the wave function will propagate from that locked-in state.

iv. Hence there will always be a tendency to flow in the direction of increased local complexity, for that direction decoheres the fastest, thereby autocatalytically yielding a cycle in which complexity generates classicity that begets further complexity.

7.4.5) If this were true, and could be applied to the universe as a whole, we might have a selection principle that tends to pick preferred kinetic directions through the platonic phase space of all possible quantum histories of the universe: The entire universe should flow in that direction that maximally increases "channels" out from entangled events - hence toward some form of maximum complexity. If there is a theory in which spinor-agents coevolve, then of course the hope is that a vast community of such Planck scale agents coconstructs a Darwinian tangled bank that constitutes an unfolding and ever maximally sustainably complex Universe.

7.4.6) A crude calculation re organic molecules supporting the above:

i. Complexity -> classicity as MORE outdegrees.

a. I carried out a crude calculation at the hotel, assuming a fixed number of 1 and 0 particles in a fixed volume under ISOTHERMAL conditions by contact with an isothermal reservoir, and compared what happens with interactions among particles or among molecules with N = 2 and N = 4 particles per molecule. I counted all the possibilities.

b. Consider monomers at particle number P in the box.

c. If all were converted to dimers, 00, 01, 10, 11, then the number of centers of mass of the dimers, in toto, would be P/2. Thus, the rate of pairwise collisions, if isothermal after going from monomers to dimers, will go as the square of concentration, hence 1/4 the rate of collisions of particles, P.

d. The key idea I am trying is that, for 10mers, say, the number of centers of mass will be P/10, hence the rate of pairwise collisions under isothermal conditions will be 1/100 that for particles. Then we want to compare the number of possible outcomes of pairwise collisions of two N = 10 entities against the same for interactions among particles, up to N = 20 particles at a time. But for 20 particles, the rate of encounters of 20 particles at a time in 3-space is [P] raised to the 20th power, which will be MUCH LESS than 1/100 [P] squared. So the 10mers will interact pairwise much, much more often than will 20 particles.

e. Then we want to know the statistically weighted distribution of the expected number of outcomes for the pairwise 10mer interaction, versus the particle interactions. We must sum over all possible outcomes. For example, for the particles, 2,3,4,...20 particles might manage to come together into a single quantum event, (for it is the inactive channels from each single event that lead to classicity) and have certain numbers of possible outcomes. For the two 10mers, allowing all possible rearrangements of the particles in the two 10mers into all possible products, from a single 20mer to 20 monomers, we want to know the statistically weighted distribution of all the outcomes. Then we want to see if the two 10mers have a HIGHER MEAN NUMBER OF OUTCOMES PER UNIT TIME than the 20 "individual" particles when they interact.

f. In all cases, we should be able to consider "scattering" interactions among particles or among 2mers, 4mers, or 10mers via their CENTERS OF MASS as similar in the number of possible outcomes - colliding and bouncing off one another at various angles. For two 10mers, however, there is but one pairwise interaction between the two centers of mass. For 20 particles, the scattering interaction is much richer, and needs to be counted. And I have not done so, and it may screw up the conclusion I want to reach.
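The scaling claims in (c) and (d) can be sketched numerically. The concentration value and function names below are my own arbitrary illustrations; only the ratios matter.

```python
# Collision-rate scaling at fixed total particle number P in a fixed
# volume (isothermal). Packaging the P particles into N-mers leaves a
# concentration of P/N centers of mass, so pairwise collision rates
# scale as (P/N)**2, while a simultaneous k-body encounter among free
# monomers scales as P**k (vastly rarer when P << 1 in these units).

import math

def pairwise_rate(P, N):
    """Rate of two N-mers meeting: proportional to (P/N) squared."""
    return (P / N) ** 2

def k_body_rate(P, k):
    """Rate of k free monomers meeting at once: proportional to P**k."""
    return P ** k

P = 1e-3  # a dilute monomer concentration, arbitrary units
print(pairwise_rate(P, 2) / pairwise_rate(P, 1))   # dimers: 1/4 the rate
print(pairwise_rate(P, 10) / pairwise_rate(P, 1))  # 10-mers: ~1/100
print(k_body_rate(P, 20) / pairwise_rate(P, 10))   # 20-body: vanishingly small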

ii. Consider, then, two monomers, say 1 and 0, or 1 and 1 or 0 and 0. They can scatter, or form dimers, 00, 01, 10, 11. For the pairwise interaction of 0 and 0, or 1 and 1, there is but one outcome, the dimer. For the pairwise interaction of 1 and 0, the two outcomes are 01, and 10. The latter two will occur half the time if 1 and 0 particles are equal in number in the volume.

iii. Now consider the possible outcomes of the interactions among the four possible dimers, 00, 01, 10, 11. Entities up to length 4 can be formed, or down to monomers. Thus 00 and 00 can form 0000 or 000/0 or 0/000 (these two are not distinguishable, but their pathways of formation are different), or by exchange two new 00/00 dimers, or 0/0/0/0. The pair 00 and 01 can form 0001, 0100, 000/1, 0/100, 010/0, 001/0, 10/0/0, 00/0/1, 01/0/0, 0/0/0/1. Similarly 00 and 10 can form as many outcomes, as can 11 and 01 or 11 and 10. 11 and 11 is like 00 and 00. 01 and 10 can form 0110, 1001, 011/0, 0/110, 100/1, 010/1, 101/0, 01/1/0, 0/1/10, 1/0/1/0. So if I've not missed anything, the total number of different outcomes is 57, and one must figure out the weighted expected number of outcomes given the particle numbers of 1 and 0 in the volume.

* For equal numbers of 1 and 0 particles, the weighted number of outcomes is 1/2 x 3 outcomes + 1/2 x about 10 outcomes, or 13/2 = 6.5 outcomes. But the number of centers of mass of dimers is P/2, so the collision rate under isothermal conditions is 1/4 that for pairs of particles, P. So the expected "rate" of generating outcomes when pairs of dimers interact is 6.5/4 times the rate for pairwise particle interactions. So, counting only pairwise interactions, there is a higher rate of QUANTUM EVENTS with more outcome possibilities among the interacting dimers than among the monomers.

We need then to count the 3 particle and 4 particle interactions among the monomers. The rate of the 3 particle interactions goes as [P] cubed, and that of the 4 particle interactions as [P] to the 4th. In toto, these can yield all the outcomes that dimer interactions can, but presumably the rate is far lower due to the cube and 4th order terms governing the rates.
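Re-running the back-of-envelope numbers above in Python: the dimer weights (3 outcomes half the time, about 10 the other half) are the rough figures quoted in the starred paragraph, while the monomer-pair weighting (1/4, 1/4, 1/2 over the pair types 00, 11, 01) is my own reading of item ii, so treat the comparison as illustrative only.

```python
# Expected outcome "rates" for monomer pairs versus dimer pairs, at
# equal numbers of 1s and 0s. Monomer pairs: 0+0 or 1+1 give 1 outcome
# each, 1+0 gives 2 (01 or 10). Dimer pairs: ~3 or ~10 outcomes, each
# half the time, but dimers collide at 1/4 the monomer-pair rate.

monomer_weighted = 0.25 * 1 + 0.25 * 1 + 0.5 * 2   # = 1.5 outcomes/event
dimer_weighted   = 0.5 * 3 + 0.5 * 10              # = 6.5 outcomes/event
dimer_rate       = 0.25                            # (P/2)^2 vs P^2

print(monomer_weighted * 1.0)        # outcome rate, monomer pairs: 1.5
print(dimer_weighted * dimer_rate)   # outcome rate, dimer pairs: 1.625
```

On these rough weights the dimers still edge out the monomers (1.625 versus 1.5 outcomes per unit time), which is the direction the argument needs, though only barely.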

iv. Now, presumably, to be calculated properly, as the size of the molecules gets larger, the advantage of the complex entities gets even larger.

a. Consider the 16 tetramers, 0000, 0001, ..., 1111. The number of centers of mass is P/4, hence the rate of pairwise collisions among the tetramers is 1/16 that for pairwise collisions among particles. And the scattering possibilities arise at 1/4 the rate.

b. Consider, without counting whether the particles be 1 or 0, merely the SIZES of molecules that can be formed by pairwise interaction among two tetramers, allowing all possible rearrangements:

8, 7/1, 6/2, 6/1/1, 5/3, 5/2/1, 5/1/1/1, 4/4, 4/3/1, 4/2/2, 4/2/1/1, 4/1/1/1/1, 3/3/2, 3/3/1/1, 3/2/2/1, 3/2/1/1/1, 3/1/1/1/1/1, 2/1/1/1/1/1/1, 1/1/1/1/1/1/1/1. This is 19 size classes already.

Now consider the number of possible 8mers that can be formed by 1111 and 0000, counting all possible rearrangements of particles. It is the binomial 8!/4!4! = 70. So we need to count the number of outcomes for each size class for a given pair of tetramers that interact. Thus, for the size class 4/4, there are again 70 outcomes possible when 1111 and 0000 interact, counting all possible rearrangements of each of the two product tetramers: 6 squared plus 4 squared plus 4 squared plus 2 = 70.

If the average number of outcomes per size class were, say, 10, then the number of outcomes over all size classes would be 190. But the collisions happen 1/16 as often among tetramers under isothermal conditions as for pairs of particles, i.e. 1/16 [P] squared, while particle interactions of higher order, up to 8 particles at a time, scale as [P] to the powers 2, 3, ..., 8. Again, we need the full calculation. But if the predominant interactions are pairwise, then the complex entities have 190/16 times the expected number of outcomes on a rate basis as do the pairwise particle outcomes. And of course 190 versus 1 or 2 outcomes plus scattering, on a per event basis.
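The two counts of 70 quoted above can be checked directly; the sum-of-squares form is an instance of Vandermonde's identity, and the text's "6 squared plus 4 squared plus 4 squared plus 2" is the same sum with the two 1-squared terms grouped as 2.

```python
# Check: distinct 8-mers from the particles of 1111 and 0000 = C(8,4),
# and the 4/4 size class gives the same total summed over compositions:
# C(4,0)^2 + C(4,1)^2 + C(4,2)^2 + C(4,3)^2 + C(4,4)^2 = C(8,4).

from math import comb

eightmers = comb(8, 4)                                  # 70
by_composition = sum(comb(4, k) ** 2 for k in range(5)) # 1+16+36+16+1
print(eightmers, by_composition)                        # 70 70
```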

So, the hope is that the full, proper, calculation will show that the expected number of outcomes per event, and divergence rate of cascading outcomes via events is, on average, higher when complex entities interact.

If correct, then that should yield that complexity begets classicity.

7.4.7) If true, then the narrowing wave function - active information with the rest lost elsewhere - IS PROPAGATING WORK-CUM-RECORD, as classicity IS frozen out as reality, hence record, and also carries energy stored in the state. The spreading wave function from the narrowed classical pathway continues the flow.

The crystallization of classicity, orienting specific particles in specific ways, helps orient the next particles in specific ways. (I am indebted to Phil Anderson for pointing this out.) Therefore, "seeds of classical structure" can build upon themselves and propagate outward. The successive "fixing" of specific orientations and other properties of now classical entities, one by another, is analogous to closure in a space of classical work-tasks and catalysis by which classical autonomous agents propagate to build a biosphere. In turn, if maximum growth of the adjacent possible renders the crystallization of classicity the easiest, then the propagating classicity should propagate towards those arrangements that maximize complexity and the growth of the adjacent possible.


7.5.1) Bohm and Hiley discuss a comparison of their approach to QM with others, Everett, DeWitt, Stapp, Deutsch, and Gell-Mann and Hartle.

i. I should stress the following point: For all of these approaches, there is NO ENDOGENOUS SELECTION PROCEDURE TO CHOOSE AMONG THE ALTERNATIVE HISTORIES OF THE UNIVERSE.

Thus, for Everett, there is a manifold many-mind splitting, where we need to know how the partial awarenesses of minds split off from one another.

For DeWitt, there is a literal splitting of the Universe, but the question arises of just when, in the passage from the entangled state to its "flow" down different possibilities, the universe splits. Also, for DeWitt we need a tunable complexity of entities such that the choices among quantum possibilities that then become classical emerge at just the right distance from quantum entangled states to fit what is observed in this universe.

In the Gell-Mann and Hartle approach, all possible histories of the universe are considered. A subset, defined by special initial conditions, can behave quasi-classically. IGUSES, information gathering and utilizing systems, can arise only in the quasi-classical domain, for only here can an IGUS make predictions.

In the Gell-Mann and Hartle approach, IGUSES are a step towards autonomous Agents, but Gell-Mann and Hartle have not considered functional closure, work cycles, and the requirements on a community of IGUSES such that they can, as coevolving Maxwell demons, actually assemble information about one another. In short, IGUSES must be Agents and be able to emerge.

7.5.2) Conversely, if complexity begets classicity autocatalytically and the concepts can be applied to the universe as a whole:


Lee Smolin's version of the above, given the Gell-Mann and Hartle picture, is that, among the decohering histories of the entire Universe, those that advance into the adjacent possible the "fastest" will decohere the most readily, because as the complexity-diversity of the adjacent possible increases fastest along such pathways, phase coherence is most readily lost.
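The claim that faster growth of accessible alternatives hastens the loss of phase coherence can be put in the standard language of environment-induced decoherence. This sketch is my gloss, not part of the original argument; the symbols (system states |s_i>, environment states |E_i>, N environmental components) are introduced here for illustration:

```latex
% Entangled system-environment state:
|\Psi\rangle \;=\; \sum_i c_i \, |s_i\rangle \otimes |E_i\rangle

% Reduced density matrix of the system, tracing out the environment:
\rho_S \;=\; \mathrm{Tr}_E\,|\Psi\rangle\langle\Psi|
       \;=\; \sum_{i,j} c_i c_j^{*}\,\langle E_j | E_i \rangle\; |s_i\rangle\langle s_j|

% For an environment of N effectively independent components,
% the overlap factor typically decays exponentially in N:
|\langle E_j | E_i \rangle| \;\sim\; \prod_{k=1}^{N} |\langle e_j^{k} | e_i^{k} \rangle| \;\sim\; e^{-N/\tau}
```

The off-diagonal "interference" terms of the reduced density matrix are suppressed by the overlap of the environment states, and that overlap shrinks exponentially with the number of environmental degrees of freedom engaged. On this reading, a history whose adjacent possible grows fastest entangles with the most degrees of freedom and so loses coherence most readily.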

If decoherence is only necessary, but not sufficient, for classicity, as now seems true, then hopefully the growth of maximum complexity (space plus simple and complex matter) will lead to classicity, including General Relativity and a Universe poised between expanding and contracting.

The fastest growing complexity might conceivably lead to a Universe poised between expanding forever and contracting: Were the early Universe to "make" too much matter compared to geometry, or space, it would soon collapse, hence have remained simple. Conversely, were the Universe to "make" too much space compared to matter, it would expand forever but remain a simple cold soup of subatomic particles, hydrogen and helium. Just at the boundary between expansion and collapse, it is said, the maximum complexity arises. Therefore, if the Universe propagates towards maximum complexity because that route crystallizes out into classicity, perhaps we can derive that the allocation, via the underlying spinor networks, of the transformations of space to matter and back also leads to a poised and maximally complex Universe.
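The "poised" boundary invoked here is the standard critical-density condition of Friedmann cosmology. The following is a textbook statement added for reference, not part of the original text:

```latex
% Friedmann equation for a homogeneous, isotropic universe
% (a = scale factor, \rho = energy density, k = spatial curvature):
H^{2} \;=\; \left(\frac{\dot a}{a}\right)^{2}
      \;=\; \frac{8\pi G}{3}\,\rho \;-\; \frac{k}{a^{2}}

% Critical density, at which spatial curvature vanishes (k = 0):
\rho_c \;=\; \frac{3H^{2}}{8\pi G}, \qquad
\Omega \;=\; \frac{\rho}{\rho_c}

% \Omega > 1 : closed universe, eventually recollapses ("too much matter")
% \Omega < 1 : open universe, expands forever ("too much space")
% \Omega = 1 : flat universe, poised between the two
```

Too much matter relative to space (Omega > 1) gives recollapse; too little (Omega < 1) gives eternal expansion and a cold, dilute universe; the poised case of the text corresponds exactly to Omega = 1.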

7.5.3) All Possible Laws?

There are a few hints in this direction. Wheeler's "It from Bit" is one; "Random Laws" by Holger Nielsen is another. Branching Universes, via Linde and via Smolin, are others. The hope would be, not to find a fundamental theory that fixes all the values of the constants, but rather to find a self-organizing evolutionary process that somehow "chooses" the values of the constants, or even the laws, such that "our Universe" emerges.

i. An Autonomous Agent is a duality of "tasks" and "processes" or events whereby the processes and tasks mutually instantiate one another.

ii. Can one formulate a theory in which all possible values of the constants can exist, in a superposed way, yet those combinations of values of the constants that lead, self-consistently, to the most rapidly expanding-complexifying Universe "win" because they become locked in via decoherence? The crude concept imagines a "fastest amplifying mode." Either there are regions of the universe with different values, and one "region" wins. Or, more attractively perhaps, the superposition of the values occurs everywhere in the spinor ravel. In this superposition, suppose nearby sets of values of the constants can, via the ravel, interact with one another, since the particles and masses and so on entailed by very nearly the same values of the constants would hopefully still be able to interact.

Conversely, sets of particles implied by distant sets of values of the constants could not interact. Then one could hope to construct a theory in which the values of the constants that were "best" would win the competition within one universe. An analogue of natural selection operating on a cosmic scale would select those sets of laws, constants, and implied particles which expand the universe the fastest. Perhaps via mere amplification, or some form of decoherence, the "best" values of the constants would lock in.

This view might require that the values of the constants differ very slightly in different regions of this universe. They are not supposed to do so. That is a problem.

And this view has to struggle with getting the values of the constants to "fix" at the right different times of expansion of the Universe.

If we generalize, we may therefore imagine that the laws themselves, seen as the rules of transformation among spinor networks, also evolve, such that the set of all possible laws compete all the "time," and those laws which yield the fastest growing adjacent possible always locally win. Those neighboring sets of laws and implied particles that coevolve to expand the total dimensionality of the Universe the fastest would, hopefully, sharpen to a set of winning laws and particles that explodes the fastest growing "cosmic ecosystem" with large-scale classical behavior. Then, out of the superposition of all possible laws, the Universe, one might hope, would naturally propagate towards maximal diversity and complexity. Further, "local winning" suggests that the laws may change over time and conditions. In the center of a black hole, perhaps the rules of spinor networks change such that, always, the fastest exploration of the adjacent possible "wins": crushing matter to a singularity crushes all alternative possibilities to nothing, and the spinor rules change to convert matter to space more rapidly, exploding a new universe along a different, orthogonal set of decohering quantum histories whereby, again, expansion into an adjacent possible can occur.

On this image, time itself is nothing but the non-ergodic expansion that is occurring fastest, hence dominating the unfolding of events whereby geometry, simple, and complex matter and broken symmetries emerge. (Phil Anderson has pointed out to me that some physicists think of time as the expansion of the Universe. I am here extending this to include the hypotheses that geometry and matter can interconvert, and that the laws compete such that the sustainedly fastest exploding set of laws and implied particles wins.)

One possible image of this lies in Smolin's picture of operators in spinor networks as graphs with "hands" on them, "grasping" other spinor networks and operating on them to modify the latter. But the "hands" can appear on the modified spinor network itself, which can then be an operator on still other pieces of the same or other networks.

One might hope that the "best" operator and operand spinor networks - i.e., those that proliferate most "rapidly" in the coevolution of operator and operand networks - would "win" because, by doing so, they expand into the adjacent possible the fastest, yielding, via decoherence under the winning subset of operators and operands, the present poised universe.

iii. Given the hints that an ecosystem of autonomous agents can, as if by an invisible hand acting locally on each Agent, tune its structure and behavior, perhaps these concepts may prove useful.