I attended this talk at Stanford by Chris Adami. I think his group is heading in a good direction. He speaks of evolving representations where I would speak of developing models. The distinction is minor. I would withhold the stronger term ‘evolve’ unless the weaker term ‘develop’ is shown to be inadequate. Adami made clear the distinction between AI programmers building models and AI programmers building software that automates the creation of models through experience. I had been vague on that vital distinction.

Adami also described the sort of circuits that he thinks are suitable. Adami’s circuits are more nearly Boolean than the usual synthetic neuron. I did not understand how the circuit ideas fit with the rest of his ideas but I suspect that they do; it fits my intuition. He uses circuits with stochastic truth tables where I would suppose a mere supply of random inputs would do. I think this is not an important quibble.
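To make the quibble concrete, here is a hypothetical sketch of the two alternatives: a gate with a stochastic truth table (output probability looked up per input pair) versus a deterministic gate fed noisy inputs. The table entries, function names, and probabilities are my own illustration, not taken from Adami’s work.

```python
import random

# A "stochastic truth table": for each input pair, the table gives
# the probability that the gate outputs 1, rather than a fixed bit.
# These entries make a noisy AND gate; the values are illustrative.
STOCHASTIC_AND = {
    (0, 0): 0.05,
    (0, 1): 0.10,
    (1, 0): 0.10,
    (1, 1): 0.95,
}

def stochastic_gate(table, a, b, rng=random):
    """Output 1 with the probability the table assigns to (a, b)."""
    return 1 if rng.random() < table[(a, b)] else 0

# The alternative I suppose above: a deterministic AND gate whose
# inputs are independently flipped with small probability, so the
# randomness is supplied at the inputs instead of inside the table.
def noisy_input_and(a, b, flip_prob=0.05, rng=random):
    if rng.random() < flip_prob:
        a ^= 1
    if rng.random() < flip_prob:
        b ^= 1
    return a & b

if __name__ == "__main__":
    rng = random.Random(0)
    n = 10000
    rate = sum(stochastic_gate(STOCHASTIC_AND, 1, 1, rng) for _ in range(n)) / n
    print(f"stochastic gate on (1,1) fires in about {rate:.2f} of trials")
```

Over many trials the two schemes produce similar input-output statistics, which is why the difference strikes me as a quibble rather than a substantive disagreement.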

I speculate that this development of models has ultimate phylogenetic roots in acquiring reflexes. An intermediate stage is causality—this causes that. But then what is the ‘this’ and ‘that’ above? Here we have the beginnings of the noun which may organize some set of similar encounters with the outside world.

I think that there are several stages of ‘going meta’ in phylogeny. This is when nature discovers (and records in our DNA) that signals in the brain that evolved for other purposes are as useful as exogenous signals. See this, where we improve by beginning to observe our internal states and become aware of ourselves as we became aware of trees.

Maybe more later.