Leslie Valiant gives names to familiar processes to help us think and theorize about learning. He outlines desiderata for a theory of learning that comes very near to what it means to be intelligent. I mostly agree with them. The perspective is more nearly mechanical than psychological.
L 412: Valiant wants a more quantitative theory of evolution. Sounds like a good idea.
L 420: I am getting a whiff of ‘learning to learn’.
L 588: “Prior to Turing, mathematics was dominated by the continuous mathematics used to describe physics, in which (classically, anyway) changes are thought of as taking place in arbitrarily small, infinitesimal increments.”
Don’t forget number theory. Euclid proved the infinitude of primes. 19th century number theory remains graduate fodder today. Not to mention ‘combinatorics’.
L 643: Don’t take the lower bound on multiplication complexity too seriously, because the true cost is actually better than Valiant says for numbers of several thousand bits. (Indeed, see the Karatsuba result later.) Few lower bounds are actually proven.
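To make the point concrete, here is a minimal sketch of Karatsuba's divide-and-conquer multiplication, the result alluded to above, which beats the schoolbook n^2 cost by trading four half-size multiplications for three. This is my own illustrative implementation, not anything from the book.

```python
def karatsuba(x, y):
    """Multiply nonnegative ints in ~n^1.585 digit operations, not n^2.

    Karatsuba's trick: split each number into high and low halves and
    recover the product from three (not four) half-size multiplications.
    """
    if x < 10 or y < 10:  # small base case: multiply directly
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> n, x & ((1 << n) - 1)
    hi_y, lo_y = y >> n, y & ((1 << n) - 1)
    a = karatsuba(hi_x, hi_y)                 # high * high
    b = karatsuba(lo_x, lo_y)                 # low * low
    c = karatsuba(hi_x + lo_x, hi_y + lo_y)   # both cross terms via one multiply
    return (a << (2 * n)) + ((c - a - b) << n) + b

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```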
L 777: The public key presentation is very good—Collatz’s problem too.
L 999: It is indeed noteworthy that the Turing tape and DNA are one dimensional and digital. I had not noticed this. Turing’s machines are pathologically inefficient and I hope that Valiant does not try too hard at Turing tape algorithms.
L 1004: I think that ‘protein expression circuit’ needs a long, clear definition. I have a long, muddy, and probably wrong impression of it: Sometimes (as contrasted with other times in the same cell), the concentrations of various proteins control which other proteins the DNA produces, by blocking or facilitating the particular gene for that latter protein. This influence works in part by the controlling proteins sticking to introns near the gene.
Valiant proceeds to describe the notion further, somewhat complementing my description.
I am quite sure that the first introns said something like: “Don’t do the next exon if there is much niacin about.” Some proteins will evolve only to become part of a circuit and thus support more indirect rules while the primitive rules remain simple. This is a far easier space to explore than the function class described by Valiant.
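To pin down the sort of primitive rule I have in mind, here is a toy sketch of a threshold-style expression rule. All names and numbers are my own illustrative inventions, not Valiant's and not real biology.

```python
# A toy gene-regulation rule of the kind imagined above: whether a gene
# is expressed depends on a simple threshold over current concentrations.

def express_gene(concentrations, repressor, threshold):
    """Return True if the gene should be expressed now.

    Mimics "don't do the next exon if there is much niacin about":
    expression is blocked when the repressing metabolite is abundant.
    """
    return concentrations.get(repressor, 0.0) < threshold

cell = {"niacin": 0.8, "lactose": 0.1}
assert not express_gene(cell, "niacin", 0.5)   # plenty of niacin: blocked
assert express_gene(cell, "lactose", 0.5)      # little lactose: expressed
```

A circuit, in this picture, is just such rules wired together: the output protein of one rule appears among the input concentrations of another, which is a far smaller search space than arbitrary functions.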
L 1069: Concerning the connection between cognition and computing:
L 1179: The urn: Valiant should say “draw without replacement” for the rest of his text to make sense.
L 1341: Valiant makes points much like this and this. This chapter could be titled “Math Envy” of philosophers who try to make ordinary knowledge and logic meet mathematical standards. Induction works just fine for purposes of survival. This is a complementary conundrum to the observation of the unreasonable effectiveness of mathematics.
L 1571: I think that abstraction and ‘clumping’ are necessary concepts here in an adequate theory of learning.
L 1592: I think that Valiant’s ‘teacher’ concept does not exclude a book, especially a textbook. Interactive teachers, human or computer, have advantages.
L 1637: Valiant asks how intelligence might have evolved. Bravo! Many biologists won’t even admit that intelligence is an important human characteristic.
L 1770: Valiant’s conception of ‘ideal function’ makes it important that the class of functions not include too many functions that cannot be expressed in DNA. It may be 100 years too soon to achieve Valiant’s program.
At this point in the book I record the following. I think it may be fruitful to speak of evolution as a form of learning. There is a sense in which even plants understand things about their environment. I think it is useful to speak of algorithms the higher animals employ to learn. I think it is not useful, however, to seek the ‘algorithm’ by which evolution learns. When animals learn, there are things happening in the brain, which is the site of the algorithm. There is no such site, nor algorithm, in the case of the process by which evolution learns.
L 1981: I am queasy about the notion that the genome must approach the ‘ideal function f’ in order for the species to thrive. It must soon avoid fatal consequences and thereafter merely meander uphill, ‘up’ being a multidimensional vector. What bothers me is that each individual contributes only about one bit to this learning. Of course there are only about 10^10 bits to be learned, which I find a surprisingly small number. We need several times that number of ancestors, but not that many generations.
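The back-of-envelope arithmetic behind that remark can be made explicit. The genome size and one-bit-per-individual figure are from the note above; the population size is my own illustrative assumption, not a number from the book.

```python
# If ~10**10 bits must be learned and each individual contributes ~1 bit,
# we need on the order of 10**10 ancestors. But ancestors accumulate in
# parallel across a population, so generations are far fewer.

bits_to_learn = 10**10
bits_per_individual = 1
population_per_generation = 10**6   # illustrative assumption

ancestors_needed = bits_to_learn // bits_per_individual
generations_needed = ancestors_needed // population_per_generation

assert ancestors_needed == 10**10
assert generations_needed == 10**4   # many ancestors, few generations
```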
L 2018: I think that Valiant assumes that evolution is working on only one problem at a time and that one SNP is on trial for the solution of just one problem. He still seems to think that we need to find nature’s algorithm. There is no such algorithm, for there is no site at which to express it. The mechanisms are already all in plain sight, but we may not have noticed all of the various ‘failure modes’ of DNA copying, which are in fact what progress is due to. Valiant seems to argue for the existence of something more obscure or even mystical. Here is my take.
Did we evolve to evolve? Well, the invention of DNA was such a thing. DNA copy-correction mechanisms may have improved evolution by slowing it down.
L 2087: As I read the book I find Valiant relying too heavily on aspects of computing theory that I found questionable as he introduced them. Computers do all sorts of useful things on exponential problems yet are totally incapable of solving some polynomial problems. We do not live in asymptopia, and neither does nature.
L 2123: I still object to reasoning about the ‘ideal solution’ f. It smells of teleology. It might be rescued if one could show that choice of f is immaterial, but I can’t see how to do that.
L 2228: I would accuse Valiant of “syllogistic” reasoning in place of “inductive” reasoning in this context, where he praises inductive reasoning, but take that with a grain of salt. At least he has noticed that some computers induce. Incidentally, doing math requires much induction.
L 2280: I like section 7.2 and have some quibbles. Valiant contrasts reflexes and reasoning. (My insight is that reasoning is itself a form of reflex, but this is not to denigrate Valiant’s points; it is an evolutionary explanation.) We acquire reflexes and also patterns: “There are no rabbits to be found when it is raining.” These are stored in different parts of the head. The latter patterns are rather like the rules that DNA carries about when to produce some protein. The logic of higher animals is analogous to, but not a descendant of, these intron rules. It started out simple and largely remains simple today. (Wrong context.)
L 2436: I love the Galton quote.
L 2476: In contrast to Valiant’s use of computability theory, I think his analogies between brain and computer hardware are good and highly relevant.
L 2689: Mostly we do not live in a physical world; we live mainly in a cultural world. Our heads mainly process the expressed thoughts of others, or wonder what others are thinking. Maybe eagles live in a more physical world.
L 2892: Valiant keeps saying that computer learning is already commonplace. He should enumerate a few concrete successes; he does go on to mention a few later. Google’s machine translation is now mostly learned in Valiant’s sense, and people can sample its worth. (It is useful.)
Turing’s computing theories are very poor quantitatively. Distinguishing between P and NP won’t do it. I agree that it needs to be done. People working with neural nets might come up with something sometime, but not soon, I think.