Evolution under the philosopher’s lens


Evolution through natural selection is surely the best historical analysis we have to account for the immense biodiversity we see in the world today. Everyone is talking about it these days, and extending the abstract principle behind evolution – a selective algorithm – to problem domains as diverse as the spread of religious beliefs and the growth of neurons in the brain.

And yet, does it tell us everything, or at least the most important things, that we ought to know about the nature of these wonderful dynamic systems we call life? Let us for a moment reflect on the bare-bones essence of evolution, and then return to this pertinent question with an answer.

Proto-life consisted of replicating molecular templates. Each had a molecular structure that gradually grew into a complex form by colliding with, and binding, molecular building materials from the environment. Once it reached a certain form, this growing molecular template continued to bind materials from the environment and made a copy of itself. The copy in turn grew and made another copy, and so forth.

Except that this copy-producing process was not perfect. Owing to the detailed mechanics of how a template “bound together” another template, minor imperfections in form occurred, usually because environmental forces physically acted on the “work-in-progress” copy.

This imperfect copying introduced an amazing new time-staggered process that we could simply call “selection for robustness” in nature. That is, some templates, which were slightly different from the others owing to the aforesaid “copy imperfections”, were destroyed less easily by environmental forces. They were more robust. Thus they stood a higher chance of growing to the stage where they would copy themselves. In other words, there was an asymmetric culling of this proliferation of self-copying molecular templates. The templates that were not destroyed by the environment carried with them the modified structure (i.e. the copy imperfections that made them more robust), which in turn was passed down the “copy” generations.

These tiny modifications that caused the template to “succeed better in the environment” aggregated over countless copy generations. A range of replicators emerged that had physical configurations that made them visibly “more robust” in the environment. What were originally growing, self-replicating molecular templates, gradually transformed themselves into “dynamic molecular machines”, through the influence of a multitude of destructive environmental forces. They were the first living organisms.
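
As an aside, the selective algorithm sketched above can be written down in a few lines of code. What follows is a deliberately crude toy model in Python; the “templates”, the mutation rate and the “robustness” score are all invented purely for illustration and make no pretence of modelling real chemistry:

```python
import random

def mutate(template, rate=0.05):
    """Copy a template (a list of numbers standing in for structural traits),
    introducing small random 'copy imperfections'."""
    return [t + random.gauss(0, 1) if random.random() < rate else t for t in template]

def robustness(template):
    """Toy stand-in for how well a structure resists destructive environmental forces."""
    return sum(template)

def generation(population, carrying_capacity=100):
    """Every template copies itself imperfectly; the environment culls the least robust."""
    offspring = [mutate(t) for t in population for _ in range(2)]
    offspring.sort(key=robustness, reverse=True)
    return offspring[:carrying_capacity]

population = [[0.0] * 10 for _ in range(100)]   # identical ancestral templates
for _ in range(200):
    population = generation(population)

print("mean robustness after 200 generations:",
      sum(robustness(t) for t in population) / len(population))
```

Run it and the mean “robustness” drifts steadily upward: nothing in the copying step aims at improvement, yet the asymmetric culling accumulates the favourable copy imperfections, which is the whole point of the paragraph above.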

They continued to become more robust, and the rest is history…

Now, in spite of the many challenges and modifications that one could throw at evolution, I sincerely doubt it can ever be falsified as a historical analysis of the complexification of life. It has withstood many intriguing criticisms in the past. One rather fascinating “spanner” thrown into evolution was the idea that organisms themselves play a bigger role in their own evolution by altering the environment around them, such as a surface creature suddenly burrowing underground and thereby altering its own environmental pressures. So what? Life still evolves, and growth, replication and natural selection still occur.

All right. Now having said all this, let us put evolution under the lens of philosophical scrutiny.

Is evolution through natural selection an ontological fact to which we can attribute the status of a natural law? Is it a “phenomenon” in nature like (say) the photoelectric effect, or at least a propensity in the environment, like the propensity for there to be lightning on a cloudy day? Or, strictly speaking, is evolution through natural selection only a good historical analysis, good because it provides us a way to summarize, in common-language terms, an immeasurably vast collection of independent physical events that led to today’s biodiversity?

Let us for a moment digress and consider another “good story” as a sort of “humorous parallel” to evolution.

“Computers are the vastly complex tools they are today because their manufacture is bootstrapped. There were early computers, which were used to manufacture the CPUs of the next generation of computers. The use of the earlier generation of computers to make better CPUs resulted in more powerful and versatile new computers. This is what we call manufacture by bootstrapping. Over countless generations of computer manufacture, it caused an exponential increase in the power and versatility of the machines. Thus bootstrapping is the force which drives the increase in the versatility and power of computing”.

Now my friends, this computer story is not a tautology. There is empirical “truth” in it; bootstrapped manufacture is a sort of “historical principle” in computer engineering that explains one particular aspect of the advancement in computing capacity. The point, though, is that even if it is true and has explanatory value in historical terms, it is not the only truth to be learned about how computers increased in capacity. In fact, it is not even the most fascinating or pragmatically useful truth about the development of computing. The deeper engineering truths lie in the physical configurations of the machines and their functional designs. For example, the fetch-execute cycle and the early hardware that facilitated it. If a primitive man were teleported suddenly into today’s world and told about the principle of bootstrapped manufacture, he would simply be unable to do much with it. It would be true, but it would be what some call a “blunt weapon”.
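
Purely as an illustration of what a configuration-level description looks like (a made-up toy machine of my own, not anything from the actual history of computing), here is a minimal fetch-decode-execute loop in Python:

```python
def run(program, memory):
    """Minimal fetch-decode-execute loop for an invented accumulator machine.
    Instructions: ('LOAD', addr), ('ADD', addr), ('STORE', addr), ('HALT',)."""
    pc, acc = 0, 0
    while True:
        instruction = program[pc]      # fetch the next instruction
        op, *args = instruction        # decode it into an opcode and operands
        pc += 1
        if op == 'LOAD':               # execute
            acc = memory[args[0]]
        elif op == 'ADD':
            acc += memory[args[0]]
        elif op == 'STORE':
            memory[args[0]] = acc
        elif op == 'HALT':
            return memory

memory = {0: 2, 1: 3, 2: 0}
print(run([('LOAD', 0), ('ADD', 1), ('STORE', 2), ('HALT',)], memory))  # {0: 2, 1: 3, 2: 5}
```

Even this caricature conveys more engineering substance than the bootstrapping story does, which is exactly the distinction being drawn.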

So it is with the story of evolution through natural selection. History has proven it to be a blunt weapon. Science is still struggling desperately to engineer living creatures. And in spite of the various computer simulations that are touted as “hopeful”, there is no visible progress in engineering even the simplest replicating, growing, evolving system.

What has more utility to us human beings: calling evolution – a top-down explanatory construct in our minds – a “law” in nature, or understanding the exact physical configuration of the template that can replicate, and the laws that govern its behavior? The magic, it would seem, is in the template and its mathematics, and not in the “historical law”, in spite of the latter’s educational utility.

The better science lies in investigating the actual physical configurations that enable the various behaviors of living systems, beginning with understanding molecular-level replicators. We have mapped the human genome, but we are only just beginning to understand the problem of ontogenesis – such as the protein-folding problem. We seem nowhere near understanding the fundamental problem of creating an artificial gene, or a replicating template that can evolve in an environment.

There is a deeper philosophical implication of this story. The actual propensity, the “magic” if you like, of any given system lies in its configuration. There are ground conditions for any observable phenomenon, and there is an emergence of new systemic properties (holistic properties if you like) – the phenomenon itself. Take electromagnetic radiation, for example. There is a particular ground condition to be satisfied, say a primed L-C circuit. There is an emergent phenomenon: the electromagnetic disturbance that detaches and travels.
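
To make the L-C example concrete (a standard textbook relation, added here only to show that a “ground condition” can be specified exactly): the natural frequency of the circuit, which sets the frequency of the radiated disturbance, follows from the values of the inductance L and the capacitance C alone.

```latex
% Natural (resonant) angular frequency of an ideal L-C circuit
\omega_0 = \frac{1}{\sqrt{LC}}, \qquad f_0 = \frac{1}{2\pi\sqrt{LC}}
```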

I believe the evolution of life went through countless such “ground states” that had emergent phenomena, causing “runaway” evolutionary paths, beginning with the basic problem of the physical configuration (or form) of the simplest molecular structure that could evolve in the first place. We are told it might have been an “RNA-type” structure. But we don’t know. Why can’t a replicating structure be an exceedingly simple one to begin with? Why can’t we engineer such a simplified replicator and cause it to chain-react with the environment, as supposedly happened so long ago? After all, the “law” of Darwinian evolution seems to be pretty generic.

The reason we are still groping in the dark is that this growing, replicating template is not so simple, and we haven’t understood the mechanics behind it. The proto-cell and the proto-gene probably grew together, and we have no good physical understanding of what makes them chain-react with the environment. The day we do have it, the mathematical models and laws that govern the behavior of those primordial physical structures will be the “real” laws of the story of life. [NB: there is some work being done to understand replication via hypercycles, for example, which offers hope.]
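
For the curious, the hypercycle work alluded to in the note (Eigen and Schuster) does supply one concrete example of the kind of mathematics being asked for. In its simplest textbook form, the relative concentrations x_i of n replicators, each catalysed by its predecessor in a closed loop, obey:

```latex
% Elementary hypercycle: species i is catalysed by species i-1 (indices taken modulo n)
\dot{x}_i = x_i \left( k_i\, x_{i-1} - \Phi(\mathbf{x}) \right),
\qquad
\Phi(\mathbf{x}) = \sum_{j=1}^{n} k_j\, x_j\, x_{j-1},
\qquad
\sum_{i=1}^{n} x_i = 1 .
```

The rate constants k_i for any actual primordial system are, of course, precisely the unknowns this paragraph is complaining about.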

Let me conclude by saying that classical evolution is not a tautology: it has excellent explanatory value towards understanding the complexification of life. It is a good historical analysis of the “big picture”. Crackpots who think that life was created in 4004 BC via a bolt of lightning are plain wrong. However, evolution is not a universal law of nature that explains the presence of life in the first place, or even the mechanics of growth and reproduction. I only wish that when some of us speak about evolution, we would remain honest about this hard fact.

PS: I have borrowed many ideas from countless distinguished men and women of science in presenting my above viewpoint, such as Richard Dawkins, Dan Dennett, J.B.S. Haldane, Lynn Margulis, … or even my own dad. I couldn’t possibly recall them all, but I owe them the supreme debt of being a student of their ideas at one time or another.


That Warm Pond

Thoughts on the origin of life via replicators, and a possible missing perspective; “the self-organizing properties of matter”

– by Ruwan Rajapakse

The life-sciences, unlike the physical-sciences, are often shrouded in mystery when delivered to the general public. For example, whilst it is commonplace for The Special Theory of Relativity to be described in a few words using the analogy of localized clocks and high-speed space travelers, there is no comparable analogy provided when it comes to describing the origin and nature of life. There certainly is much talk about RNA replicators and the arising of a protocell (which supposedly functioned in a similar way to modern-day ribosomes) in the “primordial soup”. But the physical chemistry, or rather the molecular bonding process, for the creation of such a protocell has either not been discovered (which is most likely the case) or else is never clearly described. All we have seen are long romantic accounts of how DNA was discovered, or of how Stanley Miller boiled down his primordial soup. I mean no disrespect to gents like Watson or Miller, or ladies like Franklin. On the contrary, it was reading about their inspiring work and lives that made me, an uneducated layman, dare to pen a few thoughts on the subject.

Let me immediately state the core of my hypothesis. Unless we can theoretically demonstrate the transformation of complex molecules from static, randomly colliding bits of matter obeying the laws of chemical interaction as they are known today, into similar bits of matter that can replicate themselves and thus become subject to evolutionary forces, we are missing an entire perspective. Now, I’m not going to propose that God in Heaven sending down His magical lightning bolt is what synthesized this first protocell (although one has to admit that one cannot rule that hypothesis out entirely, scientifically). Nope. The perspective we are missing is a bootstrapping process we don’t quite understand as yet, and whose boundary conditions are not as yet discovered. Hence Miller’s soup bubbling away to failure. It is a bit like knowing that the “Sun burns” and trying to figure out its source of energy, without understanding Nuclear Chemistry in general and the process of Nuclear Fusion in particular. I am going to call the perspective we are missing today “Organizing Chemistry” (not to be confused with Organic Chemistry, although the former may turn out to be a subset of the latter). Or, alternatively, we may call it “Algorithmic Chemistry”, and we shall shortly see why.

What would be the salient philosophy of this new branch of Chemistry? Well, for starters, it would recognize that life is an operational paradigm-shift in nature and the result of a counter-entropic chain reaction, as opposed to the entropic chain reactions of Nuclear Fission or Fusion that we are presently familiar with. This Organizing chain reaction would have certain definitive boundary conditions, and may involve only certain types of molecules, at particular heat or energy levels, taking particular time scales to propagate (though not necessarily the humongous ones subtly suggested by Richard Dawkins – Chapter 13 in The Greatest Show On Earth). There would be an algorithm (or class of algorithms) underpinning the evolutionary process, once started. The convergence of many early evolutionary lines towards common archetypes suggests that there is broadly a “winning algorithm” of sorts, at least in a statistical sense. However, it would be matter itself which would have this propensity for replication (and hence evolution), and the informational aspect of RNA or DNA would cease to have meaning outside its molecular structure. Matter evolves, not information. Information just seems to evolve.

At this juncture I hazard a guess that one cannot have a replicating informational entity in a computer that would evolve into anything like life, contrary to the supposed evidence available in the computer simulation models discussed, for example, by Dan Dennett. These computer models show “successful evolution” in a certain limited context, keeping in mind that the actual criterion for choosing the fittest at each generation is defined by us humans. In reality, evolutionary pressures are (A) a changing dynamic and, more importantly, (B) in a recursive relationship with the evolving creatures. In short, evolutionary pressures are tightly bound up with the evolving creatures and are inseparable from them. The “designer’s evolution” of Dennett’s variety is a good model to prove to us that the concept of evolution by artificial selection works, but it has little more to offer to the story of the evolution of life on Earth.
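
To make the contrast between (A) and (B) concrete, here is a minimal, entirely hypothetical Python sketch: the first fitness function is fixed in advance by the “designer”, while the second depends on the current population itself, so the selective pressure shifts as the population evolves. It illustrates the distinction only; it is not a model of any real system.

```python
import random

def designer_fitness(genome):
    """Fixed, externally imposed criterion: closeness to a target the designer chose."""
    target = [1, 0, 1, 1, 0, 1, 0, 0]
    return sum(g == t for g, t in zip(genome, target))

def recursive_fitness(genome, population):
    """Frequency-dependent criterion: genomes unlike the rest of the population do better,
    so the selective pressure is itself a function of the evolving population."""
    return sum(sum(g != h for g, h in zip(genome, other)) for other in population)

def evolve(fitness, coupled=False, size=30, length=8, generations=50):
    """Toy generational loop: score, keep the top half, copy them with rare bit-flips."""
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(size)]
    for _ in range(generations):
        ranked = sorted(population,
                        key=lambda g: fitness(g, population) if coupled else fitness(g),
                        reverse=True)
        parents = ranked[: size // 2]
        population = [[b if random.random() > 0.02 else 1 - b
                       for b in random.choice(parents)]
                      for _ in range(size)]
    return population

evolve(designer_fitness)                    # selection against a fixed, designer-chosen target
evolve(recursive_fitness, coupled=True)     # selection pressure co-evolves with the population
```

In the second run there is no fixed target at all: what counts as “fit” is redefined by whatever the population currently is, which is the recursive coupling the paragraph above insists on.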

There is a faint possibility that the correct “boundary conditions” for replication exist in the chemistry of the modern-day cell. I don’t exactly mean in the context of what Craig Venter has done recently, modifying the genome “artificially” to change the characteristics of a cell. What I mean is that the most primitive of molecular replicators, or artificially produced nucleic acids, might be made to replicate within the modern cellular environment, rather than our trying to “figure out” the parameters of the primeval oceans. The cell, after all, is like a bit of “cut-off ocean” retained from billions of years ago, when the necessary boundary conditions were perhaps abundant “outside”. So artificially synthesized replicator molecules may “have a go” (or chain-react) in today’s cells. At least, it’s worth a try to “mess around” with cells and complex molecules.

Let me leave you with this concluding thought. It has been supposed by Dawkins and others that the original protocell or replicator was a “one-off” chance “knocking together” of the right kind of organic molecules after billions of years, and that its arising was an astronomically rare event (Chapter 13 in The Greatest Show On Earth). For example, Dawkins mentions that it probably didn’t happen many times on earth, maybe only just once. The implication is that the origin of life is such a rare, time-consuming bit of chemistry that it is basically a fluke. This is the core concept which I refute; I believe that, given the right conditions, life would arise readily, and it would do so because it is a fundamental property of matter to get triggered into a self-organizing process under those conditions, which we have yet to discover.
