Dawkins and the Monkeys

“Ford! There’s an infinite number of monkeys outside who want to talk to us about this script for Hamlet they’ve worked out.” — Arthur Dent, The Hitchhiker’s Guide to the Galaxy by Douglas Adams

Earlier this year, I produced two posts citing mathematician John C. Lennox’s book, God’s Undertaker. The first was about Galileo’s legendary fight with the Roman Catholic Church over his astronomical discoveries, and the second was about the equally legendary debate between T.H. Huxley and Bishop Samuel Wilberforce regarding the strengths and weaknesses of Darwin’s new theory of the origin of species by means of natural selection.

Now, I hadn’t planned on doing any more posts from this book, but… well… I’m going on vacation for a week or so and needed a couple posts that would be relatively easy to prep and schedule ahead of time. So, this week we’ll look at Richard Dawkins’ (in)famous illustration of monkeys producing the works of Shakespeare.

— — —

“Richard Dawkins contends that unguided natural processes can account for the origin of biological information — no external source of information is necessary. In The Blind Watchmaker he uses an analogy whose roots lie in an argument alleged to have been used by T.H. Huxley in his famous debate with Wilberforce in Oxford in 1860. Huxley is said to have argued that apes typing randomly, and granted long life, unlimited supplies of paper and endless energy, would eventually type out one of Shakespeare’s poems or even a whole book, by chance.

Well, it is hardly likely that Huxley said such a thing for the simple reason that typewriters were not available on the market until 1874. But no matter. It is a nice story and, within the limit now set for the age of the universe, let alone that set for the earth, it is easy to see that it is mathematical nonsense. The eminent mathematician Gian-Carlo Rota in a book on probability (unfinished at the time of his death) wrote: ‘If the monkey could type one keystroke every nanosecond, the expected waiting time until the monkey types out Hamlet is so long that the estimated age of the universe is insignificant by comparison… this is not a practical method for writing plays.’

The calculations are not hard to do. For example, Russell Grigg, in his article ‘Could Monkeys Type the 23rd Psalm?’, calculates that if a monkey types one key at random per second, the average time to produce the word ‘the’ is 34.72 hours. To produce something as long as the 23rd Psalm (a short Hebrew poem made up of 603 letters, verse numbers and spaces) would take on average around 10^1017 years. The current estimate of the age of the universe lies somewhere between four and fifteen times 10^9 years. According to Dawkins’ definition, this calculation certainly makes the 23rd Psalm a complex object: it possesses ‘some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone’.
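[A quick sanity check on those figures: the quoted passage does not say how many keys Grigg’s hypothetical typewriter has, but assuming 50 equally likely keys, struck at random once per second, reproduces both numbers almost exactly. A rough Python sketch:]

```python
import math

# Assumption (not stated in the quoted passage): a typewriter with 50 equally
# likely keys, struck at random once per second.
KEYS = 50
SECONDS_PER_HOUR = 3600
SECONDS_PER_YEAR = 3600 * 24 * 365.25

# Expected keystrokes before a given 3-letter word such as 'the' appears
attempts_the = KEYS ** 3                          # 125,000 keystrokes
print(attempts_the / SECONDS_PER_HOUR)            # ~34.72 hours

# Expected keystrokes before a given 603-character text (the 23rd Psalm)
# appears, worked in logarithms because the number itself is astronomical
log10_keystrokes = 603 * math.log10(KEYS)         # ~1024.5
log10_years = log10_keystrokes - math.log10(SECONDS_PER_YEAR)
print(log10_years)                                # ~1017, i.e. about 10^1017 years
```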

Since 1 July 2003 a ‘monkey typewriting’ simulator has been in operation: a random number generator that simulates monkeys typing one key per second. They started with 100 monkeys and this number doubles every few days — and of course there is an unlimited supply of bananas. The current record is 24 consecutive letters from Shakespeare’s Henry IV, produced in around 10^40 monkey-years (the age of the universe is estimated at less than 10^11 years).
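[The essence of that kind of simulation is easy to reproduce: generate random text and score it against a target passage. The sketch below is purely my own illustration, not the 2003 simulator’s actual code, and a single stand-in line plays the part of the complete works:]

```python
import random
import string

# Toy illustration only; the real simulator's mechanics and scoring differ.
TARGET = "to be or not to be that is the question"   # stand-in for Shakespeare
ALPHABET = string.ascii_lowercase + " "

def longest_match(attempt: str, target: str) -> int:
    """Length of the longest initial segment of `target` found anywhere in `attempt`."""
    best = 0
    for start in range(len(attempt)):
        length = 0
        while (start + length < len(attempt)
               and length < len(target)
               and attempt[start + length] == target[length]):
            length += 1
        best = max(best, length)
    return best

random.seed(0)
monkey_output = "".join(random.choice(ALPHABET) for _ in range(1_000_000))
# A million random keystrokes typically match only 4 or 5 consecutive characters
print(longest_match(monkey_output, TARGET))
```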

[Image: chimps (which are apes, not monkeys) seated at old-fashioned typewriters]

Calculations of this kind have long since persuaded most scientists — Dawkins included — that purely random processes cannot account for the origin of complex information-laden systems. Dawkins cites Isaac Asimov’s estimate of the probability of randomly assembling a haemoglobin molecule from amino acids. Such a molecule consists of four chains of amino acids twisted together. Each of the chains consists of 146 amino acids and there are 20 different kinds of amino acid found in living beings. The number of possible ways of arranging these 20 in a chain 146 links long is 20^146, which is about 10^190. (There are only about 10^70 protons in the entire universe.)
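[The 20^146 arithmetic is easy to confirm:]

```python
import math

# 20 kinds of amino acid arranged in a chain 146 links long:
print(146 * math.log10(20))   # ~189.95, so 20**146 is roughly 10^190
print(len(str(20 ** 146)))    # 190 digits, as expected
```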

We remind the reader of Dawkins’ unequivocal conclusion: ‘It is grindingly, creakingly, crashingly obvious that, if Darwinism were really a theory of chance, it couldn’t work. You don’t need to be a mathematician or a physicist to calculate that an eye or a haemoglobin molecule would take from here to infinity to self-assemble by sheer higgledy-piggledy luck.’

Sir Fred Hoyle and astrophysicist Chandra Wickramasinghe share Dawkins’ view — on the capabilities of pure chance processes, that is. ‘No matter how large the environment one considers, life cannot have had a random beginning. Troops of monkeys thundering away at random on typewriters could not produce the works of Shakespeare, for the practical reason that the whole observable universe is not large enough to contain the necessary monkey hordes, the necessary typewriters and certainly not the waste paper baskets required for the deposition of wrong attempts. The same is true for living material. The likelihood of the spontaneous formation of life from inanimate matter is one to a number with 40,000 noughts after it… It is big enough to bury Darwin and the whole theory of evolution. There was no primeval soup, neither on this planet nor on any other, and if the beginnings of life were not random, they must therefore have been the product of purposeful intelligence.’

[We will fast-forward past a couple pages of discussion about Dawkins’ The Blind Watchmaker and his attempts to drastically reduce the probabilities by proposing a target phrase by which to compare each letter…]

We note that Dawkins’ model involves both chance (the typing monkeys) and necessity (the law-like algorithm that does the comparing of an attempt with the target phrase). His algorithm measures what is called the ‘fitness’ of a solution by calculating the difference or ‘distance’ of that solution from the target phrase.
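[For readers who have not seen it, the kind of program Lennox is describing can be sketched in a few lines. The target phrase is the one Dawkins used; the mutation rate and the number of offspring per generation below are my own illustrative choices, since The Blind Watchmaker does not pin them down:]

```python
import random
import string

# Minimal sketch of a 'weasel'-style cumulative-selection program of the kind
# Lennox describes. Mutation rate and offspring count are illustrative values.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "
MUTATION_RATE = 0.05
OFFSPRING = 100

def fitness(attempt: str) -> int:
    """Number of positions at which the attempt already matches the target."""
    return sum(a == t for a, t in zip(attempt, TARGET))

def mutate(parent: str) -> str:
    """Copy the parent, randomly changing each character with some probability."""
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in parent
    )

random.seed(1)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    # The 'law-like algorithm': keep whichever offspring is closest to the target
    parent = max((mutate(parent) for _ in range(OFFSPRING)), key=fitness)
print(generation, parent)   # reaches the target, typically within a couple of hundred generations
```

[Note that the target phrase and the fitness comparison are written into the program in advance, which is exactly the point Lennox goes on to make.]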

We have now reached the heart of Dawkins’ argument. Remember what it claims to show — that natural selection — a blind, mindless, unguided process — has the power to produce biological information. But it shows nothing of the kind. Dawkins has solved his problem only by introducing the two very things he explicitly wishes at all costs to avoid. In his book he tells us that evolution is blind, and without a goal. What, then, does he mean by introducing a target phrase? A target phrase is a precise goal which, according to Dawkins himself, is a profoundly un-Darwinian concept. And how could blind evolution not only see that target, but also compare an attempt with it, in order to select it, if it is nearer than the previous one?

Dawkins tells us that evolution is mindless. What, then, does he mean by introducing two mechanisms, each of which bears every evidence of the input of an intelligent mind — a mechanism that compares each attempt with the target phrase, and a mechanism which preserves a successful attempt? And, strangest of all, the very information that the mechanisms are supposed to produce is apparently already contained somewhere within the organism, whose genesis he claims to be simulating by his process. The argument is entirely circular.

It should be noted that it is this feature that distinguishes Dawkins’ mechanism from an evolutionary algorithm. Evolutionary algorithms are well known from engineering and other applications as excellent, well-tried ways of finding a solution to a complex problem. For instance, Rechenberg demonstrated an evolutionary strategy whereby the electrical resistance of a complex system could be minimized by successive applications of random variations. At each ‘evolutionary step’ the system’s parameters are varied arbitrarily and the resistance measured. If the variation leads to increased resistance it is reversed; if to decreased resistance it is retained and used as the starting position for the next step. Such an evolutionary strategy assumes that a measurable parameter exists which one wishes to optimize — for instance, one might wish to minimize electrical resistance. With the objective of minimizing the resistance, the model tests all possible forms reached by chance variation and eventually produces the previously unknown optimal form.
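[Expressed as code, the keep-if-better logic is very simple. The ‘resistance’ function below is an invented stand-in for a real physical measurement; only the selection rule matters for the point being made:]

```python
import random

# Toy sketch of the kind of (1+1) evolution strategy described above. The
# resistance() function is a made-up stand-in for a physical measurement.
def resistance(params):
    # hypothetical system: resistance is minimal when every parameter equals 1.0
    return sum((p - 1.0) ** 2 for p in params) + 0.5

def evolve(n_params=5, steps=10_000, step_size=0.1):
    current = [random.uniform(-5, 5) for _ in range(n_params)]
    best = resistance(current)
    for _ in range(steps):
        candidate = [p + random.gauss(0, step_size) for p in current]
        r = resistance(candidate)
        if r < best:              # decreased resistance: keep the variation
            current, best = candidate, r
        # increased resistance: the variation is simply discarded ('reversed')
    return current, best

random.seed(2)
params, r = evolve()
print(round(r, 4))   # settles close to the minimum value 0.5
```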

Thus, and this is the important point here, at the beginning of the process the solution is not known. In the Dawkins scenario the exact opposite is the case, as we have just seen. So it would be rather naive to argue that Dawkins’ simulation is plausible because of the success of evolutionary algorithms.

Indeed, mathematician David Berlinski in a much-discussed article rather trenchantly comments: ‘The entire exercise is… an achievement in self-deception. A target phrase? Iterations which resemble the target? A computer or Head Monkey that measures the distance between failure and success? If things are sightless, how is the target represented, and how is the distance between randomly generated phrases and the targets assessed? And by whom? And the Head Monkey? What of him? The mechanism of deliberate design, purged by Darwinian theory on the level of the organism, has reappeared in the description of natural selection itself, a vivid example of what Freud meant by the return of the repressed.’

Oddly, Dawkins admits that his analogy is misleading, precisely because cumulative natural selection is ‘blind to the goal’. He claims that the programme can be modified to take care of this point — a claim that, not surprisingly, is nowhere substantiated, since it cannot be. Indeed such a claim, even if it were true, would serve to establish the exact opposite of what Dawkins believes, since modifying a programme involves applying yet more intelligence to an intelligently designed artefact — the original programme.

Dawkins’ more sophisticated biomorph programme — a computer package in which the computer generates certain shapes to be displayed on the screen, which the computer operator can select for their elegance, etc., leading to increasingly complex patterns called biomorphs — similarly involves an intelligently designed filtering principle. Remove the filtering principle, the target and the Head Monkey, and you end up with gibberish. For their plausibility, then, Dawkins’ analogies depend on introducing to his model the very features whose existence in the real world he denies.

[Image: an early self-winding watch]

What Dawkins has really shown is that sufficiently complex systems such as languages of any type, including the genetic code of DNA, are not explicable without the pre-injection of the information sought into the system.

A simpler example of what is going on here is provided by a self-winding watch. Such a device uses the random movements of wrist and arm to wind itself up. How does it do that? An intelligent watchmaker has designed a ratchet that allows a heavy flywheel to move in only one direction. Therefore it effectively selects those movements of wrist and arm that cause the flywheel to move, while blocking others. The ratchet is a result of intelligent design. Such a mechanism, according to Dawkins, cannot be Darwinian. His blind watchmaker has no foresight. To quote Berlinski again:

‘The Darwinian mechanism neither anticipates nor remembers. It gives no directions and makes no choices. What is unacceptable in evolutionary theory, what is strictly forbidden, is the appearance of a force with the power to survey time, a force that conserves a point or a property because it will be useful [like the ratchet in the watch]. Such a force is no longer Darwinian. How would a blind force know such a thing? And by what means could future usefulness be transmitted to the present?’
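[The ratchet’s role is easy to picture numerically: wrist movements are random in direction, but only one direction is passed on to the spring. A toy model of my own, purely for illustration:]

```python
import random

# Toy model of the ratchet described above: wrist motion is random in
# direction, but the ratchet passes only one direction on to the mainspring.
random.seed(3)
movements = [random.gauss(0, 1) for _ in range(10_000)]   # random wrist motion

unfiltered = sum(movements)                        # wanders around zero
wound = sum(m for m in movements if m > 0)         # ratchet keeps one direction

# the raw sum stays comparatively small; the ratcheted sum accumulates steadily
print(round(unfiltered, 1), round(wound, 1))
```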

[After some discussion of more problems with Dawkins’ analogy in relation to incredibly complex machines, Lennox says…]

It should also be noted in passing that the fact that a correctly typed key is retained, never to be lost again, is equivalent to making the assumption that advantageous mutations are always preserved in the population. But, as evolutionary biologist Sir Ronald Fisher showed in his foundational work, this is not the case in nature.

Most beneficial mutations get wiped out by random effects, or by the likely much larger number of deleterious mutations. This contradicts the idea commonly held since Darwin, that natural selection would preserve the slightest beneficial variation until it took over the population. It also gives further evidence for the irreducible complexity argument — as illustrated earlier by Behe’s combination lock: an ‘advantageous’ mutation is only advantageous if it occurs simultaneously with a large number of other ‘advantageous’ mutations — which is the fatal flaw in the ‘target phrase’ argument for the typing monkeys.”
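As a rough illustration of the point about beneficial mutations usually being lost, here is a toy branching-process simulation. It is an idealization of my own, not Fisher’s actual analysis: each carrier of a new mutation leaves either zero or two offspring, with the average slightly above one.

```python
import random

# Toy branching-process sketch: even a clearly beneficial mutation is usually
# lost by chance in its first generations. Idealized model (my assumption):
# each carrier leaves 0 or 2 offspring, averaging 1 + s offspring per carrier.

def mutant_survives(s=0.05, max_generations=200, established=200):
    copies = 1                                     # one new mutant individual
    for _ in range(max_generations):
        if copies == 0:
            return False                           # mutation lost
        if copies >= established:
            return True                            # effectively safe from drift
        copies = sum(2 for _ in range(copies) if random.random() < (1 + s) / 2)
    return copies > 0

random.seed(4)
trials = 2000
survived = sum(mutant_survives() for _ in range(trials))
print(survived / trials)   # about 0.1: roughly nine out of ten mutants vanish
```

Even with a 5 per cent advantage, roughly nine out of ten such mutations disappear before they can spread, which is the behaviour the passage above describes.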

— — —

Whether or not you are a fan of Dawkins, I hope you can see that Lennox et al. have “schooled” him on this matter. In fact, I would say that Lennox has made a monkey out of Dawkins. As usual, facts and logic do the trick — no nastiness required.
