Another widespread erroneous view of natural selection must also be refuted: Selection is not teleological (goal-directed). Indeed, how could an elimination process be teleological? Selection does not have a long-term goal. It is a process repeated anew in every generation.
Mayr, E. (2001). What Evolution Is. New York: Basic Books. p. 121
Yet Mayr also writes:
When the selective advantage of a skeleton developed among the ancestors of the vertebrates and of the arthropods, the arthropod ancestors had the prerequisites for developing an external skeleton, and the vertebrate ancestors for developing an internal skeleton. The entire evolution of these two large groups of organisms has since been affected by this choice among their remote ancestors.
(ibid, p. 141, emphasis added).
Evolution is an opportunistic process. Whenever there is an opportunity to outcompete a competitor or to enter a new niche, selection will make use of any property of the phenotype to succeed in this endeavor.
(ibid, p. 221, emphasis added).
Likewise, we read:
The legitimate use of the term adaptation is for a property of an organism, whether a structure, a physiological trait, a behavior, or anything else that the organism possesses, that is favored by selection over alternate traits. But the term also has been used quite incorrectly for the process ("adaptation") by which the favored trait was actively acquired. This view can be traced back to the ancient belief that organisms had an innate capacity for improvement, for steadily becoming "more perfect." Also, if one accepts an inheritance of acquired characters, activities such as the straining of the neck by giraffes "adapts" the neck to an improved construction. In this view, adaptation is an active process with a teleological basis. Some recent authors still seem to look at adaptation as such a process and therefore reject the whole concept of adaptation. But this is not defensible.
(ibid, p. 150).
Yet of adaptations, we read:
The shift from the quadrupedal locomotion of a lizardlike reptile to bipedalism and flight in birds initiated a considerable restructuring of the body plan: a compacting of the whole body to have a better center of gravity, the development of a more efficient four-chambered heart, restructuring of the respiratory tract (lungs and air sacs), endothermy, improved vision, and an enlarged central nervous system. The acquisition of all of these adaptations was a matter of necessity.
(ibid, p. 219, emphasis added).
But Mayr is not the only one who falls prey to this. Indeed, when trying to describe their theories Darwinists are forced to use teleological representations. For instance, Gould wrote:
The model of the grabbag is a taxonomist's nightmare and an evolutionist's delight. Imagine an organism built of a hundred basic features, with twenty possible forms per feature. The grabbag contains a hundred compartments, with twenty different tokens in each. To make a new Burgess creature, the Great Token-Stringer takes one token at random from each compartment and strings them all together. Voilà, the creature works--and you have nearly as many successful experiments as a musical scale can build catchy tunes. The world has not operated this way since Burgess times. Today, the Great Token-Stringer uses a variety of separate bags--labeled "vertebrate body plan," "angiosperm body plan," "molluscan body plan," and so forth. The tokens in each compartment are far less numerous, and few if any from bag 1 can also be found in bag 2. The Great Token-Stringer now makes a much more orderly set of new creatures, but the playfulness and surprise of his early work have disappeared. He is no longer the enfant terrible of a brave new multicellular world, fashioning Anomalocaris with a hint of arthropod, Wiwaxia with a whiff of mollusk, Nectocaris with an amalgam of arthropod and vertebrate.
Gould, S. J. (1989). Wonderful Life. New York: W. W. Norton & Company, Inc. pp. 217-218
Naturally, Gould was trying to be poetic; but one wonders whether he could even explain his “grabbag” idea without resorting to the teleology of a designer (in the above case, the “Great Token-Stringer”). One suspects not. And those outside the field of biology are oblivious to the fact that evolution is supposed to be non-teleological. In fact, they see quite the opposite. For example, James Gleick, in his book on chaos theory, wrote:
In science, on the whole, physical cause dominates. Indeed, as astronomy and physics emerged from the shadow of religion, no small part of the pain came from discarding arguments by design, forward-looking teleology--the earth is what it is so that humanity can do what it does. In biology, however, Darwin firmly established teleology as the central mode of thinking about cause. The biological world may not fulfill God's design, but it fulfills a design shaped by natural selection. Natural selection operates not on genes or embryos, but on the final product. So an adaptationist explanation for the shape of an organism or the function of an organ always looks to its cause, not its physical cause but its final cause. Final cause survives in science wherever Darwinian thinking has become habitual. A modern anthropologist speculating about cannibalism or ritual sacrifice tends, rightly or wrongly, to ask only what purpose it serves. D'Arcy Thompson saw this coming. He begged that biology remember physical cause as well, mechanism and teleology together. He devoted himself to explaining the mathematical and physical forces that work on life. As adaptations took hold, such explanations came to seem irrelevant. It became a rich and fruitful problem to explain a leaf in terms of how natural selection shaped such an effective solar panel. Only much later did some scientists start to puzzle again over the side of nature left unexplained. Leaves come in just a few shapes, of all the shapes imaginable; and the shape of a leaf is not dictated by its function.
Gleick, J. (1987). Chaos: Making a New Science. New York: Penguin Books. p. 201-202
Because of this cognitive dissonance, teleology works well against Darwinists. If something looks designed, the simplest and most straightforward explanation is that it was designed. It is because so much design is apparent in the living world that Dawkins had to take the time to pen The Blind Watchmaker in the first place. If nature didn’t have the designed appearance of a watch, Dawkins wouldn’t have needed to come up with an alternate explanation for it.
So teleology has found a niche in anti-Darwinian circles. I, however, would like to expand it a bit further than that. Most recently, I’ve been studying cryptology as part of my endeavor to better understand such things as information theory. Cryptology is also relevant because I enjoy dissecting Darwinist arguments, and DNA happens to be very prominent in many of them. Since DNA is a “living code,” understanding certain principles of cryptology can be beneficial.
Surprisingly, however, my thoughts have strayed from their original course in biology. The living order is teleological, and it is difficult for anyone to honestly look at it and yet still deny the inherent design. But so too is the non-living universe. Teleology surrounds us everywhere we look. It is not just in living systems, but anywhere that there is a system. And because of that, my original focus and my original purpose for reading up on cryptology (besides the fact that I’m weird and actually enjoy the subject) has expanded somewhat.
All reality is teleological.
Since my thinking has come about as the result of reading on cryptology, it perhaps wouldn’t hurt if I gave the specific example that got me thinking on this issue. William Friedman, who was instrumental in the US breaking of the Japanese cipher PURPLE in World War II, wrote The Index of Coincidence and Its Applications in Cryptography in 1920 when he was 28 years old. It was later updated somewhat after Friedman found the solution for a cipher machine using cryptographic rotors. David Kahn, in The Codebreakers, illustrates the theory in this manner:
Imagine an urn containing one each of the 26 letters of the alphabet. The chance of drawing any specified letter, say r, is one in 26, or 1/26. Now imagine another, identical urn. The chance of drawing an r is equally one in 26, or 1/26. What are the odds of drawing a pair of r’s, one after another, in a two-draw situation? The likelihood of drawing the second r is 1/26 of the chance of drawing the first, which is 1/26. So the chance of drawing two r’s in a single event, or “simultaneously,” one from each urn, is 1/26 x 1/26. Similarly, the probability of drawing two a’s is 1/26 x 1/26, of two b’s 1/26 x 1/26, and so on. Consequently, the chance of drawing a pair of letters—any pair of letters, no matter which pair may come up—is the sum of all these probabilities. It is (1/26 x 1/26) + (1/26 x 1/26) + … + (1/26 x 1/26), repeated 26 times, or 26 x (1/26 x 1/26), or 1/26. This quantity may be written as the decimal 0.0385.
Assume now an ideal cryptosystem whose ciphertexts yield a perfectly flat frequency count—one with as many a’s as b’s as c’s…as z’s. Polyalphabetics approach this in varying degrees and may, for practical purposes, be regarded as generating such ciphertexts. These texts are called “random” because they are what would be obtained if letters were drawn at random from the urn (each letter being replaced after being noted and the urn shaken to mix the lot, chance alone dictating their identities). If two such random texts are superimposed, the chance that the letter above will be the same as the letter below is the same as the chance of drawing a pair of identical letters from the two urns. This is 0.0385, or, to put it another way, there will be 3.85 such coincidences in every 100 vertical pairs. Experiment will confirm this.
Now imagine an urn filled with 100 letters of English in the proportion in which they are used in normal text—8 a’s, 1 b, 3 c’s, 13 e’s, and so on. The chance of drawing a specified letter is now proportional to its frequency. The probability that an a will emerge is 8/100ths, that an e will is 13/100ths. With two such urns, the chance of drawing two a’s is, as before, the product of the individual probabilities, or 8/100 x 8/100; the chance of drawing two e’s is consequently 13/100 x 13/100. And the probability of drawing a pair—any pair—of identical letters is the sum of all these pair-probabilities: (8/100 x 8/100) + (1/100 x 1/100) + (3/100 x 3/100) …, and so on through all 26 letters. This calculation has been made (with a slightly different frequency table). The result is 0.0667.
These two plaintext urns may likewise be replaced by two strings of plaintext. If they are superimposed, there will be as much likelihood that two letters will coincide vertically as there was that two identical letters would be drawn from the two urns. This probability is 0.0667, or 6.67 coincidences per 100 pairs. For example:
text A wheninthecourseofhumaneventsitbecomesnecessaryforo
text B fourscoreandsevenyearsagoourfathersbroughtforthupo
text A (cont.) nenationtodissolvethepoliticalbandsthathaveconnect
text B (cont.) nthiscontinentanewnationconceivedinlibertyanddedic
There are just seven coincidences in the 100 pairs—precisely what theory predicts.
…[O]ne must recognize first that the superimposition of two monalphabetically enciphered texts will result in the…figure of about 6.67 coincidences per 100 vertical pairs, or 6.67 per cent of coincidences. This is because the coincidences will occur whether the letters are clothed in ciphertext disguises or not. The calculation does not ask the letters for their identities. It merely notes their coincidence. By the same token—and this is important—two polyalphabetic cryptograms enciphered in the same key and superimposed so that the two occurrences of that key are in synchronization with one another will also show 6.67 per cent of coincidences. The reason is this: In a correct (in-phase) superimposition, the two letters of each vertical pair have the same keyletter. Thus whenever a coincidence occurs in the plaintext, the letters of the pair will be identically enciphered. This results in an identical pair—a coincidence—in the ciphertext. It does not matter that a pair of e’s may be enciphered into V’s at one point and into Q’s at another, or that a coincidence of a’s becomes a coincidence of L’s here and a coincidence of F’s there. The total number of coincidences will remain the same as the number in the plaintext.
On the other hand, if the two cryptograms are improperly superimposed, so that the keys are not in step, any coincidences will result from different keyletters operating on different plaintext letters to accidentally produce the same ciphertext letter. The coincidences will be caused, in other words, by chance. Chance alone will produce 3.85 coincidences per 100 vertical pairs in random text, and polyalphabetic ciphertext is equivalent to random text. Hence an incorrect superimposition should yield about 3.85 per cent of coincidences. But 3.85 per cent is substantially less than 6.67 per cent, and so a comparison of the percentages of coincidences at various test superimpositions should show which superimposition is correct.
An example should make things clear. A cryptosystem with the Vigenère running key THE BARD OF AVON IS THE AUTHOR OF THESE LINES…starts the key for the first message with the first keyletter, but starts the key for successive messages with the third, fifth, and so on, keyletters. If plaintext 1 is If music be the food of love, play on, and plaintext 2 is Now is the winter of our discontent, the encipherment will be these:
plaintext 1 ifmusicbethefoodofloveplayon
ciphertext 1 BMQVSZFPJTCSSWGWVJLIOLDCODHU
plaintext 2 nowisthewinterofourdiscontent
ciphertext 2 RPWZVHMERWABWKVJOOKKWJQTGAIFX
A cryptanalyst, receiving these two cryptograms, will superimpose them so that they start at the same point:
ciphertext 1 BMQVSZFPJTCSSWGWVJLIOLDCODHU
ciphertext 2 RPWZVHMERWABWKVJOOKKWJQTGAIFX
Since there are 28 vertical pairs, the cryptanalyst would expect 28 x 0.0667 coincidences, or 1.8676, or about 2, for a proper superimposition. But in fact he finds none, so he shifts the second cryptogram one space to the right and tries again. There will now be 27 vertical pairs. The cryptanalyst again calculates the theoretical expected number of coincidences for random and for correctly superimposed texts of this length so that he may compare the values with what he actually observes. Thus, a wrongly superimposed text would yield 27 x 0.0385 = 1.0395, or about 1 coincidence that would be produced by chance alone, while a correct superimposition would yield 27 x 0.0667 = 1.8009. (These fractional differences become more pronounced with longer texts.) One coincidence appears….
Since the differences between the chance and the caused values are so slight with so few letters, the cryptanalyst might wonder whether this is not in fact a random result (which in fact it is…) and try the next superimposition. Here the number of coincidences immediately jumps. This superimposition is obviously correct.
ciphertext 1 BMQVSZFPJTCSSWGWVJLIOLDCODHU
ciphertext 2   RPWZVHMERWABWKVJOOKKWJQTGAIFX
If the cryptanalyst wishes to continue, he will find that at the next superimposition the number of coincidences falls again, to 2, and will return to begin his attack with the third superimposition…
Kahn, D. (1967, 1996). The Codebreakers. New York: Scribner. pp. 377-380.
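Everything in Kahn's passage can be checked mechanically. The sketch below (Python; my own illustrative code, not Friedman's or Kahn's) computes the two coincidence constants, re-creates the two Vigenère encipherments from the running key, and then counts vertical coincidences at each trial superimposition. It reproduces the figures in the passage: no coincidences with no shift, one at a shift of one, a jump at the correct shift of two, and a fall back to two at a shift of three.

```python
# Coincidence constants from the two "urn" arguments.
kappa_random = 26 * (1 / 26) ** 2  # flat alphabet: 1/26, about 0.0385

# English letter frequencies in percent (a commonly cited table; Kahn's
# slightly different table gives 0.0667).
FREQ = {'e': 12.70, 't': 9.06, 'a': 8.17, 'o': 7.51, 'i': 6.97, 'n': 6.75,
        's': 6.33, 'h': 6.09, 'r': 5.99, 'd': 4.25, 'l': 4.03, 'c': 2.78,
        'u': 2.76, 'm': 2.41, 'w': 2.36, 'f': 2.23, 'g': 2.02, 'y': 1.97,
        'p': 1.93, 'b': 1.49, 'v': 0.98, 'k': 0.77, 'j': 0.15, 'x': 0.15,
        'q': 0.10, 'z': 0.07}
kappa_plain = sum((f / 100) ** 2 for f in FREQ.values())  # about 0.066

def vigenere(plaintext: str, key: str) -> str:
    """Ordinary Vigenère encipherment: ciphertext = plaintext + key, mod 26."""
    return ''.join(chr((ord(p) - ord('a') + ord(k) - ord('A')) % 26 + ord('A'))
                   for p, k in zip(plaintext, key))

def coincidences(top: str, bottom: str, shift: int) -> int:
    """Count vertical coincidences with `bottom` slid `shift` places right."""
    return sum(1 for i, ch in enumerate(top)
               if 0 <= i - shift < len(bottom) and ch == bottom[i - shift])

KEY = "THEBARDOFAVONISTHEAUTHOROFTHESELINES"
ct1 = vigenere("ifmusicbethefoodofloveplayon", KEY)       # key from 1st letter
ct2 = vigenere("nowisthewinterofourdiscontent", KEY[2:])  # key from 3rd letter

counts = [coincidences(ct1, ct2, s) for s in range(4)]
print(ct1)     # BMQVSZFPJTCSSWGWVJLIOLDCODHU
print(ct2)     # RPWZVHMERWABWKVJOOKKWJQTGAIFX
print(counts)  # [0, 1, 3, 2]: the peak marks the in-phase superimposition
```

The peak at a shift of two is exactly where the two key offsets (first versus third keyletter) come back into phase.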
With this as the immediate background, I’ll simply note how my train of thought has progressed. When dealing with language, we are dealing with something that we know is designed. Language requires intelligence, and this is even more evident when it comes to written text. Because text is a product of intelligence, it will always display the hallmark of intelligence. One will be able to differentiate between that which is designed and that which is random.
The above examples demonstrate it beautifully. Take the illustration of putting the opening line of the Declaration of Independence above the opening of the Gettysburg Address. Because both texts were written in English, and because English is designed rather than random, English traits carry through: the letters coincide vertically almost 7% of the time, while random texts manage only about 4%. Because this is the case, even hiding English within a cipher does not destroy these traits, although it obscures them at first glance.
Design, therefore, is something that would permeate everything. It might not be immediately apparent at first glance, but there will be traits that can be sought mathematically that will yield results nowhere near what random results would give us.
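One way to make that concrete: the index of coincidence of a single text (the chance that two randomly chosen letters of it match) is left exactly unchanged by any monoalphabetic substitution, because substitution only renames the letters without touching their frequency pattern. A sketch, using a Caesar shift as the substitution (my own illustrative example):

```python
from collections import Counter

def index_of_coincidence(text: str) -> float:
    """Chance that two distinct positions of `text` hold the same letter."""
    letters = [c for c in text.lower() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

def caesar(text: str, shift: int) -> str:
    """Monoalphabetic substitution: shift every letter by a fixed amount."""
    return ''.join(chr((ord(c) - ord('a') + shift) % 26 + ord('a'))
                   for c in text.lower() if c.isalpha())

sample = ("when in the course of human events it becomes necessary "
          "for one people to dissolve the political bands which have "
          "connected them with another")

ic_plain = index_of_coincidence(sample)
ic_cipher = index_of_coincidence(caesar(sample, 13))
print(ic_plain == ic_cipher)  # True: the cipher disguise leaves the trait intact
```

The designed trait of the underlying language survives the disguise; only the letter identities change.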
Now obviously when one thinks about living systems, one can see that there are processes at work that are not random. Even the relatively simple action of an ion pump inside a cell maintains concentration gradients that are nothing like what one would find in a random environment. A cell becomes charged due to the existence of these ion pumps (which is how an electrical impulse can travel along a nerve), but under random circumstances the charge would simply dissipate.
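That charge can even be estimated from first principles. The sketch below computes the Nernst equilibrium potential for potassium across a cell membrane; the concentrations are typical textbook values for a mammalian cell (my illustrative assumptions, not figures from any source quoted above).

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 310.0      # body temperature, K
z = 1          # charge of the K+ ion

K_inside = 140.0   # mM, typical mammalian cytosol (illustrative)
K_outside = 5.0    # mM, typical extracellular fluid (illustrative)

# Nernst equation: the membrane voltage at which diffusion of K+ down
# its concentration gradient is exactly balanced by the electric field.
E_mV = (R * T / (z * F)) * math.log(K_outside / K_inside) * 1000

print(round(E_mV))  # about -90 mV: a decidedly non-random state of affairs
```

A random mixture would sit at 0 mV; the pump holds the cell tens of millivolts away from that equilibrium.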
Indeed, when thinking of what is truly random one immediately must think of entropy. The less entropy there is in a system, the less random it is. If a room has low entropy, it is because everything is ordered. If it has high entropy, it is randomized. The more ordered something is, the less random it must be.
This brings us immediately to questions of the universe as a whole. And not just in terms of entropy amongst galaxies and such. Instead, I want to ask more foundational questions.
Suppose we see iron filings arrayed on a table next to a magnet. The filings will lie in a particular pattern and won’t lie randomly. Why is this the case? Of course the immediate answer is that magnetic forces have arranged the iron filings in that manner. But why is it that magnetic forces act in that manner? We can dig down to the quantum level, perhaps. But that merely raises the deeper question: why is it that those quantum particles act the way they do? What is it that causes electrons to repel one another? What is it that causes protons to attract electrons? Why is it that these things always happen this way, that there is no variance…no randomness to it?
Even things that are apparently random turn out to hide hidden order. Take radioactivity, for instance. Radioactive sources are even used to produce random cipher keys, because no one can predict when a given nucleus will decay. But despite how “random” the decay is, radioactive elements always decay at a specific rate. Despite the random nature, there is an overriding law that stipulates what the half-life of that radioactive element will be. We may not be able to predict when the next nucleus will emit its alpha particle, but we know that after a set amount of time very nearly half of the element will have decayed.
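That combination of individual unpredictability and aggregate lawfulness is easy to demonstrate. In the toy simulation below (my illustration, not a physical model of any particular isotope), each simulated nucleus independently “decides” at random whether to decay, with a 50% chance over one half-life; no single outcome is predictable, yet almost exactly half the population is gone at the end.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

N = 100_000  # starting population of nuclei
# Each nucleus survives one half-life with probability 1/2,
# independently of every other nucleus.
survivors = sum(1 for _ in range(N) if random.random() >= 0.5)

fraction = survivors / N
print(fraction)  # very close to 0.5, though no individual decay was predictable
```

The lawful half-life emerges from the statistics of many lawless-looking individual events.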
Is that not an instance of the non-random showing itself? Like the cipher text that cannot help but display the design of the English language, if one but knew where to look, don’t the underlying laws that govern all the universe scream out that there is underlying order to even what we think is chaos?
Earlier I quoted Gleick’s comment about the shape of leaves, which is governed not by the forces of natural selection but by fractal designs. The key word there is “designs.” All of reality is based on these deep, inherent designs. And these designs cannot be random because they are, in fact, distinct from what we would see in a purely random field.
Naturally I know that some chaoticians say that order springs from chaos, and they will use mathematical representations of chaos to illustrate this…all the while ignoring the fact that the mathematical system they are using to generate those fractals is itself non-chaotic. Indeed, as some may already know, I’ve spent a lot of time playing with what I call the “Factor Field.” It’s an Excel workbook that I made (you can e-mail me at my Yahoo account if you want a copy: simply put “petedawg34”, follow it with “@”, and finish with “yahoo.com”; yes, defeating spambots is always fun). The Factor Field is simply a graphical representation of the integers. The left-most column counts by 1s, the second column by 2s, and so on. Because I used Excel, it is only 256 columns wide, but it goes 65,536 rows deep. Here is but one example of what you can see at cell number 60,480:
This shows what I call a “starburst” pattern. You can also see the skeletons of parabolas in there, as well as many different lines of various slopes. All this was created by putting integers in patterns next to each other.
If you were to isolate some of the pixels on the right side of the graphic, the dots would look very chaotic. There would not appear to be any particular rhyme or reason for any of them to be where they are. Yet they came about due to a specific rule. There is an underlying order that created the seeming randomness that is seen. And stepping back, viewing it from the distance where one can see the whole starburst, the order is obvious.
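For those without the spreadsheet, the generating rule is tiny. If I have described the construction correctly, column c carries a mark at every row that is a multiple of c, so a cell (row, column) is marked exactly when the column number divides the row number. A sketch in Python rather than Excel:

```python
WIDTH = 256  # the Excel sheet shows 256 columns

def marked_columns(row: int) -> list:
    """Columns of the factor field marked on this row: column c
    'counts by c', so it marks row r exactly when c divides r."""
    return [c for c in range(1, WIDTH + 1) if row % c == 0]

dense = marked_columns(60_480)   # 60,480 = 2**6 * 3**3 * 5 * 7, highly composite
sparse = marked_columns(65_521)  # 65,521 is prime, so only column 1 divides it

print(len(dense), len(sparse))  # a crowded "starburst" row vs. a nearly empty one
```

The starburst at row 60,480 is dense with marks precisely because that number has so many divisors, while a prime row is marked only in the first column.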
Likewise, the factor field makes it easy to see whether a number is prime, but it doesn’t make it any easier to predict prime numbers that aren’t shown on the graph (although, via observation, I hypothesize that all prime numbers greater than 3 end in either 1 or 5 in base-6; but that’s another blog post for another time). One can tell that previous portions of the graph affect later portions, but it is so complex that it is difficult for humans to predict how the effects will play out “off screen.”
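That base-6 observation is easy to check by brute force (and it holds in general, because any number ending in 0, 2, 3, or 4 in base-6 is divisible by 2 or 3). A quick sketch:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# The last base-6 digit of n is simply n mod 6. Every prime greater
# than 3 should leave remainder 1 or 5 mod 6, i.e. end in 1 or 5
# when written in base-6.
violations = [p for p in range(4, 100_000)
              if is_prime(p) and p % 6 not in (1, 5)]
print(violations)  # []: no counterexamples below 100,000
```

The search turns up no counterexamples, as expected.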
This interplay of chaos and order is only possible because the structure of the factor field is built on order. It’s an order that displays chaotic behavior later on, but it remains order. Likewise, all the representations of chaos theory are built on mathematical models that are, themselves, strict. Math doesn’t randomly make 1 + 1 = 7. It cannot happen. And the rules of chaos mean that running the same formula again with the exact same data will yield the exact same result. That there are wild differences if the data is even slightly tweaked doesn’t change the fact that not tweaking it yields identical results.
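Both halves of that claim can be demonstrated with the logistic map, a standard textbook chaotic system (my illustrative choice, not something from the sources quoted above). Re-running the identical iteration gives bit-for-bit identical results; perturbing the starting value in the seventh decimal place sends the orbits wildly apart within a few dozen steps.

```python
def logistic_orbit(x0: float, steps: int) -> list:
    """Iterate the chaotic logistic map x -> 4x(1 - x)."""
    orbit = [x0]
    for _ in range(steps):
        orbit.append(4.0 * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

run_a = logistic_orbit(0.2, 60)
run_b = logistic_orbit(0.2, 60)          # the exact same data in...
run_c = logistic_orbit(0.2000001, 60)    # ...versus a minor tweak

print(run_a == run_b)  # True: same formula, same data, identical result
divergence = max(abs(a - c) for a, c in zip(run_a[40:], run_c[40:]))
print(divergence)  # large: the tweaked orbit has completely diverged
```

The sensitivity to initial conditions is real, but the rule producing it never wavers.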
In other words, even the most random systems we can think of, because they are real, have order underlying them. Reality is not random. Reality is, at heart, the opposite of random. And what is the opposite of randomness?