Thursday, December 03, 2009

On Scientific Method

Continuing my look into the ongoing goings on involving AGW (Alarmist Global Warming), I present the second shot in my opening salvo. Here is a slightly edited version of a paper I wrote a couple of years ago, On Scientific Method. And I think for this version, the subtitle: Why Should Steve Hays Have All The Really Long Posts on Triablogue? is apropos.



On Scientific Method

Scientists are among the most highly regarded people in Western society. Virtually everyone looks up to scientists as discoverers and defenders of Truth (with a capital T). To a large extent, this is because, as far as the public is concerned, science works. We wake up in the morning to the sounds of our alarm clocks, we get our coffee from the electrically powered pot that automatically turns on for us, and we drive cars full of sophisticated gadgetry to our jobs, where we sit at a desk under the glow of fluorescent light bulbs and type information into a complex computer network. All of this is made possible by technology driven by science.

It is no wonder that scientists are highly regarded then. Imagine how different the world would be if we were unable to get our coffee in the morning!

Despite the fact that scientists are so highly regarded, it is a rare individual who can actually say what science, by definition, is. To many, the word “science” tends to bring to mind images of scrawny geeks in white lab coats playing with beakers, or possibly someone of Germanic descent sporting a wild hairdo. But these stereotypes tell us absolutely nothing about what science is.

If we were to ask someone who has just completed a science course, such as a high school class on biology or a college course in geology, we would get a better answer. The typical response would go something like this: Science is defined by following the scientific method [1], which begins with a scientist observing something. From this observation, the scientist makes certain predictions in the form of a hypothesis. The hypothesis is then tested via experimentation. If the results do not match the hypothesis, the scientist revises his hypothesis. This process continues until the results of the experiments confirm the hypothesis, at which point the scientist can publish his paper in a peer-reviewed journal and wait for other scientists to repeat the process. If enough scientists are able to duplicate the experiment, the hypothesis eventually becomes a well-established theory; and if the theory is confirmed over time, it may eventually become a scientific law.

The above process is roughly equivalent to what Moti Ben-Ari called the naïve inductive-deductive method (Ben-Ari, 2005, p. 5). In its most basic form, there are four critical steps to this method: 1. observation of something or some event; 2. induction (i.e., forming a theory); 3. deduction (i.e., making predictions based on the theory); and 4. experimentation (i.e., testing the predictions to see if they conform to the theory). This four-step process ends up forming a loop, because the scientist observes the results of his experiments and then uses those observations to form new theories, and so on.
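
To make the looping structure explicit, here is a minimal sketch of the cycle in Python. Everything in it (the World class and the observe, induce, deduce, and experiment functions) is a hypothetical placeholder standing in for the scientist's actual work, not a description of any real procedure:

# A minimal, hypothetical sketch of the naive inductive-deductive loop.
# Every name here is a placeholder for real scientific work.

class World:
    """Stand-in for nature; here it always yields the same observation."""
    def sample(self):
        return "observed event"

def observe(world):
    """Step 1: observe something or some event."""
    return world.sample()

def induce(observations):
    """Step 2: induction -- form a theory from the observations."""
    return {"accounts_for": observations}

def deduce(theory):
    """Step 3: deduction -- derive a testable prediction from the theory."""
    return theory["accounts_for"]

def experiment(world, prediction):
    """Step 4: experimentation -- check the prediction against nature."""
    return world.sample() == prediction

world = World()
observations = observe(world)
while True:
    theory = induce(observations)
    prediction = deduce(theory)
    if experiment(world, prediction):
        break                      # results confirm the theory: publish
    observations = observe(world)  # otherwise observe again and revise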

Naturally, the above definition requires us to define some other terms as well, the most important of which is the definition of a scientific theory. This, too, is a term that is quite often defined incorrectly by the general public. Ironically, there are some slight variations within the realm of science as well. For instance, Stephen Hawking writes:

In order to talk about the nature of the universe and to discuss questions such as whether it has a beginning or an end, you have to be clear about what a scientific theory is. I shall take the simple-minded view that a theory is just a model of the universe, or a restricted part of it, and a set of rules that relate quantities in the model to observations that we make. It exists only in our minds and does not have any other reality (whatever that might mean). A theory is a good theory if it satisfies two requirements: It must accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations (Hawking, 1988, p. 9).
Under this definition, a scientific theory is nothing more than a mathematical model that must not be confused with reality. This is seen even more clearly in Victor J. Stenger’s statement:

The exact relationship between the elements of scientific models and whatever true reality lies out there is not of major concern. When scientists have a model that describes the data, that is consistent with other established models, and that can be put to practical use, what else do they need? (Stenger, 2007, pp. 228-229)
Still, there are guidelines for appropriate models. The key elements of this kind of model-based theory are that it must contain only a minimal number of arbitrary elements, it must apply broadly rather than being confined to a specific, ad hoc arena, and it must be able to make predictions.

As I said, there is some minor variation among scientists as to what constitutes a theory. An example of such a variation is:

A scientific theory is a concise and coherent set of concepts, claims, and laws (frequently expressed mathematically) that can be used to precisely and accurately explain and predict natural phenomena (Ben-Ari, 2005, p. 24).
We see in the above that this definition disagrees slightly with the definition provided by Hawking, and also departs from Stenger’s view, in that scientific theories are used to “explain and predict natural phenomena” rather than being only mathematical models that need not relate to the natural world. In agreement, however, the new definition stipulates that theories should be concise and coherent, which means that theories should be short (the shorter the better) and should cohere to as much of reality as possible (with the ideal being a universal theory with absolutely no exceptions requiring arbitrary additions to the theory).

Adding to Hawking, this definition maintains that a theory should be precise and accurate. While these two terms are sometimes used synonymously in the vernacular, there is a difference between being precise and being accurate. Accuracy refers to how “correct” a theory is; this often can only be judged by seeing whether the theory successfully predicted or explained some event. Precision, on the other hand, deals with the exactness of a measurement. Suppose that the distance between a man’s house and the curb at the end of his driveway is exactly 43 feet 3 inches. If the man measures the distance using a stick that’s exactly one yard long, he gets the result of 14 yards (42 feet) plus a little bit more. If he measures the same distance with a stick that’s one foot long, he can tell what some of that “little bit more” is and gets the result of 43 feet, with just a small fraction left over now. The second measurement is more precise than the first because the foot-long stick measures in smaller units and leaves a smaller unmeasured remainder: while the first measurement was off by a foot and three inches, the second is off by only three inches.[2]
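
As a rough illustration of the driveway example (a sketch using the same made-up numbers, not a scientific procedure), the leftover error shrinks as the measuring unit shrinks:

# Sketch of the driveway example: measuring a true distance of
# 43 ft 3 in (519 inches) with sticks of different lengths.
# A shorter stick leaves a smaller unmeasured remainder,
# i.e. it yields a more precise measurement.

true_distance_in = 43 * 12 + 3  # 519 inches

for stick_name, stick_in in [("yard stick", 36), ("foot stick", 12)]:
    whole_sticks = true_distance_in // stick_in   # full stick lengths that fit
    measured_in = whole_sticks * stick_in         # distance accounted for
    remainder = true_distance_in - measured_in    # what the stick misses
    print(f"{stick_name}: {whole_sticks} lengths = {measured_in} in, "
          f"remainder = {remainder} in")

# yard stick: 14 lengths = 504 in, remainder = 15 in  (off by 1 ft 3 in)
# foot stick: 43 lengths = 516 in, remainder = 3 in   (off by 3 in)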

Finally, a scientific theory should be able to make explanations and/or predictions. These two concepts are fairly closely linked: if one is able to explain an event, one ought to be able to predict when the event will occur. However, it is important to note that simply because a theory accurately predicts an event does not mean the theory is actually true; a false theory can make a correct prediction by accident. As Hawking notes:

Any physical theory is always provisional, in the sense that it is only a hypothesis: you can never prove it. No matter how many times the results of experiments agree with some theory, you can never be sure that the next time the result will not contradict the theory. On the other hand, you can disprove a theory by finding even a single observation that disagrees with the predictions of the theory. As philosopher of science Karl Popper has emphasized, a good theory is characterized by the fact that it makes a number of predictions that could in principle be disproved or falsified by observation. Each time new experiments are observed to agree with the predictions the theory survives, and our confidence in it is increased; but if ever a new observation is found to disagree, we have to abandon or modify the theory. At least that is what is supposed to happen, but you can always question the competence of the person who carried out the observation (Hawking, 1988, p. 10).
The provisional nature of scientific theories cannot be overstated. In fact, the history of science is filled with theories that have been discarded, from phlogiston to Ptolemy’s view of the solar system. Ironically, even some of the theories that we still use today have been shown to be wrong (or at least incomplete). The greatest example of this is Newton’s law of gravity, which was superseded by Einstein’s general relativity, yet which is still used in physics today because Newton’s equations are easier to work with and the discrepancies are negligible at low speeds and in weak gravitational fields. Still, Newton’s theory, no matter how useful, has been discarded in the ultimate sense. Science has moved on to other theories, each of them likewise held provisionally.

Unfortunately, the provisional nature of scientific theories is sometimes lost on scientists. Sometimes, a scientist can “fall in love” with his theory to such an extent that he will refuse to abandon it even after the theory has been demonstrated wrong. This is what Thomas Kuhn documented when he showed that paradigm shifts are necessary in science. In essence, new theories do not take over until the vast majority of practitioners of the old theories pass away. Only then do the new theories, held by new scientists who did not have the same investment in the old theory, come into play. Even Einstein fell into this trap when he refused to acknowledge the validity of quantum mechanics, despite the fact that a large portion of quantum theory came as a direct result of Einstein’s own work!

So, to summarize what we have discussed so far: scientists begin with observations; they move on to forming theories (concise and coherent, accurate and precise means of explaining or predicting physical phenomena, the truth of which is held provisionally); they then focus on a prediction drawn from the theory, conduct an experiment to test that prediction, and observe the results, which leads us back into the loop.

All of this seems perfectly satisfactory as an explanation of what science is. But if you are a student of logic, you might not be so satisfied. After all, the loop from observation to theory to prediction to experiment and back again seems like it could easily fall prey to circular reasoning. And indeed, this is the very reason why Ben-Ari called this method the naïve inductive-deductive method. The word “naïve” at the front of the description warns us of the problem.

To state the problem, let’s ask a few questions. How is it that a scientist knows what to observe in the first place? How does he know whether the experiment itself will be affected by what he is trying to discover? To use a specific example:

In 1888 when Heinrich Hertz (1857-1894) was attempting to produce the first radio waves, he did not think that the size of his lab or the color of the paint on its walls were relevant to his experiments; he knew from James Clerk Maxwell's (1831-1879) theory of electromagnetism that radio waves were likely to exist, but he could not know that—while the color of the paint was not significant—the size of the lab was because of echoes from the walls (Ben-Ari, 2005, pp. 6-7).
It is impossible for the scientist to know these things in advance. However, he can make assumptions—and indeed, he must do so. Because of this, scientific observations are heavily theory-laden. This means that a framework for interpreting observations must exist before the scientist can begin to know whether he has even observed something in the first place. But the immediate question is: how do we know whether what the scientist observes is accurate at all, rather than (to use the phrase of Jack Cohen and Ian Stewart) mere “brain puns”?

The danger is that what we think of as laws may be just patterns that we somehow impose upon nature, like the animal shapes we can choose to see in clouds. Our treasured fundamental laws may just be odd features of nature that happen to appeal to the human mind. If so, then much of nature may be functioning according to processes that we cannot comprehend, and consequences derived from our imaginary laws may bear no resemblance to nature at all (Cohen & Stewart, 1994, p. 22).

So are the patterns that we profess to detect in nature brain puns or genuine laws? The verdict is not yet in, but they could be puns. In recent years a fecund mathematics has generated innumerable “new” mental images, such as catastrophes, chaos, fractals, that might be advance warning of new simplicities in the world. Each extends the list of patterns that we can name, recognize, and manipulate. It is not clear that all such patterns must necessarily prove operationally congruent to reality. They may describe games that mathematicians play, but that have nothing to do with the world outside human brains (Cohen & Stewart, 1994, p. 26).
Given only the scientific method to go on, it is impossible to know whether our observations really are reflective of reality or whether they are brain puns. Instead, we must assume that our observations are correct, which leads us into circular reasoning:

Clearly, science must start with observation, but once some initial observations have been made, a circular process takes place. Observations lead to theories, which guide further observations, which influence the theories. The presentation of the process of science as initially and primarily inductive is so oversimplified as to be useless. There are serendipitous discoveries in science, in which observations truly instigate the development of theories, but they inevitably occur to those who have the necessary framework within which to understand the importance of what they are observing (Ben-Ari, 2005, p. 8).
While Ben-Ari in the above does not dwell in great detail on the fact that science contains “a circular process” in its methodology, it is important for us now. Circular reasoning is a logical fallacy: a circular argument gives us no independent reason to accept its conclusion [3]. Because of this, it might be tempting to say that the conclusions of science are illogical and that therefore we shouldn’t trust the scientific method at all. However, it is important to note that all arguments assume their axioms, and therefore all arguments are equally circular at the foundational level. This is because axioms must be assumed; they cannot be proven, for if they could be proven they would not be axioms.

It is foolish, therefore, to arbitrarily decree that since the scientific method employs a degree of circularity it is completely invalid. However, the fact that the method has this circularity in it (as well as the fact that theories are always held provisionally) requires us to pause before asserting that things discovered by science are synonymous with truth. In fact, Larry Laudan maintains that science cannot actually know truth since

[t]he classical sceptic argument against science, repeated by Laudan (1984a), is that knowing the truth is a utopian task. Kant’s answer to this argument was to regard truth as a regulative principle for science. Charles S. Peirce, the founder of American pragmatism, argued that the access to the truth as the ideal limit of scientific inquiry is “destined” or guaranteed in an “indefinite” community of investigators…. However, there does not seem to be any reason to think that truth is generally accessible in this strong sense (Niiniluoto, 2007).
The upshot is that if there is no actual way to determine the truth via science, then logically a scientific theory can be completely valid scientifically and yet still be false. Conversely, a theory can be completely invalid from a scientific perspective yet be true. The scientific method thus becomes all the more reliant on a supporting framework to do the “grunt work” of establishing the truth-value of science. If the framework brings us to the truth, then the circularity employed by the scientific method is harmless. But if the framework brings us to error, the fact that the scientific method is chained to this framework means that the scientific method will lead us to error every single time it is used. Naturally, the question arises: what is the framework that science uses?

Recall that in the definition of a scientific theory provided earlier, the theory is required to be about “natural phenomena.” This gives us the framework used by the vast majority of scientists: philosophical naturalism. Sometimes naturalism is also referred to as materialism, since both naturalism and materialism teach that only the natural (or material) is knowable; the supernatural (or immaterial) is not. While some maintain distinctions between naturalism and materialism, such distinctions do not concern us here, and we can treat the two words as synonymous.

One might first be tempted to ask whether science itself requires a naturalistic or materialistic framework to rest upon. Because naturalism has been included in the very definition of a “scientific theory,” many scientists believe that it is a requirement of science. That is, they claim it is impossible for science to exist as science in any realm other than the naturalistic realm.

But simply defining naturalism into science isn’t very appealing. Furthermore, occasionally some scientists address the philosophical issues in a more realistic manner. One such scientist was Richard Lewontin, who, in a review of Carl Sagan’s book The Demon-Haunted World, wrote the following:


Our willingness to accept scientific claims that are against common sense is the key to an understanding of the real struggle between science and the supernatural. We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. The eminent Kant scholar Lewis Beck used to say that anyone who could believe in God could believe in anything. To appeal to an omnipotent deity is to allow that at any moment the regularities of nature may be ruptured, that miracles may happen (Lewontin, 1997).
Lewontin almost certainly regretted penning this paragraph once it was seized upon by several Christian apologetics organizations and repeated widely around the Internet. However, the validity of Lewontin’s statements cannot be denied. If we are to define science as primarily materialistic or naturalistic, it is only because our presupposed framework compels us to create science in that manner. Science itself does not require a materialistic framework. Instead, the materialistic framework creates the scientific apparatus.

It is important to keep this order in mind. If a scientist argues that science cannot deal with the supernatural, that it is limited to examining only the natural, this statement is correct but only in a trivial sense. It is limited to that extent because the framework presupposes materialism, not because the scientific method could not be extended to include supernatural frameworks too.

Naturally, Lewontin would disagree that science could extend to the supernatural, for he approvingly quotes Lewis Beck’s remark that “anyone who could believe in God could believe in anything.” It is somewhat ironic, however, to read that line immediately after reading how one must accept the “extravagant”, “counter-intuitive”, and “mystifying” claims of materialistic science. In what sense is it the theist who “could believe in anything” when compared to this?

But Lewontin is not the only scientist to make this claim about supernaturalism. Indeed, it is common amongst many scientists (although primarily atheistic scientists) to claim that an appeal to the supernatural is an appeal to “anything.” For instance, Ben-Ari writes:


The variety of supernatural explanations is immense and they can be used to explain the occurrence of any phenomenon. A drought must have been caused by the anger of a god disillusioned with the evil actions of the residents of a region, and a disease must have been caused by the sins of the individual. The problem with supernatural explanations is that they are vacuous. A supernatural entity can be used to explain anything, both a phenomenon and its absence, so it lacks any explanatory or predictive content (Ben-Ari, 2005, p. 29).
Naturally, this is only so if one ignores the framework of each individual supernatural position. That there is a multitude of supernatural beliefs does not mean that each supernatural belief would result in the above characterization. In fact, given the pluralistic and omnibenevolent nature of most supernatural religious beliefs, it is most likely that there are more supernatural views that would disagree with Ben-Ari’s example of attributing a drought to the anger of a disillusioned god than there are supernatural views that would agree with it. Furthermore, this characterization of the supernatural ignores the ability of a believer in the supernatural to hold to God as a “prime cause” working through “secondary causes” throughout nature.

However, this particular detail of the philosophy of science is not very relevant to our current work. For the purposes of this work, we will agree for the sake of argument that science is naturalistic. This is not to agree that it actually is, of course; the agreement is simply due to the fact that there are more pressing concerns at this point.

Because science is considered naturalistic, it is important to bring up another qualification of what science is. Or in this case, what science is not. Science declares that nature is not teleological. This rather ominous-looking word is actually a very simple one to define. Teleology is the study of design (literally “purpose”, from the Greek telos), and a teleological realm would be a designed realm. It is important that we remember that naturalistic science is explicitly not teleological, and this importance is not lost on naturalistic scientists either, as demonstrated below:


Modern science explicitly and emphatically rejects teleology. Physics can describe the trajectory of a falling stone in great detail, but it never attempts to attribute desire or purpose to stones. Biology can describe the evolutionary processes that brought our species Homo sapiens onto the face of the Earth, but it has nothing to say about why we are here, nor even if our existence has any purpose whatsoever. Nevertheless, evocative teleological terminology is often used, deepening the confusion of what science is all about. For example, a biologist might say that a species has adapted to an environmental niche, implying that the species decided to adapt or strove to adapt. Of course, science claims nothing of the sort. Adaptation is simply the outcome of a process of reproduction amid competition and does not require a decision or intention on the part of any member of the species (Ben-Ari, 2005, p. 24).

Teleological explanations have also been rejected by modern science, which seeks to describe the structure and functioning of the universe, without attributing purpose or volition to natural objects (Ben-Ari, 2005, p. 29).
The rejection of teleology goes hand-in-hand with a naturalistic worldview, one that does not view the universe as having been created with a purpose or designed in any manner. It is therefore not surprising that science is so adamant in its rejection of teleological explanations. What is not quite so easy to understand, however, is why scientists continually slip into teleological claims, as we shall see later on in this work, despite their vehemence against teleology.

We now have a fairly solid understanding of how the scientific method is defined. However, if you read the dialogue between Achilles and Tortoise, there are two other aspects that need to be examined. The first is the argument from authority, and the second is the argument from consensus.

Arguments from authority are often the easiest of arguments to fall into. This occurs whenever an individual person becomes the arbiter of truth for science. A simple example of this fallacy might be to say that because Einstein rejected quantum theory, we ought to reject it too.

Science, however, has always resisted arguments from authority. For instance, Henry Gee, a British paleontologist, wrote the following describing his attitude when he first began to conduct research:


[M]y summer work in the Fossil Fish Section often forced me, a complete beginner, to make decisions about taxonomy: I had to reclassify specimens of pteraspid fishes, renaming them according to my reading of Alain Blieck’s thesis. I had to write out new labels and shuffle the entries that each fossil had in the museum card index. On one occasion I had a crisis of confidence. What right had I, a novice who had done no serious work on fossils, to rearrange the national collection? I took my worries to Peter Forey. ‘Don’t worry about it’, he counseled: ‘taxonomy is only a matter of opinion’. The implication was that my opinion counted; it was as valid as the opinion of qualified scientists such as Patterson, Rosen, Gardiner, or Forey (Gee, 1999, p. 154).
Further, we read from Carl Sagan’s The Demon-Haunted World (the work that Lewontin reviewed):

One of the great commandments of science is, “Mistrust arguments from authority.” (Scientists, being primates, and thus given to dominance hierarchies, of course do not always follow this commandment.) Too many such arguments have proved too painfully wrong. Authorities must prove their contentions like everybody else. This independence of science, its occasional unwillingness to accept conventional wisdom, makes it dangerous to doctrines less self-critical, or with pretensions to certitude (Sagan, 1996, p. 28).
Consensus is a slightly different issue than authority, however. While every scientist will strive to reject arguments from authority (except when they “do not always follow this commandment” as Sagan points out), many scientists flock toward the idea of consensus. This is due in part to the arguments that Tortoise used: there is a statistical advantage in presenting your work to as wide a body as possible, and because individuals can err more easily than a group, consensus will tend toward the truth.

However, this concept is likewise disputed. In fact, one of the biggest problems with consensus is that it has the capability of enforcing the perceived dogmas rather than leading one toward the truth. Instead of errors being corrected, consensus can often force errors to remain firmly entrenched because it is “unscientific” to question these errors.

Michael Crichton, in a speech at Caltech, said:


Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you’re being had.

Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.

There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period (Crichton, 2003).
Crichton then listed some of the many times that scientific consensus has been wrong. This includes germ theory, which has saved literally millions of lives, but only after several hundred years of dispute in which it had to overturn the prevailing consensus, and the like. To add to Crichton, we can include the following observation by Leslie Alan Horvitz:


Something similar happened when Alfred Wegener, an astronomer and meteorologist, proposed his theory of continental drift—the idea that the continents were once all joined together and, over the eons, drifted slowly apart until they reached their present locations. It was greeted with derision from geologists in the early twentieth century; today, however, his theory is the foundation of contemporary geology. Einstein’s groundbreaking theories of gravity and light also received a skeptical reception when he proposed them, but both have been confirmed repeatedly in rigorous scientific studies in the years since. Any physics textbook that doesn’t include them would be next to useless (Horvitz, 2002, pp. 4-5).
Crichton concluded the section of his speech dealing with consensus by pointing out the obvious: claims of consensus are only invoked when the science is not solid enough to stand on its own. No one argues, for example, that the sun is 93 million miles away on the basis of consensus. As Crichton says: “It would never occur to anyone to speak that way.” Furthermore, no one argues that Einstein’s claims are right due to consensus. As Horvitz pointed out above, Einstein’s theories “have been confirmed repeatedly in rigorous scientific studies.” Why appeal to consensus when you have scientific studies to fall back on?

Furthermore, to claim that science is consensus is to paint yourself into a circular corner. The consensus of scientists defines what science is, yet the science they define is what is supposed to define the scientists as scientists in the first place. (This vicious circle was pointed out by Achilles, rendering Tortoise’s claims void.)

So while consensus sounds nice at first glance, it is indeed unnecessary to science. Science only requires adherence to reality, and that can be done by a single person even if the consensus is wrong.

Finally, suppose that we use the methods we’ve described above and we come to two competing theories that are both valid under the scientific method, yet are contradictory to each other. How do we decide between these two competing theories? Scientists have a simple rule for this sort of thing. It’s called parsimony, which is a rather non-parsimonious word to describe the process of picking the simplest theory. For example, Gee writes of cladograms:


By convention, cladists choose, as a provisional hypothesis, the most parsimonious solution: the cladogram that requires the fewest evolutionary events to support its topology—in other words, the one that assumes the smallest amount of convergence. Of course, there is no law that says that evolution is always parsimonious. However, in a world in which it is very difficult, and often impossible, to decide whether similarity reflects common ancestry or convergence, it is pragmatic to adopt solutions in which convergence is minimal and start from there. Such solutions are no more than working hypotheses, subject to test, revision—even upset—in the light of subsequent evidence (Gee, 1999, p. 185).
What is true for Gee and his cladograms is also true for the rest of science. The concept of parsimony (often described as using Occam’s Razor) is a shortcut for scientists when weighing two competing theories. It should be noted, as Gee does, that our acceptance of the most parsimonious theory does not mean that this is, in fact, the way things happened [4].
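
As a toy sketch of the convention Gee describes (the trees and event counts below are invented purely for illustration, not taken from any real data set), parsimony amounts to ranking the candidate hypotheses by how many extra events each must assume and provisionally keeping the one that needs the fewest:

# Toy sketch of parsimony: among competing cladograms (hypothetical data),
# provisionally accept the one requiring the fewest evolutionary events.

candidate_cladograms = {
    "tree A": 7,    # number of convergence events each tree must assume
    "tree B": 4,
    "tree C": 11,
}

most_parsimonious = min(candidate_cladograms, key=candidate_cladograms.get)
print(most_parsimonious)  # -> "tree B", held only as a working hypothesis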

Indeed, common experience tells us that sometimes events occur that are not parsimonious. When a parent comes home and finds a broken lamp next to her son (who is conveniently holding a baseball bat), the simplest explanation would be that her son broke the lamp; but the reality may be that the plumber who had come over to fix the leaky sink accidentally broke it, and his explanation (along with the payment for the damages) demonstrates the truth. Similarly, when it comes to some specific scientific theories, the most parsimonious theory (the theory that is supposedly more likely) may in fact be the theory that ends up rejected, because the event actually occurred in a more complicated way than the simplest explanation would have it.

It is most certainly true that when there are two competing theories, the simplest theory is the likelier of the two to be accurate. But the theory that is not the simplest still has a chance, however slight, to be correct. When we add up the sheer number of these theories, statistics tells us that there must be some instances of the non-parsimonious theory being the correct one. In fact, it is much more likely that at least some of the non-parsimonious theories are correct than it is that every single one of them is, in fact, wrong.
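
To see the statistical point, suppose (purely for the sake of illustration; the 5 percent figure below is my assumption, not a measured value) that in each case the less parsimonious rival has only a small chance of being the true one. Over many independent cases, the probability that every single rival is wrong shrinks rapidly:

# Illustrative arithmetic only: assume each non-parsimonious rival has
# a 5% chance of being correct. As the number of independent cases grows,
# the probability that *all* of the rivals are wrong falls toward zero.

p_rival_correct = 0.05

for n_cases in (10, 50, 100, 500):
    p_all_wrong = (1 - p_rival_correct) ** n_cases
    p_at_least_one_right = 1 - p_all_wrong
    print(f"{n_cases:>3} cases: P(at least one rival is right) = "
          f"{p_at_least_one_right:.3f}")

#  10 cases: 0.401
#  50 cases: 0.923
# 100 cases: 0.994
# 500 cases: 1.000 (to three decimal places)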

Yet science, by convention, always chooses the parsimonious theory over the complex theory (and only violates this for specific reasons). This means that statistically speaking, we know for a fact that scientists will sometimes choose the wrong theory. Unfortunately, it is impossible for us to determine which time this occurs. After all, the reason we need to use the concept of parsimony in the first place is because there is no other way to tell which of two competing theories is correct. They both fit the current evidence, and this is why we need something outside that evidence to determine which one we should accept.

As a result, we know that science will err at some point when it always chooses the parsimonious path, but we are unable to tell when those errors occur. We know that they must be there, but it is impossible for us to tell where they are or to rid us of them.

So let us summarize what we have discovered about science. Science is a method of investigation based on a framework of naturalism, wherein theory-laden observations are made, theories are composed based on these observations (along with the rules we discussed for scientific theories), experiments are conducted to test these theories, and the results are replicated by others. These results can never be known with certainty, but must be held only provisionally. Any individual can do science, for there are no appeals to authority within science. Likewise, while consensus might be nice, it is certainly not necessary for science, and in fact it can sometimes enforce an erroneous orthodoxy rather than allow new, truthful ideas to come into play. Finally, because there can often be competing theories that are equally supported by the evidence, science always comes down on the side of the simplest theory. This is usually correct, but we also know that there will be times when picking in this manner is wrong.

Seen in this light, science doesn’t seem quite as “perfect” as it was imagined to be. Science has limitations, not the least of which are the circularity of important aspects of its method and its reliance upon a specific, unproven framework. However, given the fact that science has produced many tangible results (especially in the form of technology), there must be some aspect of the method that works despite these shortcomings. Science, while flawed, is still incredibly useful, and it would be wrong for even the most extreme of supernatural fundamentalists to wage all-out war against science.

Notes:


1. Although Richard Morris disagrees, stating: “There is no scientific method. Scientists, and especially physicists, make use of any method that will work (Morris, 1999, p. 7)” (emphasis added).

2. In scientific measurements, you can tell the precision of a measurement by the number of digits reported (including in exponential notation). For example, if we have a measurement of 10 meters, we do not know whether it is really 10.3 meters rounded to the nearest whole number. If a measurement is reported as 10.0, we have a more exact measurement (although, of course, we still do not know whether it was rounded from 10.04, etc.). Thus, every additional digit gives us more precision. Finally, when measurements are combined in a scientific formula, the answer can only be as precise as the least-precise measurement. Thus, for a simple measurement of velocity (defined as distance divided by time) where the distance measured is 10.0000 meters and the time is 10.0 seconds, the velocity is reported as 1.00 meters per second rather than 1.00000 meters per second, because the time measurement carries only three significant figures.
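
A small sketch of that rule, using the same numbers (the round_sig helper below is my own illustration, not a standard library routine):

from math import floor, log10

def round_sig(value: float, figures: int) -> float:
    """Round a value to a given number of significant figures."""
    if value == 0:
        return 0.0
    return round(value, figures - 1 - floor(log10(abs(value))))

distance = 10.0000   # meters, known to six significant figures
time = 10.0          # seconds, known to only three significant figures

# The quotient can carry no more significant figures than the
# least-precise measurement, so the velocity is 1.00 m/s, not 1.00000 m/s.
velocity = round_sig(distance / time, 3)
print(f"{velocity:.2f} m/s")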

3. Note that by arguments we do not refer to verbal disagreements between individuals. Rather, we use the term “argument” to refer to one (or more) statement(s) that can be examined logically. As such, scientific statements would qualify as logical arguments.

4. Or, as David M. Raup said: “In my experience, about as many people say, ‘Scientific problems rarely have simple answers,’ as say, ‘Where there is a choice, simple explanations are most likely to be correct.’ Both statements are rhetorical rather than analytical, and one hates to see them used as arguments for or against a theory (Raup, 1991, pp. 92-93)”.

Bibliography


Ben-Ari, M. (2005). Just A Theory: Exploring the Nature of Science. Amherst, NY: Prometheus Books.

Cohen, J., & Stewart, I. (1994). The Collapse of Chaos. New York: Viking.

Crichton, M. (2003, January 17). Aliens Cause Global Warming. Retrieved August 16, 2007, from Michael Crichton: The Official Site: http://www.crichton-official.com/speech-alienscauseglobalwarming.html

Gee, H. (1999). In Search of Deep Time. Ithaca, NY: Comstock Publishing Associates.

Hawking, S. (1988). A Brief History of Time. New York: Bantam Books.

Horvitz, L. A. (2002). The Complete Idiot's Guide to Evolution. Indianapolis: Alpha Books.

Lewontin, R. (1997). "Billions and billions of demons". The New York Review of Books.

Morris, R. (1999). The Universe, the Eleventh Dimension, and Everything: What We Know and How We Know It. New York: Four Walls Eight Windows.

Niiniluoto, I. (2007, Spring). Scientific Progress. (E. N. Zalta, Editor) Retrieved 2007, from The Stanford Encyclopedia of Philosophy: http://plato.stanford.edu/archives/spr2007/entries/scientific-progress/

Raup, D. M. (1991). Extinction: Bad Genes or Bad Luck? New York: W. W. Norton & Company, Inc.

Sagan, C. (1996). The Demon-Haunted World. New York: Ballantine Books.

Stenger, V. J. (2007). God, The Failed Hypothesis. Amherst, NY: Prometheus Books.

3 comments:

  1. "Unfortunately, the provisional nature of scientific theories is sometimes lost on scientists. Sometimes, a scientist can “fall in love” with his theory to such an extent that he will refuse to abandon it even after the theory has been demonstrated wrong."

    Sounds like some theologians.


    I'll admit I skimmed this due to its length, but from what I gathered, this is a pretty good treatment of the scientific method according to naturalist presuppositions. I majored in physics and what I understand is that the scientific method differs somewhat based on the type of science one does. For example, many strains of biology tend to be observationally oriented while physics is almost exclusively theoretically oriented. That is to say, a biologist cataloging species will do so solely by virtue of observation. They don't theorize on what species there might be and then go see if that species exists. Physicists formulaically theorize physical properties based on the theories that have already been empirically tested. Then they test their theoretical formulae to see how accurate they are. Refinements in theories are often made from empirical patterns of deviation from their formulae.

    What all of these have in common is the base logic that drives the scientific method is the inductive verification of the premises of a deductive relationship such that one supposes the likelihood of equation from a conditional syllogism, which otherwise would be deductively known as affirming the consequent. Therefore, science can disprove something by virtue of modus tollens, but can never absolutely prove a conclusion to be true. It can only demonstrate the likelihood of the logical relationship.

  2. Jim,

    You're actually touching on one of my upcoming points :-) There *IS* a difference between types of sciences. That is, not all science is equally scientific. For instance, a chemist would never be allowed to get away with the kind of speculation that runs rampant in the "soft" sciences, like paleontology (where you can wildly speculate about the climate an organism lived in when you have three vertebrae and a claw that *may* belong to the same organism).

  3. My nascent impression is that working scientists, for the most part, tend to ignore the philosophy of science discussions. They'd rather do Science than talk about meta-implications and underpinnings of Science.

    But, of course, there are some exceptional scientists who are very well-versed and conversant with Philosophy of Science issues.

    Excellent post, Peter.
