Showing posts with label Metascience.

Thursday, February 16, 2017

Reductionism

From John Lennox in his book God's Undertaker: Has Science Buried God? (pp. 52-55):

The great mathematician David Hilbert, spurred on by the singular achievements of mathematical compression, thought that the reductionist programme of mathematics could be carried out to such an extent that in the end all of mathematics could be compressed into a collection of formal statements in a finite set of symbols together with a finite set of axioms and rules of inference. It was a seductive thought with the ultimate in 'bottom-up' explanation as the glittering prize. Mathematics, if Hilbert's Programme were to succeed, would henceforth be reduced to a set of written marks that could be manipulated according to prescribed rules without any attention being paid to the applications that would give 'significance' to those marks. In particular, the truth or falsity of any given string of symbols would be decided by some general algorithmic process. The hunt was on to solve the so-called Entscheidungsproblem by finding that general decision procedure.

Experience suggested to Hilbert and others that the Entscheidungsproblem would be solved positively. But their intuition proved wrong. In 1931 the Austrian mathematician Kurt Gödel published a paper entitled 'On Formally Undecidable Propositions of Principia Mathematica and Related Systems'. His paper, though only twenty-five pages long, caused the mathematical equivalent of an earthquake whose reverberations are still palpable. For Gödel had actually proved that Hilbert's Programme was doomed in that it was unrealizable. In a piece of mathematics that stands as an intellectual tour-de-force of the first magnitude, Gödel demonstrated that the arithmetic with which we are all familiar is incomplete: that is, in any system that has a finite set of axioms and rules of inference and which is large enough to contain ordinary arithmetic, there are always true statements of the system that cannot be proved on the basis of that set of axioms and those rules of inference. This result is known as Gödel's First Incompleteness Theorem.

Now Hilbert's Programme also aimed to prove the essential consistency of his formulation of mathematics as a formal system. Gödel, in his Second Incompleteness Theorem, shattered that hope as well. He proved that one of the statements that cannot be proved in a sufficiently strong formal system is the consistency of the system itself. In other words, if arithmetic is consistent then that fact is one of the things that cannot be proved in the system. It is something that we can only believe on the basis of the evidence, or by appeal to higher axioms. This has been succinctly summarized by saying that if a religion is something whose foundations are based on faith, then mathematics is the only religion that can prove it is a religion!
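Lennox's summary of the two theorems can be stated a little more formally. The following is the standard textbook formulation (not Lennox's own notation): for any consistent, recursively axiomatized formal system $F$ strong enough to encode ordinary arithmetic,

```latex
% First Incompleteness Theorem: there is a sentence G_F such that
F \nvdash G_F \qquad \text{and} \qquad F \nvdash \neg G_F ,
% i.e. G_F is undecidable in F, even though G_F is true
% in the standard model of arithmetic.

% Second Incompleteness Theorem: let Con(F) be the arithmetized
% statement "F is consistent"; then
F \nvdash \mathrm{Con}(F).
```

The second theorem is the one Lennox draws on above: if $F$ is consistent, that very fact is among the truths $F$ cannot prove about itself.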

In informal terms, as the British-born American physicist and mathematician Freeman Dyson puts it, 'Gödel proved that in mathematics the whole is always greater than the sum of the parts'. Thus there is a limit to reductionism. Therefore, Peter Atkins' statement, cited earlier, that 'the only grounds for supposing that reductionism will fail are pessimism in the minds of the scientists and fear in the minds of the religious' is simply incorrect.

That there are limits for reductionism in science itself is borne out by the history of science, which teaches us that it is important to balance our justifiable enthusiasm for reductionism by bearing in mind that there may well be (and usually is) more to a given whole than simply what we obtain by adding up all that we have learned from the parts. Studying all the parts of a watch separately will not necessarily enable you to grasp how the complete watch works as an integrated whole. There is more to water than we can readily see by investigating separately the hydrogen and oxygen of which it is composed. There are many composite systems in which understanding the individual parts of the system may well be simply impossible without an understanding of the system as a whole – the living cell, for instance.

Besides methodological reductionism, there are two further important types of reductionism: epistemological and ontological. Epistemological reductionism is the view that higher level phenomena can be explained by processes at a lower level. The strong epistemological reductionist thesis is that such 'bottom-up' explanations can always be achieved without remainder. That is, chemistry can ultimately be explained by physics; biochemistry by chemistry; biology by biochemistry; psychology by biology; sociology by brain science; and theology by sociology. As the Nobel Prize-winning molecular biologist Francis Crick puts it: 'The ultimate aim of the modern development in biology is, in fact, to explain all biology in terms of physics and chemistry.'

This view is shared by Richard Dawkins. 'My task is to explain elephants, and the world of complex things, in terms of the simple things that physicists either understand, or are working on.' Leaving aside for the moment the very questionable assertion to which we must return below that the subject matter of physics is simple (think of quantum mechanics, quantum electrodynamics or string theory), the ultimate goal of such reductionism is evidently to reduce all human behaviour – our likes and dislikes, the entire mental landscape of our lives – to physics. This view is often called 'physicalism', a particularly strong form of materialism. It is not, however, a view which commands universal support, and that for very good reasons. As Karl Popper points out: 'There is almost always an unresolved residue left by even the most successful attempts at reduction.'

Scientist and philosopher Michael Polanyi helps us see why it is intrinsically implausible to expect epistemological reductionism to work in every circumstance. He asks us to think of the various levels of process involved in constructing an office building with bricks. First of all there is the process of extracting the raw materials out of which the bricks have to be made. Then there are the successively higher levels of making the bricks – they do not make themselves; brick-laying – the bricks do not 'self-assemble'; designing the building – it does not design itself; and planning the town in which the building is to be built – it does not organize itself. Each level has its own rules. The laws of physics and chemistry govern the raw material of the bricks; technology prescribes the art of brick-making; brick-layers lay the bricks as directed by the builders; architecture teaches the builders; and the architects are controlled by the town planners. Each level is controlled by the level above. But the reverse is not true. The laws of a higher level cannot be derived from the laws of a lower level – although what can be done at a higher level will, of course, depend on the lower levels. For example, if the bricks are not strong there will be a limit on the height of the building that can safely be built with them.

Or take another example, quite literally to your hand at this moment. Consider the page you are reading just now. It consists of paper imprinted with ink (or perhaps it is a series of dots on the computer screen in front of you). It is surely obvious that the physics and chemistry of ink and paper (or pixels on a computer monitor) can never, even in principle, tell you anything about the significance of the shapes of the letters on the page; and this has nothing to do with the fact that physics and chemistry are not yet sufficiently advanced to deal with this question. Even if we allow these sciences another 1,000 years of development it will make no difference, because the shapes of those letters demand a totally new and higher level of explanation than physics and chemistry are capable of giving. In fact, complete explanation can only be given in terms of the higher level concepts of language and authorship, the communication of a message by a person. The ink and paper are carriers of the message, but the message certainly does not arise automatically from them. Furthermore, when it comes to language itself, there is again a sequence of levels. You cannot derive a vocabulary from phonetics, or the grammar of a language from its vocabulary, etc.

As is well known, the genetic material DNA carries information. We shall describe this later on in some detail; but the basic idea is that DNA can be thought of as a long tape on which there is a string of letters written in a four-letter chemical language. The sequence of letters contains coded instructions (information) that the cell uses to make proteins. But the order of the sequence is not generated by the chemistry of the base letters.
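The "tape of letters" picture can be made concrete with a toy sketch. The codon table below is a tiny, illustrative subset of the standard genetic code (the real table has 64 entries), and `translate` is a hypothetical helper name for this illustration, not biological software:

```python
# Toy illustration: a DNA sequence as a string over the four-letter
# alphabet A, C, G, T, read three letters (one codon) at a time.
# Only a handful of the 64 standard codons are included here.
CODON_TABLE = {
    "ATG": "M",  # methionine (also the usual start codon)
    "AAA": "K",  # lysine
    "TTT": "F",  # phenylalanine
    "GGG": "G",  # glycine
    "TAA": "*",  # stop codon
}

def translate(dna: str) -> str:
    """Read the sequence codon by codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "?")  # "?" = not in the toy table
        if amino == "*":
            break
        protein.append(amino)
    return "".join(protein)

print(translate("ATGAAATTTTAA"))  # the order of amino acids is fixed by the
                                  # sequence on the tape, not by the chemistry
                                  # of the individual letters
```

The point of the sketch mirrors Lennox's: the mapping from codons to amino acids is perfectly lawlike, but nothing in the chemistry of A, C, G and T dictates which particular sequence is written on the tape.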

In each of the situations described above, we have a series of levels, each higher than the previous one. What happens on a higher level is not completely derivable from what happens on the level beneath it. In this situation it is sometimes said that the higher level phenomena 'emerge' from the lower level. Unfortunately, however, the word 'emerge' is easily misunderstood, and even misleadingly misused, to mean that the higher level properties arise automatically from the lower level properties without any further input of information or organization – just as the higher level properties of water emerge from combining oxygen and hydrogen. However, this is clearly false in general, as we showed earlier by considering building and writing on paper. The building does not emerge from the bricks nor the writing from the paper and ink without the injection of both energy and intelligent activity.

Thursday, August 15, 2013

Scientism is not science

Bill Vallicella's post taking apart scientism is worth a read.

(As an aside, an appropriately conducted randomized controlled trial (RCT) would fit the "five characteristics of science in the strict and eminent sense." I haven't read the article Vallicella cites on these characteristics, but perhaps this is part of what's in mind as well.)

Tuesday, March 12, 2013

Empirically equivalent

The following is from Alister McGrath's The Dawkins Delusion? Atheist Fundamentalism and the Denial of the Divine (pp. 36-37):

[L]et's consider a statement made by Dawkins in his first work, The Selfish Gene.

[Genes] swarm in huge colonies, safe inside gigantic lumbering robots, sealed off from the outside world, communicating with it by tortuous indirect routes, manipulating it by remote control. They are in you and me; they created us, body and mind; and their preservation is the ultimate rationale for our existence.

We see here a powerful and influential interpretation of a basic scientific concept. But are these strongly interpretative statements themselves actually scientific?

To appreciate the issue, consider the following rewriting of this paragraph by the celebrated Oxford physiologist and systems biologist Denis Noble. What is proven empirical fact is retained; what is interpretative has been changed, this time offering a somewhat different reading of things.

[Genes] are trapped in huge colonies, locked inside highly intelligent beings, moulded by the outside world, communicating with it by complex processes, through which, blindly as if by magic, function emerges. They are in you and me; we are the system that allows their code to be read; and their preservation is totally dependent on the joy that we experience in reproducing ourselves. We are the ultimate rationale for their existence.

Dawkins and Noble see things in completely different ways...They simply cannot both be right. Both smuggle in a series of quite different value judgments and metaphysical statements. Yet their statements are "empirically equivalent." In other words, they both have equally good grounding in observation and experimental evidence. So which is right? Which is more scientific? How could we decide which is to be preferred on scientific grounds? As Noble observes — and Dawkins concurs — "no-one seems to be able to think of an experiment that would detect an empirical difference between them."

Monday, March 11, 2013

Madness in the method

How does science work? How is science done?

Of course, a term like "science" is itself hard to pin down.

But leaving that aside, many people believe science is done solely, predominantly, optimally, or fundamentally through the scientific method.

In fact, some take hold of this idea (or something like it) to then use as part and parcel of their argument for metaphysical naturalism. For example, that's what happened in Steve's recent debate with an ex-Christian commenter named Ryan (1 | 2 | 3 | 4).

But is the traditional scientific method the only, main, best, or most essential way to discover "scientific" truths about nature or creation? By contrast, physicist Gregory N. Derry's chapter titled "A Bird's Eye View: The Many Routes to Scientific Discovery" (pdf) in his book What Science Is and How It Works presents other lines of inquiry.

Derry lists five different ways scientists have discovered new knowledge:

  1. Serendipity and Methodological Work: Roentgen's Discovery of X-rays
  2. Detailed Background and Dreamlike Vision: Kekulé's Discovery of the Structure of Benzene
  3. Idealized Models and Mathematical Calculations: The Discovery of Band Structure in Solids
  4. Exploration and Observation: Alexander von Humboldt and the Biogeography of Ecosystems
  5. The Hypothetico-Deductive Method: Edward Jenner and the Discovery of Smallpox Vaccine

Saturday, October 27, 2012

The limits of science

i) I think scientific realism is paradoxical. Here’s one reason. Scientific realism aims at providing an objective, third-person description of the world. Not only is that the aim, but that’s a presupposition.

However, science ultimately depends on observation. On the human observer. So underlying the third-person perspective is a first-person perspective. And it’s hard to see how science can bootstrap a third-person perspective from a first-person perspective.

ii) But the paradox runs even deeper. According to a scientific analysis of sensory perception, we don’t perceive the world directly. Rather, our perception of the world is mediated by various intervening processes. Physical objects generate sound waves, light waves, &c. That’s a form of coded energy or coded information. When that reaches our eyes, ears, and other sensory relays, that’s translated into different coded energy. Say, from electromagnetic signals to electrochemical signals.

The upshot is that my internal representation of the external world is coded information. I have a mental image of a tree. But if the scientific analysis of sensory perception is correct, then my mental representation isn’t a miniature image of the tree, but a coded analogue.

Yet if that’s the case, then there’s no reason to assume the mental representation resembles the external object, any more than musical notation resembles sound.

We tend to think of the eyes as cameras which take photographs of the outside world. On that view, the difference between the tree “out there” and my mental image is basically a difference in scale and dimensionality (i.e. a 2D image of a 3D object).

But it’s hard to see (pardon the pun) how a process of coding energy is likely to yield a readout that resembles the distal stimulus.

iii) And that’s not the end of the paradox. For we’re having to use sensory perception to analyze sensory perception. A circular procedure. So we can’t get behind the process to study the process apart from the process, for we are part of the very process we study! The percipient perceiving himself.

In a scientific analysis of sensory perception, we’re tacitly assuming a viewpoint independent of the observer. A viewpoint over and above the process. We imagine the tree “out there.” We imagine the tree generating light waves. We track the light waves as they impinge on the retina. We continue to trace the process from the outside into the brain.

But that’s an illusion. For the scientific analysis is ultimately on the receiving end of the process. Hence, we’re never in a position to retrace the process.

But in that event, the deceptively objective scientific description is even further removed from reality than appears to be the case.

So the conclusion circles back and falsifies the premise. That leaves us totally in the dark.

iv) And it’s truly insoluble given naturalism. Contrast that to Christian theism. If God made us, if God made the world, then I can understand how God could coordinate what the tree is really like, outside the observer, with the observer’s mental picture of the tree. God could design a process in which the output resembles the input.

But how would an unguided evolutionary process be able to compare what the tree is really like with our mental representation of the tree? There’s no overarching intelligence to compare the two in advance and create a chain-of-custody in which appearance and reality eventually match up.

v) Unbelievers argue for methodological naturalism on the grounds that leaving divine intervention out of the picture contributed to the tremendous progress and success of modern science and technology. Science continues to explain things that ignorant, superstitious folk used to explain by recourse to gods and demons.

From a historical standpoint, there may be a grain of truth to that portrayal, but I think it’s largely true of pagan polytheism. In polytheism, there is no unifying principle, no centralized command-and-control. Rather, you have a turf war between competing gods, who vary in their knowledge and power. Indeed, the gods themselves are the product of a cosmic process.

But in OT monotheism, there’s a single sovereign Creator God behind everything that happens. So everything is coordinated. God creates an order of second causes.

vi) Scientific realism also assumes or stipulates the uniformity of nature. And there’s a measure of truth to that. That’s somewhat analogous to divine providence. But according to providence, natural events are guided by a higher intelligence, unlike the uniformity of nature, which is driven by mindless forces.

vii) In addition, from a Christian standpoint, historical causation includes factors like answered prayer and coincidence miracles.

These involve divine “intervention.” This type of “intervention” doesn’t necessarily “interrupt” the “natural” course of events. Not like jumping into the middle of things to change course. Rather, it’s more like a stacked deck where the cards were shuffled ahead of time to yield a specific, predetermined sequence of events. Viewed from the outside, it all looks perfectly “natural.” But there’s a higher intelligence directing the process behind-the-scenes to yield a particular conjunction of seemingly fortuitous events.

This is generally imperceptible, because the significance of the outcome is only meaningful to a particular individual in need. He recognizes how this outwardly ordinary event is extraordinarily opportune for him.

There’s no telling how often answered prayer or coincidence miracles are a driving force in history, for you have to be an insider to appreciate the answer or the “coincidence.” But these are “causes” no less than “natural” causes.

Sunday, September 23, 2012

Chronos and cloning

One of the staple objections to young-earth creationism is the allegation that God would be guilty of deception if he made a world that appeared to be younger than it really is. I’ve responded to this objection before, but now I’d like to consider the issue from another angle.

Medical science may well be nearing the point where it can clone replacement organs. Suppose you’re going blind due to glaucoma or macular degeneration. Or suppose you’re congenitally blind. In 10-20 years, science may be able to clone you a brand new pair of eyes. An eye transplant based on your own DNA.

Of course, normally, adult eyes take years to develop, from conception through gestation and maturation. Yet your “instant” eyes might only take a few weeks to clone in the laboratory. Would the medical profession be guilty of deception under those circumstances? Should we discourage the development of cloned replacement organs on ethical grounds because the cloned organs appear to be fully mature organs which only took a tiny fraction of the normal time to cultivate?

Saturday, July 28, 2012

Methodological self-refutation

The major reason unbelievers give for rejecting Gen 1 is that Gen 1 is said to be unscientific, or contrary to science. We know from modern cosmology, geology, botany, and zoology that that’s not how it happened.

But let’s hold that thought for a moment and compare that to another consideration. For many of the same unbelievers who reject Gen 1 on scientific grounds also subscribe to methodological naturalism. Here’s a representative statement of methodological naturalism:


There are two basic principles of science that creationism violates. First, science is an attempt to explain the natural world in terms of natural processes, not supernatural ones. This principle is sometimes referred to as methodological naturalism…Nonmaterial causes are disallowed.

When a creationist says, “God did it”, we can confidently say that he is not doing science. Scientists do not allow explanations that include supernatural or mystical powers for a very important reason. To explain something scientifically requires that we test explanations against the natural world. A common denominator for testing a scientific idea is to hold constant (“control”) at least some of the variables influencing what you are trying to explain. Testing can take many forms, and although the most familiar test is the direct experiment, there exist many research designs involving indirect experimentation, or natural or statistical control of variables.

Science’s concern for testing and control rules out supernatural causation. Supporters of the “God did it” argument hold that God is omnipotent. If there are omnipotent forces in the universe, by definition, it is impossible to hold their influences constant; one cannot “control” such powers. Lacking the possibility of control of supernatural forces, scientists forgo them in explanation. Only natural explanations are used. No one yet has invented a theometer, so we will just have to muddle along with material explanations.


For reasons I’ve given elsewhere, I think methodological naturalism is unscientific. But for the sake of argument, let’s play along with methodological naturalism.

If we take that methodology for granted, then what does it mean to say Gen 1 is unscientific? It would mean that things didn’t happen that way if you leave God out of the picture.

But this also means that if you do take God into account, then you’re in no position to say it didn’t happen that way. In fact, Eugenie Scott’s explicit justification for methodological naturalism is that “if there are omnipotent forces in the universe, by definition, it is impossible to hold their influences constant; one cannot ‘control’ such powers.”

But in that event, she can’t rule out the possibility (or even probability) that Gen 1 is factual. Moreover, she can’t say Gen 1 has been falsified by the scientific evidence, for on her definition, scientific evidence can’t take divine agency into account. Therefore, it would be viciously circular for her to appeal to the scientific evidence against Gen 1 if, by definition, her method disallows supernatural causes. For in that case, she’s preemptively excluded potential counterevidence. By her own admission, allowing for the possibility of divine agency introduces uncontrollable variables into the process. But if science can’t make allowance for divine agency, then science can’t say what God would or would not have done in that situation. Indeed, on that definition, science can’t even say that divine agency is improbable in that situation. She’s disqualified science from making judgments about divine agency one way or the other. But that leaves the question open-ended.

Thursday, May 03, 2012

Folk epistemology

Larry Laudan
It was, as always, a delight reading Alex Rosenberg’s essay. It has the usual Rosenberg characteristics: trenchant, clear-headed and unafraid in its dismissal of various influential idols of the tribe. I have scarcely more use than he does for folk psychology or folk religion (although I think he might have shown a slightly more nuanced grasp of Sellars’ distinction between the ‘scientific image’ and the ‘manifest image’, rather than arguing for the wholesale repudiation of the latter).
What puzzles me about the piece is that Rosenberg grounds his scientism on what can only be regarded as a traditional thesis of ‘folk epistemology’. He is, unabashedly, a scientific realist. That realism rests on the quaint belief that, because scientific theories –at least the best of them– predict and explain a staggering range of phenomena, we do and should suppose that they are true. None of the principal arguments of his essay goes anywhere without this version of Putnam’s so-called miracles argument. Rosenberg makes his core epistemic thesis very explicit: “The reason we trust physics to be scientism’s metaphysics is its track record of fantastically powerful explanation, prediction and technological application.” Can there be anyone (Rosenberg included) who believes that this core assumption is unproblematic?
Among the many troubles with the thesis that “science works so well that it must be true” is that working well, even working very well, is no guarantee that one has managed to cut the world at its joints or that these currently well-working beliefs won’t become casualties of the next scientific revolution. The history of science is a minefield littered with the remains of theories that once worked very well indeed (yes, even to the point of making surprising, precise predictions successfully) but eventually came unstuck, as they encountered one anomaly after another. Ptolemy’s astronomy, Newton’s physics, stable-continent geology, and classical chemical atomism are only a few examples of theories that were strikingly successful empirically until they eventually stumbled over grave anomalies.
Scientism tends to ignore this inconvenient historical fact. That is scarcely surprising since the success of false theories has to be rather unnerving for any project that is founded on the inference rule “X works so X must be true.” Worse, this particular piece of abductive inference is a prime example of precisely the sort of folk epistemology that a hard-headed skeptic like Rosenberg would ordinarily be scathing about. While he is (properly) keen on stressing how empirical research has established that one premise after another of folk psychology, folk physics and folk biology is ill-founded, he acts as if our folk beliefs about empirical support (and let there be no mistake that the inference from apparent success to prima facie truth is deeply rooted in human doxastic practices) can be taken as largely if not wholly unproblematic.
There would thus appear to be a disconnect between the skepticism Rosenberg brings to most folk practices and his readiness to ground truth claims for science on what boils down to the fallacy of affirming the consequent.
So, Alex, cheer up; the situation looks as glum as you describe it only because you have made yourself hostage to a highly implausible and incorrigibly folk account of what it takes to establish the truth of a theory.


http://onthehuman.org/2009/11/the-disenchanted-naturalists-guide-to-reality/comment-page-1/#comment-570

Monday, August 22, 2011

Cycles of Time


I’ve been reading Roger Penrose’s new book, Cycles of Time. I’ll be quoting some excerpts to illustrate my post.

He proposes a cyclical theory of the universe, according to which the final phase of the universe resembles the initial phase, and therefore circles back to the beginning–to be reborn–like the ancient myth of eternal return. On his view, the universe has a recurring lifecycle.

In the course of his exposition he discusses the measurement of time, which is contingent on the existence of massive particles. Yet in his cosmology, black holes devour massive particles, leaving the universe, in its final stages, emitting massless radiation.

An implication of his cosmological model, as I understand it, is that every time the universe is “reborn,” it resets the clock.

An ironic consequence of his cosmology is that it resembles a secularized version of omphalism, with its distinction between prochronic time and diachronic time. Cyclical time rather than linear time.

My immediate point is not to defend either omphalism or conformal cyclical cosmology. My point, rather, is that no scientist would attack Penrose’s theory because it’s “deceptive.” They might attack his theory on other grounds, but not because it’s “deceptive.”

They’d allow the universe to be whatever it is. That’s something to be discovered. Not something we prescribe or proscribe in advance.

It is important for the physical basis of general relativity theory that extremely precise clocks actually exist in Nature, at a fundamental level…for there is a clear sense in which any individual (stable) massive particle plays a role as a virtually perfect clock…any stable massive particle behaves as a very precise quantum clock, which “ticks away” with the specific frequency in exact proportion to its mass.
 
To build a clock we do need mass. Massless particles (e.g. photons) alone cannot be used to make a clock, because their frequencies would be zero; a photon would take until eternity before its internal “clock” gets even to its first “tick”!
 
According to a massless particle, the passage of time is as nothing. Such a particle can even reach eternity before encountering the first “tick” of its internal clock….To put this another way, it would appear that rest-mass is a necessary ingredient for the building of a clock, so if eventually there is little around which has any rest-mass, the capacity for making measurements of the passage of time would be lost.
 
Photons and gravitons are both massless, so it seems not unreasonable to adopt a philosophy, relevant to the very remote future, that since, in a very late stage in the universe’s history it would in principle be impossible to build a clock out of such material, then the universe itself, in the remote future, would somehow “lose track of the scale of time”…

R. Penrose, Cycles of Time (Alfred A. Knopf 2011), 93-94, 146, 150.
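The "specific frequency in exact proportion to its mass" that Penrose appeals to is the standard quantum-mechanical relation; the following is a sketch of that standard physics, not a quotation from the book:

```latex
% Combining E = mc^2 with Planck's E = h\nu, a stable particle of
% rest-mass m defines a natural clock frequency
\nu = \frac{m c^{2}}{h}.
% Setting m = 0 gives \nu = 0: a massless particle's internal "clock"
% never reaches its first tick, which is why a universe containing only
% photons and gravitons retains no intrinsic measure of the scale of time.
```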