
Monday, December 26, 2016

Bayesian epistemology

Bayesian epistemology has a following both among some Christian philosophers and among some secular philosophers. Now perhaps I just don't get it, but I don't share their enthusiasm:

1. I've always found it artificial to divvy up probability into prior and posterior probability. I guess the idea goes like this: What are the odds that a car with a particular license plate will be in the airport parking garage? 

Well, you could begin with an abstract figure, based on general background information. Say, the total number of cars in the US (plus a few from Canada). Or perhaps you could narrow it down to the total number of cars in the geographical region serviced by that airport. The odds of a car with that license plate would be 1 out of that total. That would be the prior probability. And it would be staggeringly low.

But suppose you know that the car belongs to an airport employee. Based on that specific information, you now update the probability. That's the posterior probability.

Okay, but if you already have all the available information, why even begin with a prior probability, as if you don't know that the car belongs to an airport employee? 

Why set it up so that you begin with something astronomically unlikely, which must then be overcome by the addition of specific information? If you already have the specific information at your fingertips, why the two-step process? Why pretend that the odds of a car in the parking garage with that license plate must meet some abstract threshold if you knew all along that it belongs to an airport employee? 

And in that event, aren't the abstract odds simply irrelevant? That would only be a starting point if you didn't have specific information. But if you happen to know that the car belongs to an airport employee, aren't the odds a distraction? Aren't probabilities beside the point? You don't need to offset the prior improbability to believe there's a car in the parking garage with that license plate. You can just see that it's there. Or, if that's reported to you, and if, in addition, you're told that it belongs to an airport employee, then there's nothing unlikely about the fact that there's a car in the parking garage with that license plate, since that's his car! 

And at that point, why would the prior probability even figure in the overall assessment? Seems to me that's only germane if all you have to work with is general background knowledge. 
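To make the bookkeeping concrete, here's one way to put toy numbers on it in Python. The counts are invented, and I'm carving up the event a bit differently from the 1-out-of-every-car figure above, but the point is just that the two-step Bayesian route and the direct route land on the same number once the specific information is in hand:

```python
# Toy counts, all hypothetical, chosen only to illustrate the bookkeeping.
total_cars      = 2_000_000   # cars in the region the airport serves
employee_cars   = 2_000       # cars belonging to airport employees
garage_cars     = 5_000       # cars in the garage at the moment
garage_employee = 1_500       # of those, cars belonging to employees

# H: the specific car (the one with that plate) is in the garage right now.
# Before we know anything else, treat it as exchangeable with any regional car.
p_H = garage_cars / total_cars          # the "abstract" prior: small

# E: the car belongs to an airport employee.
p_E         = employee_cars / total_cars
p_E_given_H = garage_employee / garage_cars

# Two-step route: start from the prior, then update by Bayes' theorem.
p_H_given_E_bayes = p_E_given_H * p_H / p_E

# Direct route: if you already know E, just count within the employee cars.
p_H_given_E_direct = garage_employee / employee_cars

print(p_H)                 # 0.0025
print(p_H_given_E_bayes)   # 0.75
print(p_H_given_E_direct)  # 0.75 -- same answer, no detour through the prior
```

Either route gives 0.75 with these made-up counts, which is the point: once you condition on everything you know, the abstract prior does no further work.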

2. I also agree with critics like William Dembski and W. L. Craig that we can't assign prior probability values to divine intent. Personal agency is unpredictable in a way that natural processes are not. 

3. Finally, I disagree with apologists who think reported miracles must meet a higher evidential threshold than mundane claims to be credited. (Perhaps, though, that's not intrinsic to Bayesian epistemology.) 

If a classmate tells me that he saw his sister levitating in the bedroom, and that's all I have to go by, I discount the claim. Unless I have reason to think there's no natural explanation, I don't find claims like that credible.

If, however, there's evidence that his sister is a demoniac, then (assuming levitation is symptomatic of demonic possession) the report becomes credible. However, I wouldn't say that demands a higher standard of evidence. Rather, it simply demands relevant evidence. 

Perhaps the apologists would say that proves their point. The prior probability of levitation is remote. If, however, I have countervailing evidence, that may shift the probability. 

And that makes sense if I don't know any more about it. If, though, I happen to have that additional evidence, why should I separate the total evidence into prior and posterior probability? Why compartmentalize the evidence when I have the extra evidence to better assess the claim?  

1 comment:

  1. Bayesian probability is definitely slippery in such contexts. I tend to turn off the moment someone appeals to it. Scott Alexander has some relevant remarks here.

    I find Bayesian probability very helpful in some contexts. For instance, it is very helpful for explaining why a generally administered 95% accurate medical test for a condition that only 1% of the population has is quite probably a bad idea, as most of the positive results will be false ones. It also, controversially, helps to explain why the popular 'two identical CVs save for race/gender/etc.' tests for discrimination are more complicated than they appear.
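    A quick sketch of that calculation, reading "95% accurate" as 95% sensitivity and 95% specificity (my assumption), with the 1% prevalence from above:

    ```python
    # Base-rate sketch: 95% sensitivity/specificity, 1% prevalence (illustrative reading).
    prevalence  = 0.01   # 1% of the population has the condition
    sensitivity = 0.95   # P(positive | condition)
    specificity = 0.95   # P(negative | no condition)

    true_pos  = sensitivity * prevalence              # 0.0095
    false_pos = (1 - specificity) * (1 - prevalence)  # 0.0495

    p_condition_given_positive = true_pos / (true_pos + false_pos)
    print(p_condition_given_positive)  # ~0.16, so roughly 5 in 6 positives are false
    ```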
