Sunday, December 03, 2017

Where do you draw the line?

One conventional way of framing ethical debates is to ask, "Where do you draw the line?" This often takes the form of hypothetical scenarios in which a critic stress-tests the position of the opposing side by taking their standard to a logical extreme. The assumption is that if a moral standard is a matter of principle, then you should be able to extend it indefinitely. 

The objective of this tactic is to force the opposing side to either balk or bite the bullet. If they balk, then the allegation is that they lack conviction in their own position. They defend it in easier cases, but waffle and wobble in tough cases. 

Conversely, if they bite the bullet, then this demonstrates how fanatical they are. Their morality becomes indistinguishable from amorality. 

Ironically, both sides use the same tactic against each other. Deontologists taunt consequentialists with worst-case scenarios for consequentialism, while consequentialists taunt deontologists with worst-case scenarios for deontology. 

Take the ticking time-bomb scenario. Suppose a deontologist says it's intrinsically wrong to torture a terrorist to save innocent lives. A consequentialist retorts by upping the ante to see if the deontologist will crack. What about torturing a terrorist to save elementary school students? While, for some people, it may sound morally monstrous to ever justify torture, what if the alternative is morally monstrous too? 

However, the deontologist can turn the tables. If the terrorist is impervious to torture, or if it takes too long to break him, what about torturing his 5-year-old son in his presence to make him divulge the whereabouts of the bomb? 

At the bottom of this post I quote examples showing that consequentialism and deontology are both vulnerable to moral dilemmas. 

I think this is sometimes a legitimate tactic. I use it myself from time to time. 

However, God hasn't given us a magic formula which we can whip out in every real or hypothetical situation. Must I be able to answer that question in every situation in order to answer it in any situation? Am I morally inconsistent if I lack a uniform answer for every conceivable situation, or every logical extension of a real or hypothetical situation?

In secular ethics, the line is always going to be arbitrary: either arbitrarily stipulated absolutes or an arbitrary cutoff point. The starting-point is artificial. 

In Christian ethics it's somewhat easier because we have some preset boundaries. So we can start there, using that as a reference point or limiting principle. In Christian ethics, some actions are intrinsically right or wrong. So there are some cases we can take off the table. They don't require further consideration. 

There are, however, other cases where extenuating circumstances change the moral character of the action. 

The Bible is not an encyclopedia of ethics that gives us ready-made answers for every moral question. And while the Bible contains a great deal of implicit guidance which we can tease out, we still find ourselves in situations where we must supplement biblical principles with moral intuition. 

Yet moral intuition is tricky. Moral dilemmas play on conflicting moral intuitions. So which intuition takes precedence? How do we referee conflicting moral intuitions?

Likewise, there are borderline cases where our moral intuition lets us down. And we may be confident in ordinary situations, but lose confidence in extraordinary situations. 

The express purpose of some thought-experiments is to generate moral dilemmas. By definition, there's no good answer. It's not a failure on your part if you don't have a good answer to a thought-experiment that was designed to preemptively eliminate good answers, leaving you with nothing but horrendous answers to choose from.

If God doesn't want us to be trapped in real-life situations like that, then it's up to him to providentially coordinate events so that we're never boxed into absolute moral dilemmas. In predestinarian theological traditions, that should be possible.

In freewill theism, God may lack ultimate control over the variables. If so, it's hard to see how God can justly blame us for acting in situations beyond our control and his control alike. 

On a related note, there are situations where we must leave the results in God's hands. We didn't create the situation. We do the best we can in that situation, but it's up to God to limit the damage. We are finite agents with finite wisdom, foresight, and power. 

In considering the question, "Where do you draw the line?", we need to consider a preliminary question. Is this the kind of case where that's a reasonable question to ask? 

Sometimes that's a good question to ask. Sometimes there are good answers. Sometimes not having a good answer shows the position was ad hoc.

But there are other cases where we may be at a loss. The fact that we can't answer that question in some cases doesn't automatically falsify our position in cases where we're able to answer it. And the fact that we run out of reasons when the principle is extended to more extreme or ambiguous situations doesn't necessarily nullify the reasons we can give in ordinary cases. 


Another problem for utilitarianism is that it seems to overlook justice and rights. One common illustration is called Transplant. Imagine that each of five patients in a hospital will die without an organ transplant. The patient in Room 1 needs a heart, the patient in Room 2 needs a liver, the patient in Room 3 needs a kidney, and so on. The person in Room 6 is in the hospital for routine tests. Luckily (for them, not for him!), his tissue is compatible with the other five patients, and a specialist is available to transplant his organs into the other five. This operation would save their lives, while killing the “donor”. There is no other way to save any of the other five patients (Foot 1966, Thomson 1976; compare related cases in Carritt 1947 and McCloskey 1965).

We need to add that the organ recipients will emerge healthy, the source of the organs will remain secret, the doctor won't be caught or punished for cutting up the “donor”, and the doctor knows all of this to a high degree of probability (despite the fact that many others will help in the operation). Still, with the right details filled in, it looks as if cutting up the “donor” will maximize utility, since five lives have more utility than one life (assuming that the five lives do not contribute too much to overpopulation). If so, then classical utilitarianism implies that it would not be morally wrong for the doctor to perform the transplant and even that it would be morally wrong for the doctor not to perform the transplant. Most people find this result abominable. They take this example to show how bad it can be when utilitarians overlook individual rights, such as the unwilling donor's right to life.
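
To make the quoted calculation explicit, here is a minimal sketch in my own notation, not the article's: assume each life saved contributes one unit of utility and that, with the details filled in as above, nothing else differs between the two outcomes. Then the classical utilitarian comparison is simply

\[
U(\text{transplant}) = 5 \;>\; 1 = U(\text{refrain}),
\]

so the theory not only permits the transplant but, as the article says, makes refusing it morally wrong.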

Utilitarians can bite the bullet, again. They can deny that it is morally wrong to cut up the “donor” in these circumstances. Of course, doctors still should not cut up their patients in anything close to normal circumstances, but this example is so abnormal that we should not expect our normal moral rules to apply, and we should not trust our moral intuitions, which evolved to fit normal situations (Sprigge 1965). Many utilitarians are happy to reject common moral intuitions in this case, like many others (cf. Singer 1974, Unger 1996, Norcross 1997).

Most utilitarians lack such strong stomachs (or teeth), so they modify utilitarianism to bring it in line with common moral intuitions, including the intuition that doctors should not cut up innocent patients. One attempt claims that a killing is worse than a death. The doctor would have to kill the “donor” in order to prevent the deaths of the five patients, but nobody is killed if the five patients die. If one killing is worse than five deaths that do not involve killing, then the world that results from the doctor performing the transplant is worse than the world that results from the doctor not performing the transplant. With this new theory of value, consequentialists can agree with others that it is morally wrong for the doctor to cut up the “donor” in this example.

A modified example still seems problematic. Just suppose that the five patients need a kidney, a lung, a heart, and so forth because they were all victims of murder attempts. Then the world will contain the five killings of them if they die, but not if they do not die. Thus, even if killings are worse than deaths that are not killings, the world will still be better overall (because it will contain fewer killings as well as fewer deaths) if the doctor cuts up the “donor” to save the five other patients. But most people still think it would be morally wrong for the doctor to kill the one to prevent the five killings. The reason is that it is not the doctor who kills the five, and the doctor's duty seems to be to reduce the amount of killing that she herself does. In this view, the doctor is not required to promote life or decrease death or even decrease killing by other people. The doctor is, instead, required to honor the value of life by not causing loss of life (cf. Pettit 1997).
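
A rough way to display the two verdicts (the weights are my assumption, not the article's): score a world's disvalue as k per killing plus d per death that is not a killing, with k > d to register that a killing is worse than a mere death. Then:

\[
\begin{aligned}
\text{original case: } & \underbrace{k}_{\text{transplant}} \ \text{vs.} \ \underbrace{5d}_{\text{refrain}}, \quad \text{so the transplant can be worse whenever } k > 5d;\\
\text{modified case: } & \underbrace{k}_{\text{transplant}} \ \text{vs.} \ \underbrace{5k}_{\text{refrain}}, \quad \text{so the transplant is better, since } k < 5k.
\end{aligned}
\]

This is why the killing-is-worse-than-death repair handles the original case but not the modified one.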

This kind of case leads some consequentialists to introduce agent-relativity into their theory of value (Sen 1982, Broome 1991, Portmore 2001, 2003). To apply a consequentialist moral theory, we need to compare the world with the transplant to the world without the transplant. If this comparative evaluation must be agent-neutral, then, if an observer judges that the world with the transplant is better, the agent must make the same judgment, or else one of them is mistaken. However, if such evaluations can be agent-relative, then it could be legitimate for an observer to judge that the world with the transplant is better (since it contains fewer killings by anyone), while it is also legitimate for the doctor as agent to judge that the world with the transplant is worse (because it includes a killing by him). In other cases, such as competitions, it might maximize the good from an agent's perspective to do an act, while maximizing the good from an observer's perspective to stop the agent from doing that very act. If such agent-relative value makes sense, then it can be built into consequentialism to produce the claim that an act is morally wrong if and only if the act's consequences include less overall value from the perspective of the agent. This agent-relative consequentialism, plus the claim that the world with the transplant is worse from the perspective of the doctor, could justify the doctor's judgment that it would be morally wrong for him to perform the transplant. A key move here is to adopt the agent's perspective in judging the agent's act. Agent-neutral consequentialists judge all acts from the observer's perspective, so they would judge the doctor's act to be wrong, since the world with the transplant is better from an observer's perspective. In contrast, an agent-relative approach requires observers to adopt the doctor's perspective in judging whether it would be morally wrong for the doctor to perform the transplant. This kind of agent-relative consequentialism is then supposed to capture commonsense moral intuitions in such cases.
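
One way to sketch the agent-relative proposal (the notation is mine, a sketch rather than the cited authors' formalism): let V_e(w) be the value of world w from evaluator e's perspective. Agent-neutral consequentialism requires V_e to be the same function for every e; the agent-relative version drops that requirement and judges an agent's act from the agent's own perspective:

\[
a \text{ is wrong for agent } x \iff \text{some alternative } a' \text{ satisfies } V_x(w_{a'}) > V_x(w_a).
\]

In Transplant, an observer o may rank the transplant-world above the refrain-world, since it contains fewer killings by anyone, while the doctor x ranks them the other way, since the transplant-world contains a killing by him; hence the commonsense verdict that his performing the transplant would be wrong.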

Agent-relativity is also supposed to solve other problems. W. D. Ross (1930, 34–35) argued that, if breaking a promise created only slightly more happiness overall than keeping the promise, then the agent morally ought to break the promise according to classic utilitarianism. This supposed counterexample cannot be avoided simply by claiming that keeping promises has agent-neutral value, since keeping one promise might prevent someone else from keeping another promise. Still, agent-relative consequentialists can respond that keeping a promise has great value from the perspective of the agent who made the promise and chooses whether or not to keep it, so the world where a promise is kept is better from the agent's perspective than another world where the promise is not kept, unless enough other values override the value of keeping the promise. In this way, agent-relative consequentialists can explain why agents morally ought not to break their promises in just the kind of case that Ross raised.

Similarly, critics of utilitarianism often argue that utilitarians cannot be good friends, because a good friend places more weight on the welfare of his or her friends than on the welfare of strangers, but utilitarianism requires impartiality among all people. However, agent-relative consequentialists can assign more weight to the welfare of a friend of an agent when assessing the value of the consequences of that agent's acts. In this way, consequentialists try to capture common moral intuitions about the duties of friendship (see also Jackson 1991).


Kant's bold proclamation that “a conflict of duties is inconceivable” (Kant 1780, p.25) is the conclusion wanted, but reasons for believing it are difficult to produce. 

Fourth, there is what might be called the paradox of relative stringency. There is an aura of paradox in asserting that all deontological duties are categorical—to be done no matter the consequences—and yet asserting that some of such duties are more stringent than others. A common thought is that “there cannot be degrees of wrongness with intrinsically wrong acts…” (Frey 1995, p.78 n.3). Yet relative stringency—“degrees of wrongness”—seems forced upon the deontologist by two considerations. First, duties of differential stringency can be weighed against one another if there is conflict between them, so that a conflict-resolving, overall duty becomes possible if duties can be more or less stringent. Second, when we punish for the wrongs consisting in our violation of deontological duties, we (rightly) do not punish all violations equally. The greater the wrong, the greater the punishment deserved; and relative stringency of duty violated (or importance of rights) seems the best way of making sense of greater versus lesser wrongs.

Fifth, there are situations—unfortunately not all of them thought experiments—where compliance with deontological norms will bring about disastrous consequences. To take a stock example of much current discussion, suppose that unless A violates the deontological duty not to torture an innocent person (B), ten, or a thousand, or a million other innocent people will die because of a hidden nuclear device. If A is forbidden by deontological morality from torturing B, many would regard that as a reductio ad absurdum of deontology.

Deontologists have six possible ways of dealing with such “moral catastrophes” (although only two of these are very plausible). First, they can just bite the bullet and declare that sometimes doing what is morally right will have tragic results but that allowing such tragic results to occur is still the right thing to do. Complying with moral norms will surely be difficult on those occasions, but the moral norms apply nonetheless with full force, overriding all other considerations. We might call this the Kantian response, after Kant's famous hyperbole: “Better the whole people should perish,” than that injustice be done (Kant 1780, p.100). One might also call this the absolutist conception of deontology, because such a view maintains that conformity to norms has absolute force and not merely great weight.

This first response to “moral catastrophes,” which is to ignore them, might be further justified by denying that moral catastrophes, such as a million deaths, are really a million times more catastrophic than one death. This is the so-called “aggregation” problem, which we alluded to in section 2.2 in discussing the paradox of deontological constraints. John Taurek famously argued that it is a mistake to assume harms to two persons are twice as bad as a comparable harm to one person. For each of the two suffers only his own harm and not the harm of the other (Taurek 1977). Taurek's argument can be employed to deny the existence of moral catastrophes and thus the worry about them that deontologists would otherwise have. Robert Nozick also stresses the separateness of persons and therefore urges that there is no entity that suffers double the harm when each of two persons is harmed (Nozick 1974). (Of course, Nozick, perhaps inconsistently, also acknowledges the existence of moral catastrophes.) Most deontologists reject Taurek's radical conclusion that we need not be morally more obligated to avert harm to the many than to avert harm to the few; but they do accept the notion that harms should not be aggregated. 

The second plausible response is for the deontologist to abandon Kantian absolutism for what is usually called “threshold deontology.” A threshold deontologist holds that deontological norms govern up to a point despite adverse consequences; but when the consequences become so dire that they cross the stipulated threshold, consequentialism takes over (Moore 1997, ch.17). A may not torture B to save the lives of two others, but he may do so to save a thousand lives if the “threshold” is higher than two lives but lower than a thousand.

There are two varieties of threshold deontology that are worth distinguishing. On the simple version, there is some fixed threshold of awfulness beyond which morality's categorical norms no longer have their overriding force. Such a threshold is fixed in the sense that it does not vary with the stringency of the categorical duty being violated. The alternative is what might be called “sliding scale threshold deontology.” On this version, the threshold varies in proportion to the degree of wrong being done—the wrongness of stepping on a snail has a lower threshold (over which the wrong can be justified) than does the wrong of stepping on a baby.
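
Schematically (my notation, offered only as a sketch): let H measure how bad the consequences of complying with a categorical norm n would be, and let w(n) measure the stringency of n. Then:

\[
\begin{aligned}
\text{simple threshold: } & \text{violating } n \text{ is permissible iff } H > T, \text{ with } T \text{ fixed for all norms};\\
\text{sliding scale: } & \text{violating } n \text{ is permissible iff } H > T(w(n)), \text{ with } T \text{ increasing in } w(n).
\end{aligned}
\]

On the sliding-scale version, the snail-norm's low w(n) yields a low threshold, while the baby-norm's high w(n) yields a much higher one.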

Threshold deontology (of either stripe) is an attempt to save deontological morality from the charge of fanaticism. It is similar to the “prima facie duty” version of deontology developed to deal with the problem of conflicting duties, yet threshold deontology is usually interpreted with such a high threshold that it more closely mimics the outcomes reached by a “pure,” absolutist kind of deontology. Threshold deontology faces several theoretical difficulties. Foremost among them is giving a theoretically tenable account of the location of such a threshold, either absolutely or on a sliding scale (Alexander 2000; Ellis 1992). Why is the threshold for torture of the innocent at one thousand lives, say, as opposed to nine hundred or two thousand? Another problem is that whatever the threshold, as the dire consequences approach it, counter-intuitive results appear to follow. For example, it may be permissible, if we are one-life-at-risk short of the threshold, to pull one more person into danger who will then be saved, along with the others at risk, by killing an innocent person (Alexander 2000). Thirdly, there is some uncertainty about how one is to reason after the threshold has been reached: are we to calculate at the margin on straight consequentialist grounds, use an agent-weighted mode of summing, or do something else? A fourth problem is that threshold deontology threatens to collapse into a kind of consequentialism. Indeed, it can be shown that the sliding scale version of threshold deontology is extensionally equivalent to an agency-weighted form of consequentialism (Sen 1982).

The remaining four strategies for dealing with the problem of dire consequence cases all have the flavor of evasion by the deontologist. Consider first the famous view of Elizabeth Anscombe: such cases (real or imagined) can never present themselves to the consciousness of a truly moral agent, because such an agent will realize it is immoral to even think about violating moral norms in order to avert disaster (Anscombe 1958; Geach 1969; Nagel 1979). Such rhetorical excesses should be seen for what they are, a peculiar way of stating Kantian absolutism motivated by an impatience with the question.

Another response by deontologists, this one most famously associated with Bernard Williams, shares some of the “don't think about it” features of the Anscombean response. According to Williams (1973), situations of moral horror are simply “beyond morality,” and even beyond reason. (This view is reminiscent of the ancient view of natural necessity, revived by Sir Francis Bacon, that such cases are beyond human law and can only be judged by the natural law of instinct.) Williams tells us that in such cases we just act. Interestingly, Williams contemplates that such “existentialist” decision-making will result in our doing what we have to do in such cases—for example, we torture the innocent to prevent nuclear holocaust.

Surely this is an unhappy view of the power and reach of human law, morality, or reason. Indeed, Williams (like Bacon and Cicero before him) thinks there is an answer to what should be done, albeit an answer very different from Anscombe's. But both views share the weakness of thinking that morality and even reason run out on us when the going gets tough.

Yet another strategy is to divorce completely the moral appraisals of acts from the blameworthiness or praiseworthiness of the agents who undertake them, even when those agents are fully cognizant of the moral appraisals. So, for example, if A tortures innocent B to save a thousand others, one can hold that A's act is morally wrong but also that A is morally praiseworthy for having done it.

Deontology does have to grapple with how to mesh deontic judgments of wrongness with “hypological” (Zimmerman 2002) judgments of blameworthiness (Alexander 2004). Yet it would be an oddly cohering morality that condemned an act as wrong yet praised the doer of it. Deontic and hypological judgments ought to have more to do with each other than that. Moreover, it is unclear what action-guiding potential such an oddly cohered morality would have: should an agent facing such a choice avoid doing wrong, or should he go for the praise?

The last possible strategy for the deontologist in order to deal with dire consequences, other than by denying their existence, as per Taurek, is to distinguish moral reasons from all-things-considered reasons and to argue that whereas moral reasons dictate obedience to deontological norms even at the cost of catastrophic consequences, all-things-considered reasons dictate otherwise. (This is one reading of Bernard Williams's famous discussion of moral luck, where non-moral reasons seemingly can trump moral reasons (Williams 1975, 1981); this is also a strategy some consequentialists (e.g., Portmore 2003) seize on as well in order to handle the demandingness and alienation problems endemic to consequentialism.) But like the preceding strategy, this one seems desperate. Why should one even care that moral reasons align with deontology if the important reasons, the all-things-considered reasons that actually govern decisions, align with consequentialism?
