Wednesday, September 30, 2015

The Terminator

A stock objection to Calvinism is that it would be unjust (or "monstrous") for God to condemn evildoers whom he predestined to commit evil in the first place. They never had a chance to do otherwise. 

Let's assume, for the sake of argument, that determinism (or predeterminism) is incompatible with moral responsibility. Now let's recast the argument by making a comparison.

In the Terminator franchise, a Terminator is a robotic assassin: an artificially intelligent android programmed to kill a particular individual. 

(For some reason they are called cyborgs, but from what I can tell, they don't have any human parts. They merely have a human appearance.)

Terminators are like glorified cruise missiles or smart bombs. They don't necessarily need full-blown consciousness. They just need enough (artificial) intelligence to identify the target, ascertain information on the ground, and adapt to varied situations. 

They don't need "consciousness" in the sense of the internal dimension, viz. first-person viewpoint. They don't need to know "what it's like to be me." 

But since this is all hypothetical, we could endow them with consciousness. That's surplus. 

Terminators are fearful in two respects:

i) They have superhuman strength. They are tireless, relentless, and resourceful. Virtually unstoppable. Humans on the run have to sleep. They don't. Even if you get a head start, they will catch up. 

ii) But, if anything, they are even more fearful in another respect: they are utterly pitiless. That's because they are inhuman. Machines. As such, they are incapable of feeling compassion for another human being. They can't project themselves into our mindset. They don't know what it feels like to be human. You can't appeal to their empathy. There's no hook. 

Now, suppose a Terminator is programmed to kill a child, to preempt what he will become. To change the future.

And to make sure it kills the child, it allows itself a margin of error by planning to wipe out an entire classroom full of second-graders. 

According to the hypothetical under consideration, the Terminator is amoral. Because its actions are programmed, it isn't blameworthy. 

But even if we grant that for the sake of argument, it would be morally imperative to stop the Terminator by any means necessary. Destroy the Terminator before it kills innocent children. 

That's despite the fact (ex hypothesi) that the Terminator isn't a morally responsible agent. Even though it's not culpable, it has no right to endanger the kids.

Neutralizing the Terminator isn't punitive. Rather, it's protecting the innocent. 

BTW, this isn't just hypothetical. There are some real-world analogues. For instance, people on a psychotic drug-high can be dangerous. 

Someone might say that, given a choice, it would be preferable to reprogram the Terminator rather than destroy it. Perhaps so.

However, we don't owe it to the Terminator. A Terminator can, indeed, be reprogrammed. It can be programmed to be a nanny, gardener, chef, quarterback, ballet instructor, or violinist. It can be programmed to be masculine or feminine. 

That's because a Terminator is a blank slate. It has raw intelligence. It has great potential. But it has no innate personality or character traits. Its memory is wiped after each mission. 

It isn't supposed to be any particular way. Its identity is essentially indefinite. It is whatever the programmer wants it to be. 

So it wouldn't be wrong to destroy it rather than reprogram it. You wouldn't be wronging the Terminator. It's not as though it deserves better treatment. For its character is supplied by the programmer.


  1. They are actually humans modified by adding computer parts. That was clear in the last two movies.

    1. Like the vampire mythos and the zombie mythos, what Terminators are is subject to evolution. Initially, I think what Terminators are was driven by budget constraints. The director was making things up on the fly. Improvising on a shoestring budget. This can give rise to inconsistencies. Terminators are predicated to be one thing, but they are depicted as something else, because there wasn't the money or CGI to do it right, or because the director is making snap creative decisions. As the franchise continues, that gets smoothed out.

      It's like some of the continuity problems in Star Trek.

  2. There's also a good amount of stuff about possibly transcending programming in the Sarah Connor Chronicles. It might be in other places too.

  3. On the actual point of your piece, I think there's something to this. If you don't think determinism is compatible with moral responsibility, you have no reason to complain about God mistreating us if we were never morally responsible. But that's not the only argument in this vicinity. You might think it's better for God to create free beings and that God is not being as good as God possibly could be, which many theists would want to avoid, e.g. Leibniz. Unless you think God doesn't have to maximize goodness or actualize the best possible world, there's still that argument. But lots of people think there is no best possible world, and it can always get better, e.g. Aquinas.

  4. Determinism can be incompatible with MR and it still be wrong to mistreat a determined person, and most compatibilists will agree. For example, some kinds of determining do undermine free will, even compatibilist free will. For example, determinatively forcing or coercing or manipulative bypassing. Suppose I stick a computer chip in a person's brain when they're 1, and it allows me to control them with a joystick. Most compatibilists will not think this person is morally responsible when I use my controller to make him kill the president. Yet, most compatibilists will think it's wrong for me to use my joystick and make him bang his head into the wall. (If one doesn't like the example, there are clearly simple changes that can be made to get to the same conclusion, and I don't think those cases are opaque, so I leave to the side developing my point in the face of counterexamples.) So I think one would need to think determinism destroys personhood to get the argument to go through. I think there are some Arminians who have been so bold, but I don't think that's a majority view. The tl;dr version: even on incompatibilism, determined things can be moral *subjects*, and we can't treat moral subjects just any way we wish.

    1. The scope of your statement is unclear:

      i) Are you saying it's wrong to harm a determined person on libertarian grounds? Keep in mind that I'm simply discussing freewill theism on its own terms. If, due to determinism, the agent lacks moral responsibility, then how is it wrong or unjust to harm him? He's an amoral agent: beyond good and evil.

      ii) Is your statement confined to certain kinds of determinism which rob the agent of moral responsibility?

      iii) I agree with you on compatibilist grounds, but I'm not discussing my own position.

      iv) By definition, it's wrong to "mistreat" a person. That's not how I cast the issue in the post. If we're going to use "mistreatment" as the frame of reference, the question is whether it's possible to mistreat a determined person vis-a-vis freewill theism.

      v) Is your statement confined to temporary impairment of moral responsibility, or to an agent that has no innate moral center (i.e. a Terminator)?

    2. I was responding to Jeremy's claim: "If you don't think determinism is compatible with moral responsibility, you have no reason to complain about God mistreating us if we were never morally responsible."

      Ad i) I think that's false. Surely libertarians think you shouldn't harm dogs, at least without sufficient justification. But dogs are not morally responsible. Moreover, many libertarians believe there are actual cases of determinism. Open theists, for example. They don't think that God can harm such persons *simply* because they're not morally responsible. It's also wrong to say that determinism removes moral agency, even according to libertarians. Keep in mind that determinism, by their lights, is incompatible with a very specific kind of moral responsibility, namely responsibility as accountability, the kind where you can be held to account for your actions. It's specifically aimed at *basic desert*--deserving praise or blame. Moral agency is a broader category than morally responsible agent, person is broader than both, and moral subject is broader than all.

      ii) My statement is not, but I gave an example of a way of determining that you'd say rules out responsibility yet the person ought not be harmed. I don't see the relevant difference between you and the libertarian here.

      iii) What does that have to do with it? You think responsibility is incompatible with some of the kinds of determinism I brought up, yet you don't think that in virtue of that, that's a license to harm.

      iv) We can use "harm," but of course the harm can't be due to morally justifiable reasons, otherwise both we and libertarians will agree. This is close to mistreatment for all intents and purposes.

      v) Again, I was responding to Jeremy's claim, which left diachronic and synchronic factors out of it. My statement is confined to incompatibilism, which is how you framed your post. Most libertarians won't say that determinism removes an "innate moral center." Incompatibilism is not the thesis that moral agency is incompatible with determinism. It's not the thesis that responsibility-as-attributability is incompatible with determinism. Libertarians will *attribute* properties like "evil" or "bad" to agents born determined to be bad--Michael Myers, say. They'll just say he's not *accountable* for his actions.

    3. I agree with you if the analogy to the machine isn't relevantly disanalogous. I think libertarians would agree too. That is, if we were amoral machines on incompatibilism, I think they'd be inclined to agree. But if we felt pain, had desires for future goods, etc., they'd probably demur. But here's where I think their challenge lies. Let's grant your main point re: the analogy. Should you reprogram? Well, they would say you should if you *loved* the thing. If you loved it you'd want its good. Destroying it when you could reprogram it isn't consistent with loving it. Their challenge applies, at best, to an all-loving God (that is, a God who loves each and every person).

    4. As Jerry Walls would say, God loves all Terminators. Only a morally monstrous God would refrain from loving a Terminator.

    5. I would say that you've shown why Arminians should avoid the robot and puppet language. If we're relevantly analogous to those things, then Calvinism doesn't have a problem of evil.