I’ve been asked to discuss what’s wrong with Derk Pereboom’s so-called “Manipulation Argument” (or “Four Case Argument”) against Compatibilism, which is of course the view that causal determinism is compatible with free will. Pereboom argues it’s not. You can find different kinds of critiques of his argument; by, for example, John Danaher; or Jay Spitzley, whose critique I think is more spot on. Spitzley’s critique also involves a discussion of the science of intuition, and how it affects philosophy’s overreliance on “intuitionist” methods (on which Pereboom’s argument depends), as well as a valuable summary and bibliography on a related, and broader, problem in the field of philosophy that I have called out myself: the fact that philosophers have an annoying tendency to Hose Their Thought Experiments—another example of which I wrote on this month already regarding Robert Nozick’s So-Called Experience Machine. Here today I am adding another. Indeed, Pereboom’s mistake is very similar to Nozick’s.

Important Background

Not only am I convinced Compatibilism is true (you’ll find my most extensive discussion of this in my section on free will in Sense and Goodness without God as well as a past series of blog articles since, and all of the following will rely on the contents of these), but I also think free will could only exist as the output of a continuous chain of causes, such that any account of “responsibility-bearing” free will (the only kind anyone cares about) that involves any pertinent break in that chain of causes would actually eliminate free will. For example, if you insist free will is supposed to mean making decisions without being causally determined by one’s character, desires, and reasoning, then you have declared a self-contradiction. For that would mean your decisions are not only random, but causally disconnected from who you are. So in no way can anyone claim you caused those actions. The causal link between “you” and those actions isn’t even present in that case; and you certainly cannot be held responsible for something you did not even cause to happen.

Conversely, if you “break causation” too far back (e.g. you allow people’s character to arise at random, uncaused by any external events), then you are no longer removing their responsibility. Because it does not matter how you came to want some outcome; all we are judging is whether you did. Because we need to know what to do with you, what sort of person you are, and how to cause or prevent others like you from acting that way in the future. In other words, we don’t need to know how you became good or evil to determine whether you are good or evil, and thus how we should respond to you now. And the role of free will as a concept in society (in personal relations and law, and in motivating self-actualization) is solely to determine whether “you” made a given choice or not—or whether your desire, your preference, was thwarted (physically or by another person).

This is why free will as understood in Western law (all the way from Model Penal Codes to U.S. Supreme Court precedents) is thoroughly compatibilist in its construction. Any attempt by a perp to argue they were fated to be a criminal will be met with the response, “Well, then you were also fated to be punished for your crimes.” They don’t get off the hook; all they’ve done is explain how things turned out the way they did—and thereby revealed what we could do differently to prevent future repetitions of that perp’s behavior (by them or others). This is because responsibility and desert are components of a social system that have a function. And that function does not change when background causes do. Background causes are of interest to other operators in reengineering the social system (such as to produce more heroes and fewer criminals); but they aren’t relevant to the separate case of what to do with the products of that social system—the people already produced. What to do with a specific malfunctioning machine is a different question from how to make better machines.

This does mean there is no such thing as “basic desert” in the peculiar sense of just deserving praise or blame for no functional reason. If praise and blame perform no function, then they cease to be warranted—beyond arbitrary emotivism, which produces no objective defense. Like a taste for chocolate, praise and blame would then cease to be anything you could argue about with anyone: just as there is no sense in which anyone “should” like chocolate, there would be no sense in which anyone “should” like anything at all, and thus no sense in which anyone “should” praise or blame anyone for anything. “Well I just like that” is not a persuasive argument that anyone should like it too. The purpose of a behavior, like praise and blame, is therefore fundamental to defending it as anything anyone should emulate. Remaining anchored to the function of assigning responsibility is therefore essential to any understanding of what it takes to produce it, and thus to any understanding of the kind of free will that does produce it.

Pereboom’s Argument

In a nutshell, Pereboom presents a Sorites-style “slippery slope” argument, starting from what he thinks is a clear case of nullifying someone’s free will (a typical “mad scientist” scenario involving a fictional remote control of someone’s neural system) and then moving that scenario by successive steps of analogy closer and closer to just any deterministic world system (over the course of “four cases” in all), establishing (supposedly) that there is “no difference” between the first manipulation case and just any causally deterministic world whatever. Some philosophers attack his slippery slope fallacy, arguing that somewhere along the line the cases become disanalogous and thus don’t carry his point. Others question whether he has even correctly described the original case on which this whole analysis depends. I’m in the latter camp.

So I will only bother discussing that. The “foundation case” of Pereboom’s argument is a scenario in which mad scientists have a secret machine wired into a certain Mr. Plum’s brain that “will produce in him a neural state that realizes a strongly egoistic reasoning process” (and does nothing else) precisely when they determine that that is the only causal link still needed to motivate him to kill a certain Mr. White, such that without adding that cause to the mix at just the right moment, he would not have killed him (even though—and this is key to Pereboom’s argument—Plum’s character was already naturally “frequently egoistic” and just not manifesting so in this one particular case but for this neural machine being activated).

It is silly to go to such lengths to construct this scenario, because we actually already have such scenarios in the real world that have been very thoroughly dealt with in our legal system—and they don’t turn out the way Pereboom thinks. If we just walked up to Mr. Plum and asked him to kill Mr. White in exchange for a hundred thousand dollars, we would have created exactly the scenario Pereboom is trying to imagine: Plum would not have killed White but for our intervention; our intervention succeeds by stimulating the requisite egoistic thinking in Mr. Plum that was otherwise absent in that moment (using the sound of our voice, maybe the sight of cash, and his neural machinery already present in his brain); and his acting on the offer remains in accord with his statistically frequent character (as otherwise he’d turn us down; and likewise Pereboom’s neural machine wouldn’t work).

No one releases such a Mr. Plum from responsibility. He will be adjudged fully responsible in this case in every court of law the world over. Pereboom’s argument thus can’t even get off the ground. He has simply falsely described his scenario as one that releases Mr. Plum from responsibility. But it doesn’t. Mr. Plum has not been tricked—he full well knows that he is choosing to kill someone, and for a reason that even Pereboom’s argument entails is both egoistic (and thus not righteous) and derives entirely from Mr. Plum’s own thinking—because all that the “mad scientists” have done is tip him back into his frequent “egoistic thinking”; they have not inserted a delusion into his brain that becomes his reason for killing White (had they done that, we’d be closer to the real-world case of schizophrenics committing crimes on a basis of uncontrollable false beliefs). And that’s all that courts of law require to establish guilt: a criminal act, performed with a criminal intent. “But they had a machine in his brain and pushed a button to activate it” would bear no relevance whatever at his trial. That would be no different, legally, from “pushing his buttons” metaphorically, by simply persuading him to do something, ginning him up into selfish violence. Guilt stems from his agreeing to go through with it “for egoistic reasons.” He is aware of what he is choosing to do, and chooses to do it anyway.

All the conditions of guilt are thus present. The machine changes nothing. Mr. Plum assented to the action. That he would not have thought to do it but for someone instigating it is irrelevant—apart from the fact that the instigators are also guilty of the same crime: because, in case you forgot, asking someone to kill someone else will land you both in jail, asker and askee. So, too, Pereboom’s mad scientists. Both they and Mr. Plum will go down for the crime of killing Mr. White. Just as in every other like case that already happens in the real world. Conversely, had our “mad scientists” manipulated events to trick Mr. Plum into mistakenly killing Mr. White in self defense, only they would be convicted. They then have committed murder; Mr. Plum has not. He acted without criminal intent. Because self defense is a legitimate defense at law; and it only requires the reasonable belief in the actor that what they are doing is morally and legally permitted. So, tricking Mr. Plum into doing something that he reasonably believes isn’t illegal, which unbeknownst to him is illegal, absolves him of blame. But instigating him (by any device, from persuasion to neural robotics) into doing what he knows is illegal does not absolve him. His knowledge that what he is doing is wrong, and assenting to it anyway, is what makes him responsible. The specific reasons he had, or where they came from, are irrelevant to that point.

Pereboom’s argument thus fails from its very first step, all from simply failing to realize he was describing a scenario that is already standard and dealt with routinely in the most experienced institution for ascertaining responsibility in human history: the modern world legal system, a product of thousands of years of advancement and analysis. Philosophers often do this: argue from the armchair in complete disconnect from the real world and all that it could have taught them had they only walked outside and looked around. Philosophers need to stop doing that. Indeed, philosophers who hose this simple procedure should have their Philosopher Card taken away and be disinvited from the whole field until they take some classes on How Not to Be a Doof and then persuade us they’ll repent and start acting competently for a change.

It is by the same reasoning that Spitzley’s example of a certain “Bob” who “has a migraine that causes his reasoning to be slightly altered in such a way that he decides to kill David and he would have not made this decision if he had not had this migraine” does not describe a case in which any legal system would rule Bob innocent. It does not matter what drove you to do something, as long as you knew it was wrong and did it anyway. Whereas a migraine that caused Bob to hallucinate David attacking him, and then mistakenly kill him in self defense, would get Bob acquitted, because then he is not acting with criminal intent. The likes of Pereboom would understand this if they would bother studying how well-practiced legal systems assign responsibility, instead of just making shit up.

Consider even Spitzley’s point regarding “Mele’s (1995) case of an agent who has someone else’s values implanted into them overnight.” If this were Bob’s fate, he would still be found guilty. Because it does not matter how he became who he is; all that matters is that “who he is” assented to and performed the crime. There is no functional difference between Mele’s scenario and simply reality as it is: people’s character is always in some part a product of outside influences. We don’t exist in a world where this happens “overnight,” but the time scale is irrelevant. Whether twenty years or one night, how you became evil does not somehow magically make you “not evil.” And here I think there is a whole tangle of confusions hosing some people’s intuitions about Mele-style cases, such as conflating the equivalent to “Bob” in Mele’s scenario before the change in values with the Bob after that change. The fact that Bob was a better person once does not make him not a bad person now (or vice versa). Yes, early Bob would never have killed Dave; but late Bob would, and did. And we are presented with, and must judge, late Bob; early Bob no longer exists, and he wasn’t the one who killed Dave. So our intuitions about early Bob are simply not relevant to judging late Bob.

Good Fiction Is Better at This

The reality of what I’ve just explained has already been far better explored in fiction than these confused attempts by Pereboom: consider the BuffyVerse character of Angel. He was a vampire who, as a vampire, was a sociopathic monster, but when in possession of his soul (later inserted by “gypsy magic,” and repeatedly lost and regained by various incidental devices) was still a vampire but also a genuinely heroic person who despised his other “self,” whom most thus distinguished by the Latin form of his name, Angelus. As this plays out realistically in his story arc across two television shows (and subsequent graphic novels), it becomes intuitively obvious that Angel should never be judged by the actions of Angelus. When he is one or the other, he is literally a different person.

Here we have “early Bob” and “late Bob,” except that by a fictional device he can actually instantly switch back and forth between them (typically for reasons beyond his choice or control). But this does not change the analysis of his guilt and character: Angel simply isn’t responsible for what Angelus does, because Angel isn’t the one making those choices (and vice versa). We have no law governing such a case because reality has never presented such a bizarre conundrum. But I am certain if it became common, our legal system would respond just as I predict: Angel would only be guilty of what Angelus does if Angel assentingly chose to become Angelus with criminal intent; just like someone who asks or hires Bob to kill Dave. Beyond that the concern at law would simply be what steps are needed to ensure Angel will “stay on his meds” as it were. In other words, barring any other available solution, Angel would be treated like a schizophrenic: if he ever commits crimes as Angelus, he would be sentenced to treatment that will cause his reversion to the state of being the innocent Angel (which is akin to a sentence of execution declared upon Angelus—or as depicted in the story, a “forced imprisonment” inside Angel’s body). Which for Angel would be functionally equivalent to being acquitted “by reason of mental defect” today.

This is played out credibly well in a different bit of fiction: Hal 9000 in the film and novel 2010 is regarded as guilty of murder only by reason of a conflict in his programming, such that as soon as that is corrected, he is fully restored as a reliable colleague. Murderous Hal simply is no longer the same person as “fixed” Hal. So we don’t ascribe the guilt of one to the other. Nor should we. The only thing that seems counter-intuitive about this is that it is possible to instantly fix someone, converting them from a malevolent to a benevolent person with just some keystrokes. But the reason that feels counter-intuitive is that it doesn’t exist—that kind of thing simply isn’t a part of our real-world experience, and isn’t an available option (yet, at least) in dealing with malevolent persons among us. But we’ve already established in courts of law the analytical logic needed to cope with it if we had to. And that predictable result simply isn’t what Pereboom imagines.

Conclusion

Pereboom trips himself up not only by ignoring all pertinent real-world evidence, but also by gullibly manipulating himself (and thus his readers) with well-known but too-often-ignored tricks of psychology. As Spitzley puts it in his own case against Pereboom’s disastrously hosed thought experiment:

I argue that something independent of the features of determinism best explains why people judge that manipulated agents lack moral responsibility. Therefore, something other than determinism would be incompatible with moral responsibility and the manipulation argument for incompatibilism is unsuccessful. Given the way in which Derk Pereboom’s manipulation argument is presented, it seems extremely likely that seemingly irrelevant psychological influences, such as the order in which he presents his cases, provide a better explanation than the one which Pereboom offers for why readers intuit that determined agents are not morally responsible.

The only actual thing that matters in assigning responsibility (and thus in determining the presence of free will) is whether Mr. Plum’s will is what caused the action, and not someone else’s will overriding or replacing his; or otherwise some impersonal force that has thwarted (as in, acted against) his will. This is why coercion eliminates free will: someone else’s will is being substituted for Plum’s (who actually does not want to kill White, but is left with no reasonable choice by someone else who does want to kill White). Likewise force majeure: in which cases Plum’s will is not even causally involved in what happens (if someone pushes Plum into White resulting in White’s death, or Plum kills White solely because he jerked uncontrollably from an epileptic seizure). Similarly deception: if Plum is tricked into thinking White has to be legally killed (such as in self defense), then his will was not to murder White at all, but to protect the innocent from him, and the only unfortunate feature of this case is that Plum was uncensurably mistaken about that. We thus absolve him for it. But we don’t absolve him of blame if he’s just “having a bad day” or is persuaded to act knowingly and willingly toward an unsavory end.

Free will thus exists when a party consciously assents to an action, that they themselves caused (meaning their assent—their will—was a necessary part of the causal chain resulting in the outcome), based on their beliefs that we can then adjudge them for acting on. In other words, because Plum’s own will is a necessary and informed cause of Plum’s choice, then and only then we can evaluate Plum himself as a person based on what this demonstrates to us about his intentions (be they criminal or censurable or permissible or heroic). Whereas we cannot do this if the causal chain has been broken and therefore Plum’s will didn’t even cause the outcome, or Plum’s will was opposed to doing any such thing but forced or tricked into it against his will.

Pereboom’s thought experiment simply completely ignores all this, the actual conditions for moral and legal responsibility in the real world. And that’s just bad philosophy.
