In my next article I will be answering the new questions posed in the 2020 iteration of the PhilPapers survey (a new development I just wrote about). But one of those new questions requires a separate article all its own: the one worded, "Experience machine (would you enter?): yes or no?" This refers to a poorly articulated thought experiment contrived and badly run by Robert Nozick. Philosophers have a tendency to hose their own thought experiments. This is no exception. So it is difficult to make real use of the data here, because I don't know whether the PhilPapers respondents are doing the same thing Nozick did (misconceiving and thus incorrectly running the experiment), which would dictate a different answer than if they ran the experiment correctly. So the results are largely useless, not least because respondents were never asked why they answered as they did, which is the one thing Nozick was trying to discern.

The basic gist of the experience machine is to ask, if you could go live in a simverse where you could experience all the same pleasures as in the realverse, would you? That isn’t the way Nozick words it, but it distills what he is actually describing; and rewording it thus I believe would change people’s answers, yet without changing what Nozick actually meant (or had to mean, given the argument he tries to make from it), which illustrates how he is inadvertently manipulating results with semantics rather than more clearly describing the scenario he wants to explore to get an accurate and useful answer. Crucial to his experiment is that the “experience machine” can reproduce all pleasures of the real world (so that there is no pleasure-access reason to not plug into it). But this crashes into a tautology when pleasures are only caused by believing certain things are real.

Nozick would certainly try to salvage his intent by specifying, let’s say, that you would be alone in your simverse, and thus all pleasures deriving from interacting with other people there would be fake. But this would undermine his argument: if you know it will be fake (as the experiment requires that you do, certainly at the time of making the choice, as in all Magic Pill thought experiments, cf. my old discussion under “The Magic Pill Challenge”), you will be incapable of deriving the same pleasure from it, yet that is a required condition of the experiment. Hence the machine can’t produce “the same quality” of pleasures, and thus it won’t meet the one essential condition his experiment, and entire argument, requires. Because apart from the question of the reality of human interaction, we already know from VR machines today that at the level of sophistication Nozick’s machine is supposed to obtain, there is no pertinent difference between, for example, climbing a real or a virtual mountain. In both cases you are accomplishing a task by interacting with a presented environment to within your developed abilities.

Really the difference is even less substantive than that. Because there actually literally is no pertinent difference between, for example, “fake simverse sugar” and “realverse sugar,” because this is not a thought experiment: we actually are in that relevant simverse already. Human experience is a simulation constructed by the brain. “Sweetness” does not exist outside our minds; sugar molecules have no such property. It is entirely a fabricated experiential response. Likewise every other aspect of pleasure. And it’s actually impossible for it to be any other way. Experiential pleasure does not and cannot exist in the physical world but as the computed product of information processing: i.e. of an experience machine, in our case the brain. So in actual fact, we are already in Nozick’s “experience machine.”

This would mean the actual options for Nozick’s thought experiment really are: would you prefer to live outside your brain (which is physically and probably logically impossible: experience does not exist anywhere outside some “experience machine” or other) or inside it? No rationally informed person would answer anything other than “inside it, obviously.” Because the alternative is literally choosing to be dead—to unplug from all experiences whatever. Nozick did not realize (nor evidently have most philosophers answering this question realized) that he is simply describing our current situation: we live consciously only because we live inside an experience machine, of just exactly the sort he describes, and we could not live consciously any other way. Hence there is no pertinent difference between, for example, Los Angeles out here, and Los Angeles inside Grand Theft Auto: both have fixed, explorable parameters, from geography to resources to sights and venues; both can be interacted with and changed; and so on. So the only pertinent difference between a simverse and a realverse is merely one of real estate. Is it better there? That’s the only question that matters.

It is clear that Nozick intended his "experience machine" to be a deceptive device, whereby you aren't even making decisions but being tricked into thinking you are, and people don't exist there, you only think they do. And so on. But he doesn't clearly frame the experiment in those terms—and couldn't, because it would expose a fatal flaw in it, insofar as it's supposed to prove something he wants about why people do things. So this is bad philosophy. Running the experiment correctly (the machine can reproduce any real-world pleasure), my answer for PhilPapers here would have been "yes," a genuine simverse would be better real estate, so I'd certainly immigrate, along with 13% of other philosophers apparently, possibly the few who actually noticed what I did about all this; the other 76% are being snowed by Nozick's faulty semantics, and really answering a different question than we are: whether they'd consent to be deceived into pleasurable cognitive states—as opposed to merely simulated ones, which is not the same thing. But Nozick's description of the experiment never mentions being deceived; it hinges entirely on knowing what's really happening and choosing it anyway. Assuming deception is happening (and thus being chosen) is to run the experiment wrong—or to run a different experiment than described.

The whole experiment should thus be trashed as framed and the actual questions Nozick wanted to answer should have been asked instead: do we prefer mere pleasure as an experience disconnected from what produces it, or does the pleasure we derive from something depend on our beliefs about it being factually true? This is a more interesting question, and more easily answered. Though it is properly a scientific question under the purview of psychology, and not really a question philosophers should claim to be able to answer on their own, there’s enough science to back an answer here: we do indeed derive pleasures from our cognition of circumstances that cannot be obtained without it.

Nozick wants to separate the mere experience of pleasure (like an arbitrary orgasm machine) from the cognitive side of understanding what is producing the pleasure (like sex with an actual person, with whom you are sharing an understanding of their mental states, desires, and pleasure-experiences), so as to argue that, because these are not one-to-one identical, our motivation to do things is not simply pleasure, and therefore “utilitarianism is false.” But this is a string of non-sequiturs. That the cognitive side of what causes a pleasure matters, does not replace pleasure itself as the goal; it merely constrains what things will cause us pleasure (or pleasures of certain kinds and degrees). So the first step in his reasoning fails. You can’t separate pleasure from cognitions about its cause; cognitions about its cause are a source of pleasure. And no form of utilitarianism disregards this fact. So the second step in his reasoning also fails.

Basically, as folk would say, “You can’t get there from here.”

To be clear at this point, I also find all this talk about "pleasures" bad form anyway. What we really prioritize are satisfaction states, which are pleasurable states in and of themselves; but all pursuit of individual pleasures is derivative of this, not fundamental. We pursue pleasures in order to obtain satisfaction states (and there can of course be greater and lesser satisfaction states, hence states that are "more satisfying" than others). Thus "desire utilitarianism" is closer to a correct evaluation of human axiology than traditional utilitarianism, meaning Nozick isn't even on the right path to any pertinent conclusions about anything here, even from the start. But we can set this aside, because the same conclusions follow (or don't) even if we replaced his "pleasures" with our "satisfaction states," so for convenience I will continue in his idiom.

Like all bad philosophy, Nozick constructed his experiment to rationalize conclusions he already started with and wanted to be true (in effect, that "pleasure is not our sole reason for doing things, therefore something else motivates us"), which are represented in his given reasons for "not" wanting to be in an experience machine:

  1. We supposedly want things to be real, not just pleasurable (e.g. we want to “actually” win at a game of cards, not merely feel or falsely remember that we did);
  2. We supposedly don’t want to just be floating in a tank or something (e.g. we want our physical bodies at the card table, or to be actually heroic; we don’t want to virtually be there, or to fake it);
  3. Simverses are more limited than realverses (e.g. there might be things in the realverse we can discover or do that weren’t thought of so as to be made possible in the simverse).

But (1) does not contradict the thesis that pleasure is what we seek, as it only ramifies what we will find pleasurable; (2) is demonstrably false (people enjoy “sitting at virtual tables” so much that an entire multi-billion-dollar industry thrives on it: we call them video games, in which we can genuinely “be” honest, clever, heroic, anything we like); and (3) is contradicted by his own thought experiment: he himself stated as a condition that there can be no pleasures accessible in the realverse not accessible in the simverse; in fact his entire experiment depends on that condition. So (3) cannot be a reason not to plug into the machine he described, as it by definition can never be an outcome of doing so. In my experience, Nozick is a rather bad philosopher (this isn’t the only example). Indeed, he has also confused in case (3) yet again (a) a ramification of what we find pleasurable with (b) a reason other than pleasure to pursue something. So he simply isn’t really getting the conclusions he wants; yet, ironically, he is deceiving himself into thinking he has. He’s stuck in his own experience machine.

Of course Nozick may have wanted to specify instead an experiment where, really, the main concern was with whether merely the pleasure alone mattered (the experience of it), such as we derive from human interactions (the only thing that would be meaningfully “absent” in his scenario, as our enjoyment of virtual worlds in video games now proves), or if it mattered that the interactions be real. For example, as with any Magic Pill thought experiment, the notion is whether you would choose to live a lie if it could be guaranteed you’d never know it (though obviously you must know it at the time you choose this state, like Cypher in The Matrix when he asks Agent Smith for this very thing). That does not actually address Nozick’s interest, because if the “comparable pleasure” requires you to falsely believe you are interacting with real people, then his claim that our goal is not pleasure is not supported; all he has shown is that we do set pleasure as our goal, and can merely be tricked into it.

That is uninformative. Think of a romantic relationship, which brings you great pleasure and which you pursue for that very reason, but then you discover it was all a lie, and they were conning you. It does not follow that, therefore, you were not pursuing that romance for pleasure. That conclusion is a non sequitur. So, too, with "Nozick's" experience machine. It simply can't get the results he wants. And he fails to detect this, because he can't even run his own experiment correctly: forgetting that his own description of the experiment rules out his third reason for refusing to plug in to it; not discovering from self-reflection that simulated experiences entail constructing the same explorable environments and the same opportunities for realizing the person you want to be as the real world provides, thereby ruling out his second reason for refusing to plug into it; and not realizing that cognition of a state is itself a source of pleasure, or that the two are not properly separable, eliminating every other reason for not wanting to plug into it. One does not pursue the cognition, if the pleasure does not result; and fooling someone into the cognition so as to produce the corresponding pleasure would be rejected as unpleasurable by anyone aware that is happening. Deceiving someone into feeling a pleasure does not demonstrate they pursue anything for reasons other than pleasure; to the contrary, it only demonstrates more assuredly that they pursue things for no other reason.

This holds even against Nozick’s shallow declaration that the momentary displeasure someone would feel upon choosing a life of being deceived for themselves would be outweighed (in utilitarian fashion) by the ensuing life full of fake pleasures. This forgets self-reflection is a thing. Think it through: you could be this person right now. So it is not the case that displeasure at choosing such a condition would be limited to when the choice was made. The moment you lived at all self-reflectively you would continue to be horrified by the prospect that everyone you know is a fake automaton and your entire life is a lie. As Gary Drescher points out in Good and Real, the only way to avoid being perpetually stuck in that dissatisfaction state (after already accounting for the scenario’s inherent improbability) is to assure yourself that you would never have chosen such a thing; which requires that you be the sort of person who wouldn’t. Ergo, you’d never choose such a condition. Hence, your answer to this scenario is, “No.”

The heart, I think, of Nozick’s intellectual failure here is to confuse pleasure with its causes. He wants to think that the causes matter more than the effect. But that isn’t the case. The causes only matter because of the effect; which is precisely the conclusion he is trying to refute. Yet his own experiment, properly conducted, only reinforces that conclusion; it doesn’t undermine it, as he mistakenly believes. There is really only one useful takeaway from all this, which gets at least somewhere near a point Nozick wants to make: that merely feeling pleasure, divorced from all other cognitive content, is not a sustainable human goal. We would, ultimately, find that dissatisfying, and thus it would cut us off from much more enjoyable satisfaction states. I discussed something like this recently in The Objective Value Cascade: if we were rationally informed of all accessible pleasure-states, and in one case all we would have is the contextless feeling of pleasure, while in the other case we would have context-dependent pleasures, we would work out at once that the latter is the preferable world (our future self there would win any argument with our future self in the other as to which future self we now would want then to be). I think this is sort of what Nozick wants to get as the answer. But he mistakenly leaps from that to “pleasure is not our only reason for doing things,” which is a non sequitur. He has confused “we will prefer more to less pleasurable states” with “we do not pursue pleasure-states.”

The error in his experiment thus turns, really, on the role of deception. Nozick can’t even superficially get to his conclusion without it. As I just noted, apart from deception, we are already in his experience machine: all pleasure is a virtual invention of an experience machine (our brain, presently). So that can’t get us to his conclusion. His conclusion thus depends on the assumption that something remains intolerably fake, and there really is only one thing that could be (as I just noted): fake human interaction, tricking us into thinking we are experiencing interactions with real people, when we aren’t. He mentions other things (like achievements, e.g. my example of “really” winning at poker vs. being tricked into thinking you have), but even after we set aside all the counter-examples disproving this (e.g. people actually do enjoy and thus pursue playing poker virtually, even against machines), the remaining cases still all boil down to the same analysis: once you become aware that it’s fake, the pleasure is negated, and once given the choice, you would not choose the fake option; because the real option is more pleasurable. And you know this, so you know you can’t have chosen it in the past, and therefore you won’t choose it in future. That Nozick can conceive of tricking people into not knowing this, does not get him the conclusion that pleasure is not why we do things. All it does is reveal that we can produce pleasure by deception; but it still remains the reason anyone is doing anything.

The convoluted way Nozick is trying to get around this inescapable revelation is by contriving a Magic Pill scenario, in effect asking whether you would choose now to be deceived in the future, e.g. tricked into thinking someone genuinely loves you rather than is conning you, merely to achieve the corresponding pleasure-states of believing someone genuinely loves you. No rationally informed person would choose to do that, and for the quite simple reason that it displeases them to think of themselves now being in that state in the future. And this is not just experienced upon choosing, as Nozick incorrectly asserts; as I just explained, you will be existentially confronting this possibility, and its undesirability, every day of your life. Thus pleasure is still defining the choice.

Bad philosophy comes in many forms. Here, we see it characterized by: (1) reliance on fallacious and self-contradictory reasoning (rather than carefully burn-testing your argument for such, and thus detecting and purging any such components); (2) not carrying out a thought experiment (especially one’s own) as actually described, or not describing the experiment you actually want to run; and (3) starting with a pre-determined conclusion, and contriving an elaborate argument by which to rationalize it, rather than doing what we should always do: trying, genuinely and sincerely and competently, to prove your assumptions false, and only having confidence in those assumptions when that fails (see Advice on Probabilistic Reasoning).

For instance, here, Nozick wants to think that because cognitive content matters to whether something is pleasurable (which is true), therefore something other than pleasure is what we actually pursue (which does not follow). But this can be tested, by simply removing that single variable from the control case: if you could choose between an unknowingly-fake love affair that gave you pleasure and a genuine love affair that didn’t, would you choose the latter? The rationally informed answer is always going to be no. Someone might answer yes, by thinking “at least in the genuine case I’ll have some genuine pleasures,” but then they’d be doing the experiment wrong, because the stated condition rules out that outcome. You are supposed to be comparing two conditions whereby the second contains no produced pleasures, not “some.” Bad philosophy. Good philosophy would apprehend this and thus correctly run the experiment. And its result would disprove the null hypothesis that “we don’t pursue things for pleasure.” This would not be the result Nozick wants. But truth does not care what we want.

More to the point of getting at least a usable conclusion in this subject: if posed the binary options "an unknowingly-fake love affair that gave you pleasure or a genuine love affair that didn't," most people would apprehend an excluded middle here: why can't we have a third option, a genuine love affair that pleases us? (Or any other genuine state that does.) Obviously that's the thing someone would choose over both other options, if it were available. And there is no other option left to consider in the possibility-space (e.g. "a genuine love affair that made you miserable" would still satisfy condition two, "a genuine love affair that didn't give you pleasure," as would "a genuine love affair that brought you neither pleasure nor misery"). But this still disproves the null: the reason someone chooses "a genuine love affair that pleases us" over "an unknowingly-fake love affair that pleases us" is that our cognition of the difference brings us pleasure. It does so not only when we choose it, but also every moment we continue to enjoy the product of that choice. Because the only reason it brings us pleasure is our knowledge of its genuineness.

As I wrote once with regard to a different Magic Pill thought experiment:

Just ask anyone which person they would rather be right now: (A) a mass murderer without a memory of it, or (B) someone with reasonably accurate memories of who they are and what they’ve done. Would it disturb them to find out they were (A)? Yes. So why would they choose to be that person? Indeed, when would they ever? 

The same follows for Nozick’s machine. If what we are really talking about is a machine not that merely produces pleasure without context or creates actual contexts similar to those in the real world (like video games aim to do), but a machine that deceives us into experiencing a pleasure we would not experience if we knew the truth (a machine that convincingly lies to us about the contexts we are in), the question then is no longer whether we pursue objects for pleasure, but whether we would be pleased or not to be deceived into pleasure-experiences (now or ever). The answer to that question is: no, this would not please us; hence we would not choose it. This is why, I suspect, 76% of philosophers did indeed answer “No” to the question. But that doesn’t get us to Nozick’s conclusion that pleasure is not what we pursue objects for. And insofar as we see it that way (and thus, run the experiment differently than it was described), I would agree with them and likewise have answered “No.” Thus, how one answers this question depends entirely on whether you correctly run the experiment as described, or not. Which you cannot tell if anyone has done merely from what their answer is. And this is what makes this thought experiment bad philosophy.

I’ll reiterate in the end that we can throw one bone to Nozick, which is that his intuition was correct that we do not find contextless pleasures to be comparable to contexted ones. People generally don’t want to just stimulate the pleasure centers of their brain; they want something more, because they can work out that it is far more satisfying, for example, to interact with real people than fake ones, and with explorable worlds than scripted ones. Which simply translates into Nozick’s vocabulary as “they find that more pleasurable.” Which means a machine that, as stipulated, can give them that pleasure, can’t be doing it by deception. Whereas any machine that can’t do that, won’t be preferred to the real world by any rationally informed decision-maker—simply because it can’t give them the pleasures they want, not because they pursue aims for reasons other than the pleasures they can derive from them.
