I will be answering in my next article the new questions posed in the 2020 iteration of the PhilPapers survey (a new development I just wrote about). But one of those new questions requires a separate article of its own: the one worded, “Experience machine (would you enter?): yes or no?” This refers to a poorly articulated thought experiment contrived and badly run by Robert Nozick. Philosophers have a tendency to hose their own thought experiments. This is no exception. So it is difficult to really use the data on this, because I don’t know if the PhilPapers respondents are doing the same thing Nozick did, misconceiving and thus incorrectly running the experiment, and thereby arriving at a different answer than if they had run the experiment correctly. So the results here are largely useless, not least because it is not explained why they answered as they did, which is the one thing Nozick was trying to discern.
The basic gist of the experience machine is to ask, if you could go live in a simverse where you could experience all the same pleasures as in the realverse, would you? That isn’t the way Nozick words it, but it distills what he is actually describing; and rewording it thus I believe would change people’s answers, yet without changing what Nozick actually meant (or had to mean, given the argument he tries to make from it), which illustrates how he is inadvertently manipulating results with semantics rather than more clearly describing the scenario he wants to explore to get an accurate and useful answer. Crucial to his experiment is that the “experience machine” can reproduce all pleasures of the real world (so that there is no pleasure-access reason to not plug into it). But this crashes into a tautology when pleasures are only caused by believing certain things are real.
Nozick would certainly try to salvage his intent by specifying, let’s say, that you would be alone in your simverse, and thus all pleasures deriving from interacting with other people there would be fake. But this would undermine his argument: if you know it will be fake (as the experiment requires that you do, certainly at the time of making the choice, as in all Magic Pill thought experiments, cf. my old discussion under “The Magic Pill Challenge”), you will be incapable of deriving the same pleasure from it, yet that is a required condition of the experiment. Hence the machine can’t produce “the same quality” of pleasures, and thus it won’t meet the one essential condition his experiment, and entire argument, requires. Because apart from the question of the reality of human interaction, we already know from VR machines today that at the level of sophistication Nozick’s machine is supposed to obtain, there is no pertinent difference between, for example, climbing a real or a virtual mountain. In both cases you are accomplishing a task by interacting with a presented environment to within your developed abilities.
Really the difference is even less substantive than that. Because there actually literally is no pertinent difference between, for example, “fake simverse sugar” and “realverse sugar,” because this is not a thought experiment: we actually are in that relevant simverse already. Human experience is a simulation constructed by the brain. “Sweetness” does not exist outside our minds; sugar molecules have no such property. It is entirely a fabricated experiential response. Likewise every other aspect of pleasure. And it’s actually impossible for it to be any other way. Experiential pleasure does not and cannot exist in the physical world but as the computed product of information processing: i.e. of an experience machine, in our case the brain. So in actual fact, we are already in Nozick’s “experience machine.”
This would mean the actual options for Nozick’s thought experiment really are: would you prefer to live outside your brain (which is physically and probably logically impossible: experience does not exist anywhere outside some “experience machine” or other) or inside it? No rationally informed person would answer anything other than “inside it, obviously.” Because the alternative is literally choosing to be dead—to unplug from all experiences whatever. Nozick did not realize (nor evidently have most philosophers answering this question realized) that he is simply describing our current situation: we live consciously only because we live inside an experience machine, of just exactly the sort he describes, and we could not live consciously any other way. Hence there is no pertinent difference between, for example, Los Angeles out here, and Los Angeles inside Grand Theft Auto: both have fixed, explorable parameters, from geography to resources to sights and venues; both can be interacted with and changed; and so on. So the only pertinent difference between a simverse and a realverse is merely one of real estate. Is it better there? That’s the only question that matters.
It is clear that Nozick intended his “experience machine” to be a deceptive device, whereby you aren’t even making decisions but being tricked into thinking you are, and people don’t exist there, you only think they do. And so on. But he doesn’t clearly frame the experiment in those terms—and couldn’t, because it would expose a fatal flaw in it, insofar as it’s supposed to prove something he wants about why people do things. So this is bad philosophy. Running the experiment correctly (the machine can reproduce any real-world pleasure), my answer for PhilPapers here would have been “yes,” a genuine simverse would be better real estate, so I’d certainly immigrate, along with 13% of other philosophers apparently, possibly the few who actually noticed what I did about all this; the other 76% are being snowed by Nozick’s faulty semantics, and really answering a different question than we are: whether they’d consent to be deceived into pleasurable cognitive states—as opposed to merely simulated ones, which is not the same thing. But Nozick’s description of the experiment never mentions deception; it hinges entirely on knowing what’s really happening and choosing it anyway. Assuming deception is happening (and thus being chosen) is to run the experiment wrong—or to run a different experiment than described.
The whole experiment should thus be trashed as framed and the actual questions Nozick wanted to answer should have been asked instead: do we prefer mere pleasure as an experience disconnected from what produces it, or does the pleasure we derive from something depend on our beliefs about it being factually true? This is a more interesting question, and more easily answered. Though it is properly a scientific question under the purview of psychology, and not really a question philosophers should claim to be able to answer on their own, there’s enough science to back an answer here: we do indeed derive pleasures from our cognition of circumstances that cannot be obtained without it.
Nozick wants to separate the mere experience of pleasure (like an arbitrary orgasm machine) from the cognitive side of understanding what is producing the pleasure (like sex with an actual person, with whom you are sharing an understanding of their mental states, desires, and pleasure-experiences), so as to argue that, because these are not one-to-one identical, our motivation to do things is not simply pleasure, and therefore “utilitarianism is false.” But this is a string of non-sequiturs. That the cognitive side of what causes a pleasure matters, does not replace pleasure itself as the goal; it merely constrains what things will cause us pleasure (or pleasures of certain kinds and degrees). So the first step in his reasoning fails. You can’t separate pleasure from cognitions about its cause; cognitions about its cause are a source of pleasure. And no form of utilitarianism disregards this fact. So the second step in his reasoning also fails.
Basically, as folk would say, “You can’t get there from here.”
To be clear at this point, I also find all this talk about “pleasures” bad form anyway. What we really prioritize are satisfaction states, which are themselves pleasurable states; but all pursuit of individual pleasures is derivative of this, not fundamental. We pursue pleasures in order to obtain satisfaction states (and there can of course be greater and lesser satisfaction states, hence states that are “more satisfying” than others). Thus “desire utilitarianism” is closer to a correct evaluation of human axiology than traditional utilitarianism, meaning Nozick isn’t even on the right path to any pertinent conclusions about anything here, even from the start. But we can set this aside here, because the same conclusions follow (or don’t) even if we replaced his “pleasures” with our “satisfaction states,” so for convenience I will continue in his idiom.
Like all bad philosophy, Nozick constructed his experiment to rationalize some conclusions he already started with and wanted to be true (in effect, that “pleasure is not our sole reason for doing things, therefore something else motivates us”), which are represented in his given reasons for “not” wanting to be in an experience machine:
- We supposedly want things to be real, not just pleasurable (e.g. we want to “actually” win at a game of cards, not merely feel or falsely remember that we did);
- We supposedly don’t want to just be floating in a tank or something (e.g. we want our physical bodies at the card table, or to be actually heroic; we don’t want to virtually be there, or to fake it);
- Simverses are more limited than realverses (e.g. there might be things in the realverse we can discover or do that weren’t thought of so as to be made possible in the simverse).
But (1) does not contradict the thesis that pleasure is what we seek, as it only ramifies what we will find pleasurable; (2) is demonstrably false (people enjoy “sitting at virtual tables” so much that an entire multi-billion-dollar industry thrives on it: we call them video games, in which we can genuinely “be” honest, clever, heroic, anything we like); and (3) is contradicted by his own thought experiment: he himself stated as a condition that there can be no pleasures accessible in the realverse not accessible in the simverse; in fact his entire experiment depends on that condition. So (3) cannot be a reason not to plug into the machine he described, as it by definition can never be an outcome of doing so. In my experience, Nozick is a rather bad philosopher (this isn’t the only example). Indeed, in case (3) he has yet again confused (a) a ramification of what we find pleasurable with (b) a reason other than pleasure to pursue something. So he simply isn’t really getting the conclusions he wants; yet, ironically, he is deceiving himself into thinking he has. He’s stuck in his own experience machine.
Of course Nozick may have wanted to specify instead an experiment where the main concern really was whether the pleasure alone mattered (the mere experience of it), such as we derive from human interactions (the only thing that would be meaningfully “absent” in his scenario, as our enjoyment of virtual worlds in video games now proves), or whether it mattered that the interactions be real. For example, as with any Magic Pill thought experiment, the notion is whether you would choose to live a lie if it could be guaranteed you’d never know it (though obviously you must know it at the time you choose this state, like Cypher in The Matrix when he asks Agent Smith for this very thing). That does not actually address Nozick’s interest, because if the “comparable pleasure” requires you to falsely believe you are interacting with real people, then his claim that our goal is not pleasure is not supported; all he has shown is that we do set pleasure as our goal, and can merely be tricked into it.
That is uninformative. Think of a romantic relationship, which brings you great pleasure and which you pursue for that very reason, but then you discover it was all a lie, and they were conning you. It does not follow that, therefore, you were not pursuing that romance for pleasure. That conclusion is a non sequitur. So, too, with “Nozick’s” experience machine. It simply can’t get the results he wants. And he fails to detect this, because he can’t even run his own experiment correctly: forgetting that his own description of the experiment rules out his third reason for refusing to plug in to it; not discovering from self-reflection that simulated experiences entail constructing the same explorable environments and the same opportunities for realizing the person you want to be as the real world provides, thereby ruling out his second reason for refusing to plug into it; and not realizing that cognition of a state is itself a source of pleasure, or that the two are not properly separable, eliminating every other reason for not wanting to plug into it. One does not pursue the cognition, if the pleasure does not result; and fooling someone into the cognition so as to produce the corresponding pleasure would be rejected as unpleasurable by anyone aware that is happening. Deceiving someone into feeling a pleasure does not demonstrate they pursue anything for reasons other than pleasure; to the contrary, it only demonstrates more assuredly that they pursue things for no other reason.
This holds even against Nozick’s shallow declaration that the momentary displeasure someone would feel upon choosing for themselves a life of being deceived would be outweighed (in utilitarian fashion) by the ensuing life full of fake pleasures. This forgets self-reflection is a thing. Think it through: you could be this person right now. So it is not the case that displeasure at choosing such a condition would be limited to when the choice was made. The moment you lived at all self-reflectively you would continue to be horrified by the prospect that everyone you know is a fake automaton and your entire life is a lie. As Gary Drescher points out in Good and Real, the only way to avoid being perpetually stuck in that dissatisfaction state (after already accounting for the scenario’s inherent improbability) is to assure yourself that you would never have chosen such a thing; which requires that you be the sort of person who wouldn’t. Ergo, you’d never choose such a condition. Hence, your answer to this scenario is, “No.”
The heart, I think, of Nozick’s intellectual failure here is to confuse pleasure with its causes. He wants to think that the causes matter more than the effect. But that isn’t the case. The causes only matter because of the effect; which is precisely the conclusion he is trying to refute. Yet his own experiment, properly conducted, only reinforces that conclusion; it doesn’t undermine it, as he mistakenly believes. There is really only one useful takeaway from all this, which gets at least somewhere near a point Nozick wants to make: that merely feeling pleasure, divorced from all other cognitive content, is not a sustainable human goal. We would, ultimately, find that dissatisfying, and thus it would cut us off from much more enjoyable satisfaction states. I discussed something like this recently in The Objective Value Cascade: if we were rationally informed of all accessible pleasure-states, and in one case all we would have is the contextless feeling of pleasure, while in the other case we would have context-dependent pleasures, we would work out at once that the latter is the preferable world (our future self there would win any argument with our future self in the other as to which future self we now would want then to be). I think this is sort of what Nozick wants to get as the answer. But he mistakenly leaps from that to “pleasure is not our only reason for doing things,” which is a non sequitur. He has confused “we will prefer more to less pleasurable states” with “we do not pursue pleasure-states.”
The error in his experiment thus turns, really, on the role of deception. Nozick can’t even superficially get to his conclusion without it. As I just noted, apart from deception, we are already in his experience machine: all pleasure is a virtual invention of an experience machine (our brain, presently). So that can’t get us to his conclusion. His conclusion thus depends on the assumption that something remains intolerably fake, and there really is only one thing that could be (as I just noted): fake human interaction, tricking us into thinking we are experiencing interactions with real people, when we aren’t. He mentions other things (like achievements, e.g. my example of “really” winning at poker vs. being tricked into thinking you have), but even after we set aside all the counter-examples disproving this (e.g. people actually do enjoy and thus pursue playing poker virtually, even against machines), the remaining cases still all boil down to the same analysis: once you become aware that it’s fake, the pleasure is negated, and once given the choice, you would not choose the fake option; because the real option is more pleasurable. And you know this, so you know you can’t have chosen it in the past, and therefore you won’t choose it in future. That Nozick can conceive of tricking people into not knowing this, does not get him the conclusion that pleasure is not why we do things. All it does is reveal that we can produce pleasure by deception; but it still remains the reason anyone is doing anything.
The convoluted way Nozick is trying to get around this inescapable revelation is by contriving a Magic Pill scenario, in effect asking whether you would choose now to be deceived in the future, e.g. tricked into thinking someone genuinely loves you rather than is conning you, merely to achieve the corresponding pleasure-states of believing someone genuinely loves you. No rationally informed person would choose to do that, and for the quite simple reason that it displeases them to think of themselves now being in that state in the future. And this is not just experienced upon choosing, as Nozick incorrectly asserts; as I just explained, you will be existentially confronting this possibility, and its undesirability, every day of your life. Thus pleasure is still defining the choice.
Bad philosophy comes in many forms. Here, we see it characterized by: (1) reliance on fallacious and self-contradictory reasoning (rather than carefully burn-testing your argument for such, and thus detecting and purging any such components); (2) not carrying out a thought experiment (especially one’s own) as actually described, or not describing the experiment you actually want to run; and (3) starting with a pre-determined conclusion, and contriving an elaborate argument by which to rationalize it, rather than doing what we should always do: trying, genuinely and sincerely and competently, to prove your assumptions false, and only having confidence in those assumptions when that fails (see Advice on Probabilistic Reasoning).
For instance, here, Nozick wants to think that because cognitive content matters to whether something is pleasurable (which is true), therefore something other than pleasure is what we actually pursue (which does not follow). But this can be tested, by simply removing that single variable from the control case: if you could choose between an unknowingly-fake love affair that gave you pleasure and a genuine love affair that didn’t, would you choose the latter? The rationally informed answer is always going to be no. Someone might answer yes, by thinking “at least in the genuine case I’ll have some genuine pleasures,” but then they’d be doing the experiment wrong, because the stated condition rules out that outcome. You are supposed to be comparing two conditions whereby the second contains no produced pleasures, not “some.” Bad philosophy. Good philosophy would apprehend this and thus correctly run the experiment. And its result would disprove the null hypothesis that “we don’t pursue things for pleasure.” This would not be the result Nozick wants. But truth does not care what we want.
More to the point of getting at least a usable conclusion in this subject: if someone were posed the binary options “an unknowingly-fake love affair that gave you pleasure or a genuine love affair that didn’t,” most people would apprehend an excluded middle here: why can’t we have a third option, a genuine love affair that pleases us? (Or any other genuine state that does.) Obviously that’s the thing someone would choose over both other options, if it were available. And there is no other option left to consider in the possibility-space (e.g. “a genuine love affair that made you miserable” would still satisfy condition two, “a genuine love affair that didn’t give you pleasure,” as would “a genuine love affair that brought you neither pleasure nor misery”). But this still disproves the null: the reason someone chooses “a genuine love affair that pleases us” over “an unknowingly-fake love affair that pleases us” is that our cognition of the difference brings us pleasure. It does so not only when we choose it, but also every moment we continue to enjoy the product of that choice. Because the only reason it brings us pleasure is our knowledge of its genuineness.
As I wrote once with regard to a different Magic Pill thought experiment:
Just ask anyone which person they would rather be right now: (A) a mass murderer without a memory of it, or (B) someone with reasonably accurate memories of who they are and what they’ve done. Would it disturb them to find out they were (A)? Yes. So why would they choose to be that person? Indeed, when would they ever?
The same follows for Nozick’s machine. If what we are really talking about is not a machine that merely produces pleasure without context, or that creates actual contexts similar to those in the real world (like video games aim to do), but a machine that deceives us into experiencing a pleasure we would not experience if we knew the truth (a machine that convincingly lies to us about the contexts we are in), the question then is no longer whether we pursue objects for pleasure, but whether we would be pleased or not to be deceived into pleasure-experiences (now or ever). The answer to that question is: no, this would not please us; hence we would not choose it. This is why, I suspect, 76% of philosophers did indeed answer “No” to the question. But that doesn’t get us to Nozick’s conclusion that pleasure is not what we pursue objects for. And insofar as we see it that way (and thus run the experiment differently than it was described), I would agree with them and likewise have answered “No.” Thus, how one answers this question depends entirely on whether you correctly run the experiment as described, or not. Which you cannot tell anyone has done merely from what their answer is. And this is what makes this thought experiment bad philosophy.
I’ll reiterate in the end that we can throw one bone to Nozick, which is that his intuition was correct that we do not find contextless pleasures to be comparable to contexted ones. People generally don’t want to just stimulate the pleasure centers of their brain; they want something more, because they can work out that it is far more satisfying, for example, to interact with real people than fake ones, and with explorable worlds than scripted ones. Which simply translates into Nozick’s vocabulary as “they find that more pleasurable.” Which means a machine that, as stipulated, can give them that pleasure, can’t be doing it by deception. Whereas any machine that can’t do that, won’t be preferred to the real world by any rationally informed decision-maker—simply because it can’t give them the pleasures they want, not because they pursue aims for reasons other than the pleasures they can derive from them.
Good article, and I agree Nozick’s argument doesn’t work. But is there a different reasonable argument that you’re aware of that adequately supports the conclusion that pleasure is not our sole reason for doing things? Or is motivational hedonism our only true reason for doing anything?
I can think of one thought experiment where personal pleasure/pain avoidance might not be the only motivation: Imagine a Sophie’s Choice scenario in which a parent must choose one child to live, and one to die. However, in this case, child A brings the parent more personal pleasure, but child B has a better chance of accomplishing great things (as a scientist, artist, etc.). Could you see a case where the parent would select child B to survive, even though it would bring that parent greater personal grief in the short and long run to lose child A? Or would every parent always choose to save child A for their own pleasure motivation?
I am not aware of any argument that concludes without fallacy from actual facts that satisfaction-states are not the sole reason anyone does anything. I am not sure such a conclusion is even logically possible. It is inherently self-refuting to suggest anyone would pursue a less satisfying state over a more satisfying one; even people who find satisfaction in dissatisfaction, are in that very fact pursuing the state most satisfying to them.
If we swap “satisfaction states” out for “pleasures” maybe you can get to a different conclusion, by trading on some difference between the two you have contrived in defining them, but that would be little more than a semantic outcome. You can make anything into anything else by simply redefining every word the way you need to get the result you want. But in the end you can’t change what things are by changing what you call them. This is why I added the caveat that I don’t think the word “pleasure” is well-chosen here. It’s too vague and variable in meaning to carry a coherent conclusion.
Case in point:
In your scenario, what possible reason would the imagined Sophie have to choose B other than that she deemed that outcome more pleasing (more satisfying) to her than the other? In other words, she would have to believe (falsely or not doesn’t matter for the point) that she will be more satisfied knowing B’s life outcome has occurred than she will suffer from the loss of A; if she didn’t, she would have no reason to prefer it (and with no reason to prefer it, no motive to ever choose it).
In short, you cannot say she would be well-motivated to choose the least satisfying outcome. To the contrary, from one choice to the other in the hypothetical all we are doing is changing what she deems more satisfying. So what she does is still in the end what she deems more satisfying; indeed, that appears in fact to be tautologically the case. To desire a thing more just is to believe it will satisfy you more. And choosing a thing just is the act of desiring it more.
There is a side problem with such scenarios however, which is that the choice itself can be self-rationalizing, i.e. the grief at losing A will actually be reduced by commitment to the outcome of B (e.g. by repeating the very consolation that motivated the choice in the first place). In other words, the choice itself mediates its own differential satisfaction.
This is why counterfactuals require more careful analysis than most people think. For instance, you have to remember only the differential matters. The negative is not “the grief” at losing A, because there would also be grief at losing B on the alternative choice; there is only an actual difference if the grief at losing A would be greater than the grief at losing B. But why would the grief at preventing the “great things” of B be “less” than the grief of losing some more direct enjoyments of A? Particularly as they could be replaced with other children or friends…or even a renewed relationship with B.
Remove one thing in a counterfactual, and something tends to move into its place. It is rarely the case that you remove a piece, and the causal spot it occupied stays empty. This comes down to the principle of opportunity cost; and in that respect there are more variables than just “A” and “B” in cases like this. You would lose the company of either, either way; so if all that differs is that one somehow will bring you more goods of some kind than the other, how is it that those goods can’t simply be made up in some other way? And indeed, why would someone not then fill that gap with other goods, quite deliberately as a consequence of the original choice? And the same is the case the other way around (if you choose A over B). So it isn’t simply “lose A or lose B.” The outcomes are much closer in merit after all transformations are considered.
No matter what you try to do, all of these considerations seem always to end at the same foundation: navigating to the most satisfying choice available. In every case, one is always simply weighing different degrees of pleasure, and only choosing at random between them when indeed their degrees are actually equal (or are equal “so far as you know”). That “B will do great things and I care more about that than enjoying the company of A over B” is simply another description of what pleases someone. If it didn’t please them, then they wouldn’t care more about it, and so wouldn’t choose it.
Hence I don’t think there is any logically possible way to escape some fundamental hedonism as the only existing motivator. Every attempt to get around it just ends up inserting new sources of satisfaction; it never gets away from satisfaction itself being the only actual motivator.
I find Nozick to have the same kind of astonishingly dull (given his obvious intelligence and reach of thought) set of whoppers of conclusions as right-wing libertarian philosophers generally do.
In this case, the way I think about it is this. What if I found out, right now, that everyone around me was a p-zombie (assuming that p-zombies can happen, which I agree with you is logically flawed, but let’s say that they were very good automata this whole time)? That would suck. And if I found out the universe was not real? That would suck. But I could hypothetically find that out now. What would suck then would be the fact that I discovered something about my experiences. The pill doesn’t change that. Worse, I think a rational person would conclude (indeed, doing this is a good way of not being constantly so angry) that I felt the good times sincerely, and if I had no way of knowing I was being duped it was okay that I was, so while I should change my behavior now it’s okay that I enjoyed myself before. (And how I’d change my behavior is… unclear).
In other words, if someone was in this machine and found out that a certain thing they could have done in the real world they couldn’t do here, and they didn’t know that the real world would allow it, they would just have to assume that it wasn’t a possibility. I don’t know what’s possible to experience. I can’t lose out on things I don’t know about. Again, these conditions accrue to the real world, so they can’t really show anything.
I haven’t read that part of Nozick, but I’m guessing he didn’t really put the boots to his thought experiment by, say, imagining if people might pick this machine if they knew that it had most of the obtainable pleasures in our world and a ton more. If many would, and I suspect the answer is “Yes” (especially if we can stipulate that leaving doesn’t leave loved ones behind, which is another pretty important part of the experience machine that I suspect he hid from the calculation), then all he’s shown is that the utility calculation people do is complicated, not that pleasure (or more accurately, a satisfaction state) isn’t the intended outcome of the calculation.
There’s another rub. Let’s say you tell me that you can give me a pill where I can be a superhero in another world protecting people. My first thought isn’t “Are those people real?” That only dictates whether I think about it as a duty or a game. And it will be more meaningful to me, though perhaps a lot less fun, if they are real people. (Which, again, shows another way this thought experiment is hosed.) My first thought is “Did you just poof those people into existence, you crazy person?” That is, does the machine take me to a world that exists (in which case going to protect things there that can suffer becomes morally obligatory), or does it make a new one? If it makes a new one, then most non-psychopaths will tell you “No” if there is the tiniest chance the sentient beings in there are remotely real (that is, even if they’re not as sapient as us they may experience pain, so we are again back to p-zombies), because you just blinked into existence life forms that are suffering. I certainly won’t have you make stuff suffer for me so I can then clean it up.
Good thought experiments, like the Trolley Problem, force you to answer a rarefied question where the rules are coherent even if unpleasant. In bad ones, people are actually debating the rules of the experiment. This is a bad one.
It’s another in a long line of examples of the failure to recognize OR reconcile subconscious (SC) urges with conscious thought or mind (CM). All social scientists fail to do this, as do most humans.
I would argue both that the social sciences, even behavioral economics (where economists had for ideological reasons been committed to rational choice perspectives for so long), have actually had many members who are deeply attentive to non-conscious aspects of behavior and cognition, and that that isn’t the problem here. Notice how my point against Nozick proceeded entirely from asking basic questions about the thought experiment, questions that actually reveal the problem with it. Those were all consciously held ideas. It is wholly possible that subconscious elements refute the experience machine in other ways (maybe we need to be subconsciously convinced that we are interacting in real environments because we are predisposed to recognize and be concerned about deception and fraud, which is a prerequisite for our enjoying anything; but that isn’t an indication that satisfaction isn’t our goal), but it is also refuted by conscious thought.
Do any scholarly works exist that discuss whether an existence of only pleasure, even contextualized pleasure, is, for lack of a better word, rational? Meaning, can we experience pleasure without having some absence of pleasure? Can pleasure-states exist without anti-pleasure states?
This may be off topic but it’s where my mind followed.
By asking whether a pleasure-only state is “rational” (and I understand you were searching for the right word there), you might be pursuing the wrong path. There is nothing contradictory (logically) about a pleasure-only state. However, there are certainly biological limitations along those lines. And there are certainly psychological and pharmacological studies that address that idea (though probably not to the extreme you may be going for, for ethical and practical reasons).
Joe, I am not sure what your question is.
If you mean, can we experience pleasure and displeasure at the same time, I should think so. Just think of someone with a tooth-ache eating a delicious meal. One might then evaluate the net value as the differential between them: someone might then stop eating the meal as the pain overwhelms, i.e. exceeds, the pleasure, or continue the meal because the reverse is the case; e.g. think of a mild tooth-ache vs. a severe one.
But I don’t see the connection of this observation to my article.
Or if you mean the opposite, whether any pleasure experienced entails the absence of other or greater pleasures, then also I should think yes; and still also don’t see the connection.
Or if you mean to ask whether experiencing pleasure “is rational,” then that’s a category error (experiencing is experiencing; it is not a logical relation or inference).
Or if you mean to ask whether enjoying pleasure (as distinct from merely experiencing it) “is rational,” that would depend on what you mean by “rational.”
In the usual sense, any behavior “is rational” that comports with reality and conduces to the agent’s overall best interests (which include moral interests), insofar as the behavior really is either what one ought do in such circumstances or is among behaviors equal in that degree but otherwise interchangeable with each other. Which all gets into one’s analysis of imperative propositions; I cover the logic of these cases in my chapter on moral theory in The End of Christianity.
Or if you mean to ask whether it is even possible to experience contextless pleasures, I would say for the purposes of the distinction made in the article the answer is yes. Think of merely “riding” inside someone else’s mind as they climb a cliff vs. actually climbing the cliff yourself. Both can entail pleasures (the former even becomes an actual addictive drug in the films Brainstorm and Strange Days), but a person for whom doing it themselves is the source of the pleasure would thereby observe the latter is the greater pleasure, and one which the former deprives them of (think of the ending of Being John Malkovich as an illustrative example of how that, generalized, becomes hell).
The more usual example used in the literature is an orgasm machine or sex doll vs. actually having sex with a real person (hence the example I included). In the former case, the apparatus (the context) is irrelevant to the pleasure (one does not care in such a case what is causing the pleasure, only that the pleasure is being caused); whereas in the latter case, the apparatus (the context) is essential to the pleasure (it is the very thing from which one is deriving the pleasure).
That is why to get the latter without the actual context requires deception; whereas the former instance does not; one does not have to be “tricked” into “not knowing” it’s just an orgasm stimulator causing the result, and still one can enjoy the resulting pleasure—it just will usually be deemed a hollow and thus insufficient pleasure compared to the alternative. In short, real sex is more fun. And that is why people prefer it.
Contextualized Pleasure > Contextless Pleasure. I understand this.
What if our context is only other pleasure states?
To try and better formulate my question: Can someone just always be in a pleasure state? Would they really appreciate the pleasure without some context of non-pleasure at some other point in life? In an existence like this, I could imagine the less pleasurable states eventually seeming like non-pleasure states, and only the most euphoric of states being even recognizable as pleasure.
To use the orgasm machine example, it would seem that at some point, if you were just constantly orgasming, you would start to experience it as torture. And even if these were many different contextualized pleasures, delivered constantly with no counterbalance, it may end up much the same?
I grew up being taught that someday in the future we would live in a Paradise Earth where God would provide everything for us and there wouldn’t be any sadness. Everyone would just be happy all the time. As I’ve taken some time to analyze this concept, something about it just seems empty. A life with no struggle and no sadness at all, although happy, doesn’t feel like it would be as fulfilling.
Again sorry if this is off topic or just doesn’t logically follow your post.
This idea has a lot of intuitive pull, but I don’t think it’s particularly compelling, either psychologically or philosophically.
Philosophically, when I think about what happiness means, it’s not some kind of state that comes about from some kind of conscious measurement. I’m not holding it up against something. A true state of pleasant serenity seems to be one where my mind isn’t comparing anything at all.
Yes, of course I feel a great moment of relief when I’m done with some great trial or tribulation… usually. But sometimes that’s an annoyance that sticks around even when it’s done. We as humans can easily resent the fact that a day was ruined even after the event that ruined it has run its course.
Obviously our brain will do things like make us really appreciate a meal when we are incredibly hungry… but we get fairly close to that level of enjoyment from a supremely well-prepared meal when we are merely normally hungry, especially if we also have good company.
Psychologically, if it were true that true happiness required having experienced suffering, we would expect to see some kind of correlation between, say, traumatization and long-term happiness, or some kind of indication that happiness and suffering tend to both be directly correlated (which would mean that you would see, when graphing total life satisfaction, that the people with the lowest lows would also see the highest highs).
But… while satisfaction research can’t get that detailed or longitudinal, what we do have shows that that’s just not how anything works. Obviously, people with extreme trauma tend to live lives that are materially worse off than those who haven’t had it, even given how much better we are at treating trauma these days (for those people lucky enough to get that kind of care). Lots of sources of suffering seem to stick around and cause permanent reductions. So, at the most, we can clearly see that the kinds of suffering that may help us understand happiness better by contrast would need to be the ones that can’t cause permanent mental or physical harm or discomfort.
We also know that there is good reason to suspect that the feeling that we didn’t understand how good we had it until a bad event is a cognitive illusion. I highly recommend Gilbert’s Stumbling On Happiness. The happiness literature has surely advanced since he wrote it, but he still makes fairly clear that happiness is a complex thing.

For one, I think his evidence shows pretty clearly what REBT and Buddhist thought have shown: our lack of happiness is rarely a result of bad events alone, or of too few good events, but of our own relation to our lives and our cognitive structure around our present state. We get unhappy when our mind shows us a future that we think will be unhappy, even when that isn’t a likely future: our brains model the future with consistent cognitive biases. If we can learn not to need to be happy with an imagined future state and instead be satisfied in the present, that really doesn’t need to be related to any past or future suffering or happiness: that satisfaction can stand on its own.

In any case, we also know that our brain will take any bad thing of great magnitude that happens to us and, whenever possible, give us rationalizations about how “it was the best thing that ever happened to us”, precisely because that’s the kind of hack one would put into a brain to make sure that awful events don’t cause us to be unable to have future happiness.
One can also look at cognitive biases like duration neglect and see that the way that we process negative events isn’t some objective accounting of how we felt at each moment but a very inaccurate gestalt.
So… I am deeply skeptical of the idea that we need dissatisfaction states to model satisfaction states.
Joe, your comment definitely seems on point to me. And important to work through here.
What Fred already said in reply is all correct, IMO.
But this is what I’d add:
Taking your question literally, it is factually never the case that “our context is only other pleasure states.” If your mind contained only pleasure states, then you wouldn’t exist to appreciate them. Remember, “you” are much more than a pleasure state, and “you” are always an inalienable context of anything you experience. And though it might be possible for only you to exist (ontological solipsism), so that “you” are the only context for your experiences, we have ample evidence that’s not the case. You also exist in a vast real-world context, which you also can’t get away from. So there is never any such thing as “our context is only other pleasure states.” And this matters because most (and all the greatest) satisfaction states come from our understanding of the larger context, which plays a key part in your orgasm machine example I’ll get to shortly.
But taking your question figuratively, i.e. where all that other context exists and is being acknowledged but what you mean to ask is a situation where, ceteris paribus, we never experience non-pleasure states, there are two points to observe.
First, there would still be degrees of pleasure state to navigate, and the evidence is conclusive that that would not somehow automatically recalibrate low pleasure states as displeasure states. It can do (for example, someone who experiences good wine after liking bad wine may grow to dislike bad wine thereafter), but doesn’t have to (for example, a rich person who grows too accustomed to luxury to enjoy “slumming it” could actually learn to enjoy the latter again, if they are willing to change the way they actively experience it, essentially changing how they think about things).
Second, the science seems pretty much to establish that we don’t need displeasure states to enjoy pleasure states. Fred covered that adequately already. But it’s quite evident that, given a choice between a life with no displeasure states and abundant pleasure states, and a life with the same quantity and degrees of pleasure states intermixed with displeasure states, ceteris paribus every rational agent would choose the former over the latter.
One might call up exceptions for the few cases where someone derives pleasure from certain displeasure states; but most displeasure states don’t produce that effect, and even if we rearranged the world so that they did, we’d just be tautologically back in a world with no actual displeasure states, and thus no dissatisfaction. For if a state is producing pleasure dependent on some discomfort—think of the pleasure of exhaustion after satisfying effort, or sexual masochists, and so on—it’s really just a pleasure state, full stop.
This is one of the reasons I prefer the vocabulary of satisfaction and dissatisfaction states; “pleasure” creates too many semantic conflations in practice, and what we are really concerned about is satisfaction anyway, not pleasure; the latter is just one means to the former, and given a choice between them, everyone would choose satisfaction over mere pleasure. So it’s more fundamental. And as I note in my article, it better captures what Nozick wants to be talking about.
So, “To use the orgasm machine example, it would seem at some point if you were just constantly orgasming you would start to experience it as torture.” This I think is true (even as a matter of scientific fact). So the analysis has to be of why. And this gets us to the importance of context.
On the one hand, an actual person’s brain is programmed for evolutionary reasons to grow in displeasure from repeated singular pleasure states (for computational and survival reasons, that’s a lethal failure mode equivalent to the fate of Buridan’s Ass). In other words, constantly orgasming would progress into torture, because we are programmed to want to vary our activities and interests and not become fatally paralyzed in some minute obsession.
Thus, as a matter of contingent fact, humans just aren’t built to find satisfaction in such conditions; and no one could be who has decided survival and accomplishment, including love and friendship and the acquisition and application of knowledge—knowing and creating—are essential to satisfaction, because then that circumstance would again be a failure mode. Which is a context realization, i.e. knowledge of context causes that understanding and desire (see The Objective Value Cascade).
Which is why no one is likely to rewire their mind differently on that score, at least not informedly. But it’s in principle possible to. Hence…
On the other hand, what if we could rewrite the programming in our brain so that we never have this outcome, and orgasm machines, for example, become perpetually satisfying—in effect, choosing to be Buridan’s Ass?
This is where context-dependent satisfaction states become key to why no rationally informed person would do that. They would effectively be killing themselves, by locking themselves in an unevolving stasis in which no progress, accomplishment, knowledge, love, friendship, anything would ever be realized. It would just be repetitious pointless pleasure.
While it would be possible to erase one’s ability to know that (that’s what we’d have to do to put ourselves in a “Desire to Be Buridan’s Ass” mode), someone still possessed of the ability to know that would not choose that mode for that very reason: they’d know more and greater satisfaction states can be realized, and appreciated, outside that stasis. (This is more or less the point of my article on The Objective Value Cascade.)
BTW, this scenario is actually realized in fiction: an important scene in the film Brainstorm shows a man unwisely rigging himself (trapping himself) in an endless-orgasm machine, and somewhat depicts why he is better off outside of it (once rescued), once he is able to comprehend the different life states available to him. (Although the film depicts him benefiting from the experience, he still chooses not to resume it. And although the film ends with a pro-supernatural-afterlife message, this doesn’t detract from the philosophy worked out in the rest of the film.)
Either way, the conclusion is not that we need displeasure states to enjoy pleasure states, but that we need variation of pleasure states (and intelligent control over their navigation) to unlock access to greater satisfaction states. In other words, the reason a perpetual orgasm machine is objectively dissatisfying is not because we need to experience dissatisfaction to appreciate satisfaction, but because it locks us out of far more satisfying states. And though it is technically possible to trap someone in such a state (by wiring them so that they never want to leave it), the fact that that is tantamount to killing them as a person (the outcome, of a pointless perpetual singular sensation absent anything else constituting life, barely differs from death) is reason enough not to trap oneself in such a state (and why evolution has already counter-wired us so that we don’t).
Which then gets us to your last question:
Note that if it were the case that some state felt empty, then you are tautologically describing it as dissatisfying. It therefore could not also be a satisfaction state at all, much less a maximal one. So one should analyze why it would be dissatisfying. Good philosophy requires first working up from particulars to generalizations and abstractions, not the other way around. So one should isolate specific particular things that would be dissatisfying, and experiment with making adjustments to remove them, to see what happens in your conceptual space.
For example, suppose you look for what would be dissatisfying and, among various things, you identify “I can never lose at a game” (everything is rigged so you always win) as among them. This would produce the context awareness that winning is then pointless; it signifies nothing, and therefore there is no reason to derive any satisfaction from it. Here it is not that you need the availability of a dissatisfaction state, but that you need the context to support the satisfaction state (because the context is the very thing that produces that satisfaction state).
So, to illustrate from my own life, I actually still enjoy playing games that I lose. Which is a form of attitudinal change. Of course I enjoy them even more when I win (though less when I always win, as that makes them too easy to be challenging, and unchallenging games are not satisfying to play), but here we are now talking about navigating a spectrum of satisfaction states. No net dissatisfaction state is needed to do this.
And I think this is the analysis that would come down for any other example. For example, I suspect sadness is actually in many (but not most) cases a satisfaction state. It can be satisfying to enjoy an occasional state of melancholy, provided it’s not too severe (actually and contextually—e.g. not severely felt and not caused by severe loss, which in a well regulated emotional system would be the same thing), and in that sense it might be dissatisfying to never experience such a thing.
At the same time, even when sadness is a dissatisfaction state, one might still need it to achieve other satisfaction states (e.g. we can learn from tragedy etc.), but this isn’t necessarily true, only contingently true (there are other ways to learn the same things; thus, that we can make lemonade out of lemons doesn’t mean we can’t just use a Star Trek replicator to make lemonade—if the replicator is available; hence the difference between necessary and contingent needs).
Even insofar as there might be some satisfaction states that depend on dissatisfaction states (e.g. to enjoy recovering from sadness requires first experiencing the dissatisfaction-state of sadness), it does not automatically follow that we need them (there are plenty of greater satisfaction states to pursue that require no such context), so even those exception cases would only sit in our life repertoire as minor options, not something we “need.”
And in any event, we’d not choose a life solely consumed by only those satisfactions anyway. There are many other greater satisfaction states we’d definitely want to be sure to include. And no rationally informed person would want severe versions of these dissatisfaction-satisfaction sequences if they could help it (hence the “human happiness is impossible without the mass torture and rape of children” line some Christian apologists will actually declare is most definitely bullshit).
So in the end, I would say we almost certainly need life to be varying and challenging and offer opportunities for accomplishment (progress, knowledge, creation, friendship, and so on), but this does not require anything to be net dissatisfying. Much less severely so.
To add onto Richard’s examples:
Some people may find that, after having had only cheap wine, they’re ruined for the good stuff. To some extent, though, one can argue that this is a result of ignorance; they may not have had the palate to recognize why the cheap stuff was actually not as pleasant an experience as it may have felt at first glance. Our pleasure states are contextual in the sense that we learn to evaluate our pleasures and experiences in the light of a growing body of knowledge.
But there are actually lots of times where that doesn’t apply, and in fact cases where the expanded knowledge can even help you. Some people think of a Big Mac not as a bad burger but as a good Big Mac, and view it as a separate experience. Cheaper alcohol can sometimes have a lot to recommend it: it may be cheaper because it’s more overt in a particular flavor profile, but you may like that bluntness. Certainly, for a lot of applications, the cheaper alcohol is going to be better even putting aside cost concerns. A cheap champagne will probably be a better cocktail topper than an expensive one, because the expensive one’s strengths of complex notes are just going to disappear. A really great example is pizza: even when you’ve had a really good wood-oven pizza or a great Chicago or Detroit-style, you may still end up craving a Little Caesar’s. An American who discovers a margherita pizza and other more sophisticated combinations doesn’t necessarily stop liking a classic pepperoni or Hawaiian.
The very fact that you can’t just pile good stuff on top of each other and get something good is itself illustrative.
To use examples from intoxication: many people when they’re high on weed may naively think that they’re going to enjoy a really great meal, but find that it’s not as good as they were hoping. There’s a reason why snacks for getting stoned tend to be really blunt (excuse the pun), your classic Cheetos and french fries and cereal: the experience tends to elevate really simple flavors. Of course, there are probably people who have had different experiences, which just goes to show how heterogeneous and complex the issue is.
Richard points to the example of sadness. I’d add on fear. Horror fans want to be afraid. They don’t usually want to be mortally afraid constantly, but they want to enjoy the state of heightened arousal alongside the mental exploration of the macabre. That’s a different state than “happy”, but it’s pleasurable to them.
So the “Always happy” state is nightmarish in part for the same reason that the “All my meals are vanilla ice cream!” fantasy is also nightmarish. “Happy” is a nice base state, but people crave some degree of diversity in experience. So the correct thought experiment to run is, “Would I rather have a life with the kind of challenges and traumas that lead to actual extreme inescapable unpleasantness, or would I rather have a life with my preferred mix of emotional states?” Once you phrase it that way, the latter becomes pretty clearly the preference. Which indicates that what we want isn’t one simple kind of satisfaction but a complex set of them. And the happiness research shows that the value of novelty is sometimes overstated. People often don’t mind having a few preferred options. How many people do you know who go to a restaurant and get the same thing almost every time?
As Richard points out, we often like to be in a game where losing is a very real option. And while most people tend to prefer to win rather than lose, there are many, many cases where a really great, well-fought match, where everyone got to make really exciting plays and we happened to lose, is far preferable to the alternative. Sometimes a big loss can even reinvigorate us and get us to fight again with some hunger. Last night I was playing Coup with friends, and while I may have had less of a good time had I not won a fair share of matches, some of the most satisfying matches were the ones where I lost, just because other people had bluffed so impressively or calculated their moves so cleverly.
In my personal experience, the state of quiet serenity that I experience in meditation is one I wouldn’t mind never leaving, or at most leaving very rarely.
I think that, when you control for the fact that some negative experiences can give greater context to our pleasures, but so can many positive experiences (like when you watch a favorite movie again after some years and you see an entirely new interpretive framework, or catch some character or plot beat that you missed before, and it makes you appreciate the movie in a whole new light), a lot of the apparent benefit of negative states evaporates.
//Human experience is a simulation constructed by the brain. “Sweetness” does not exist outside our minds; sugar molecules have no such property. It is entirely a fabricated experiential response.//
I beg to differ. Colours, sounds, odours, and yes, even tastes exist out there. They are constitutive of the external world.
Sorry, they aren’t.
You evidently need to catch up on the science here.
Please explain how sugar tastes without a tongue and nose.
To be clear, neither tongue nor nose has actually anything to do with “how sugar tastes.” Those are just lattices of reaction cells, to inform the brain what molecule is present. “How sugar tastes” instead has something to do with computations performed deep in the brain. We don’t know how that works yet, though we do know where those computations occur in the brain (and can in principle remove them, or stimulate them in the absence of any sugar molecules).
And we have good evidential reason to believe the taste of sugar is entirely dependent on those computations being made and integrated with an active world model. So there is no reason to believe “the taste of sugar” exists anywhere in the universe, potentially or actually, other than as the output of such a computation, potential or actual.
It seems conceivable though that an experience machine could directly instill in someone the sort of profound satisfaction associated with complex context-dependent satisfaction states with no need for the states themselves — if not even enormously more satisfying states. After all, as Carrier points out, our brains necessarily mediate between reality and our perceptions and emotions — if we accept the premise of the experience-machine thought experiment that emotions can be detached completely from reality (however much modification to our existing brains this requires), why would sophisticated states of simulated reality be necessary for greater levels of satisfaction?
I don’t follow what you mean.
There are only two pathways:
(1) Lying (creating states through deception), which no rationally informed person would choose. Because they would always prefer to know they are in a simulation and thus what its rules and opportunities are, because greater satisfaction can be achieved through knowledge rather than aimless wandering, by the basic principle that an informed agent can pursue all goals more quickly and effectively than by a drunkard’s walk.
(2) Not lying (creating states honestly), which every rationally informed person would choose. Because it is possible to know you are in a sim and more reliably achieve maximal satisfaction states (see How Not to Live in Zardoz and Ten Ways the World Would Be Different If God Existed).
It is unclear what you are advising other than this. “I feel good but I’m all alone and not doing or learning anything” cannot even in principle achieve any maximal satisfaction state, but can only inevitably lead to mind-numbing horror, as one realizes the pointlessness of mere stimulus. You can remove the feeling of being trapped (unable to do or learn anything) and aimless (alone and without any goals or diversity of experience) only by lying (deceiving the brain/mind so as not to notice these objective facts). Accordingly, no rationally informed being would choose such a lonely dead-state (see The Objective Value Cascade).
To me it isn’t obvious that experiencing displeasure from solitude, lack of purpose, lack of stimulation/variety of stimulation, etc. is more or less artificial than the opposite — being bothered by these things and feeling satisfied by their opposite is conducive to survival in our world, but theoretically your survival would be ensured by a sufficiently capable machine. How could we say then that someone who’s happy/satisfied doing something that seems monotonous, unstimulating, or downright unpleasant, in or out of an experience machine, is deceived rather than just “differently-wired”?
That is the question I answer in Cascade.
Once you posit the condition (a rationally informed agent), the conclusion follows that any such agent can work out that they would be objectively better off with a richer satisfaction condition than “doing and being nothing.” In short, a rationally informed agent will always be able to work out that living in a perpetual orgasm tube is too objectively pointless to be satisfying. It can produce mental pleasure, but only intellectual horror.
The only way to prevent this outcome (other than deception) is lobotomy. Which renders the thing in the tube no longer a person capable of contemplating their condition.
This is the part I have trouble understanding — that there’s an inherent link between intellectual/sophisticated stimulation and increased satisfaction. If humans have a craving for sophisticated and novel stimulation that would cause them to become bored with a single, simple repeated experience, it seems they’d have an interest in switching this craving off if they could do so without endangering themselves — and in the world as we know it people often perform these sorts of situational “self-lobotomies”: drinking or taking drugs for instance, engaging in meditative activities like knitting or gardening, or outright meditating. These can even be as simple as consciously suppressing information — e.g. not thinking about world hunger while watching a movie, or not thinking about the litter box in the corner while eating a sandwich.
If an experience machine can help people become happier by similar means, how would we conclude that it’s harming people by destroying (parts of) their minds rather than healing them by removing unneeded cravings/intolerances?
Correct. You are almost getting it.
So you can lobotomize people (remove all their interest in anything substantial) and thus remove any inclination to prefer a better state. But if you allow them the ability (and that means adequate rationality and information) to evaluate whether they would choose to remain lobotomized or de-lobotomized if they can compare outcomes hypothetically, rational persons will never choose the lobotomized state. That’s the argument I develop in Cascade.
When comparing more substantively achieved satisfaction states and vegetable states, there are objective (not just subjectively preferred) attractions to the former over the latter. Yes, you can block someone from realizing that by lobotomizing them (removing all ability to discover this, or the motivation to use it). But that is not what we are talking about: we are talking about what people would choose if rationally informed; not what they would choose if you surgically removed from them all possibility of evaluating available outcome states.
Make someone only want to be a vegetable, and they will only want to be a vegetable. That is self-evident to the point of being trivial. But if you ask someone capable of rationally considering alternatives whether they want to be a vegetable, no rationally informed person will say yes. This does require a set of motivations and desires—like the desire to rationally consider alternative available states before being able to directly experience them, which pretty much defines the entire operational function of human consciousness.
So if you remove “the desire to rationally consider alternative available states before being able to directly experience them” you are essentially killing the person and replacing them with a vegetable. That the resulting vegetable will be content with that is not relevant to the question we are asking here, which is not what vegetables want, but what rational, conscious beings will always want when given a choice, sufficient information to choose by, and no deceptions or coercions preventing them from coming to a logically sound conclusion.
I agree that a given person probably would be repulsed at the idea of plugging into the machine and becoming maximally-satisfied without the need of novel/complex stimulation, and would choose not to plug in for that reason — it seems though that by your argument this would be a rational/appropriate response only if the person in fact would not be happier (not just happy) in the machine, and that’s specifically what isn’t obvious to me.
It seems to me that any activity a person could perform in reality for X satisfaction could be beaten by the machine simply offering X+1 satisfaction for the same activity, whether or not the subject knows they’re in the machine at the time. By the same token, it isn’t clear to me that people’s real-life experiences, relationships etc. produce satisfaction expressly from their realness (or complexity or novelty) but rather because people’s brains “artificially” produce satisfaction in response to the associated stimuli. i.e., who’s to say that we’re more deceived inside the machine than out of it? It seems common after all for people to use emotional intensity as a guide to what’s true or valuable — like when someone says in a love song, “I’ve never known anything as real as this”. If a person, say, develops a brain tumor one day and loses their romantic feelings for their spouse, have they “woken up” from an illusion or been given a new one?
With respect to reality, this is a given—hence I think ultimately we should go live in these places: see How Not to Live in Zardoz. But that would not be the “experience machine” (which by definition is faking everything). In virtual worlds we will have real relationships with real people and do “real” things (within the context of the sim).
But internally, all metrics will hit diminishing returns and thus have a ceiling (every possible pursuit will have a max satisfaction point beyond which there will be no especial value adding more; and diversified pursuits have a max at time allocation, e.g. you can’t do everything simultaneously; and if we are talking about the absence of the supernatural, there will be a max memory load, so you will only be able to remember so much about your past adventures, although that load point is hell and gone beyond current human lifespan).
As to your last question, you’ve veered off subject into something too incoherent to answer. You seem to have changed subject into the ontology and epistemology of emotion, which is disconnected from the question we are exploring here. Whether (and when and how) emotions accurately or inaccurately assess circumstances is the same issue regardless of whether we are in the real world or a sim.
But in an “experience machine” (which by definition is not a sim but a “fake” sim, e.g. it fakes experiences rather than letting you explore them, it fakes people rather than letting you interact with them, etc.) any rational agent not deceived will be dissatisfied, knowing it is all fake. Unless you trick them into not knowing it is fake, but then they are just a deceived puppet, which no rational agent would choose.
And then you run into the Cartesian Demon problem of how to keep them that way, which is hard to do for a clever person, hence HAL 9000 eventually had to just kill the crew to prevent them from discovering the secret thing, Vanilla Sky failed eventually and its occupant had to call customer service and get out, and Total Recall couldn’t run forever but had to be just a one-off vacation package, otherwise the resulting existential paranoia would have driven the subject mad. And in your scheme, you’d eventually just have to lobotomize the subjects to downgrade their intelligence so they never figure anything out, essentially resorting to Mengele-scale brain damage which no rationally informed agent would sign off on.
Hence. Lie to them or lobotomize them or kill them—those are your only recourses, all just to prevent rational beings from having what they will all actually want, which is to live in a real sim, not a fake one.
(replying to above)
To clarify, I’m imagining a version of the machine that doesn’t just control what one perceives visually, tactilely etc. but also one’s accompanying emotional experience. e.g. it doesn’t just show someone a sunset; it shows them the sunset and stimulates their brain so that they find it profoundly beautiful (as if every conceivable emotional response had a catalog number or recipe that could be called up on demand), even if they know full well it isn’t real and/or are seeing it for the trillionth time.
People may object to plugging into such a machine for a number of reasons — that it would deceive them, that it would destroy them as a person… — but what validity would these objections have if we assume that 1. happiness/satisfaction is these people’s main goal and 2. the machine would in fact make them more satisfied than they’d be otherwise?
This does I think overlap with questions related to ontology/epistemology since it concerns whether and to what degree a person has to be mentally “harmed” in order to be satisfied from interactions with things they know are virtual. If that goes beyond the scope of Nozick’s thought experiment, it still seems relevant to the underlying question of how happiness relates to value.
You aren’t discussing Nozick’s machine, then.
You are just describing simverses (which Nozick could not conceive of when he wrote; the idea had existed in obscure fiction for half a century by then, but wasn’t brought to general public consciousness until, I think, the movie Tron in 1982).
The key element of Nozick’s machine is that you aren’t doing anything. You are being tricked into thinking you are. Like, instead of playing an all-sensory MUD, you are watching someone else play it, and then being fooled into thinking you are.
So maybe this confusion has set you off on the wrong tangent.
The issue is not whether simverses are comparable to real worlds as far as satisfaction accessibility (in fact they are in every way superior on that metric), or whether emotions can be “real” there (of course they can; emotions are just evals of sensory-intellectual assessments of circumstances, which one has whether coming from photons or electrons).
The issue (in the article you are responding to) is whether a rationally informed agent would choose to enter a Full-Deception-MUD (and be fooled into thinking they are living a life they are not, and making decisions they are not, and meeting people they are not) or whether they would prefer to enter a No-Deception-MUD (a simverse in which they get to make choices and meet real people and so on).
If you think of The Matrix, which Nozick could not anticipate when he wrote this up (that movie came out twenty years later; although he was then alive at least), it is neither (people are meeting real people and making real decisions in it, but are in many respects deceived about where they are, what they actually can do, how they are being exploited and abused, and so on). So that is not a Nozick Experience Machine. Nor a simverse anyone would prefer to live in. Although if those were their only choices, they would choose The Matrix over his machine.
If what you mean is “What if we could change people, completely rewire their brain so that they are completely different people with completely different desires, such that they would want to be in Nozick’s machine?” you are creating a different scenario than Nozick was. You are then dealing with a complex counterfactual that questions whether the objective can even be achieved without just ending, lobotomizing, or lying to the person in question. I aver that is impossible (you will have to do one of those three things instead). For the reason I think that, you need to consult (and then should probably be commenting on) my other article, The Objective Value Cascade, not this one.
I get the impression that Nozick would consider a full-deception MUD as better at producing satisfaction than a no-deception one, since in a no-deception sim not only would one be vulnerable to the sorts of social perils one finds in reality — not being invited to a party, being insulted or gossiped about … — but many real-world vocations would be impossible or radically different in a sim (e.g. a doctor in a world with no diseases; a scientist, researcher, or engineer in a fully-plotted universe; an athlete in a world of equally-abled people). Nozick’s elaborate deceptions serve the same function as the “ready-to-order” emotions I described earlier, and to me seem qualitatively very similar: they distract from or conceal the lack of physical danger or limitations in a (sufficiently advanced) virtual world. In a virtual world, after all, you could always pull the emergency brake and get out of any difficult or unpleasant situation, social or otherwise; to be in such a world at all is to participate in some level of self-deception.
Deceptions like these may not work indefinitely, but per Nozick this wouldn’t be necessary: a person would program for instance several years’ worth of illusory experiences, then after experiencing them exit the machine to program the next few years’ and so on.
My question would be, if we assume that a comprehensive deception of this kind (consensual, self-imposed, all actions preprogrammed) could work even temporarily to make a person more satisfied, would it be rational for a person to refuse it?
That’s not what Nozick is talking about.
Nozick means you go in and stay there.
In his experiment: it is not logically possible to create the condition (achieving perfect results) without lying to the agent; which is precisely why no agent would choose that condition—for both the standard Epicurean reason (succeeding at keeping a smart person fooled is terminally unlikely, to the point that only a fool would arrogantly think they could reliably succeed) and for the deeper existential reason (that this is not what any rationally informed person wants, which is proved by the fact that you would have to deceive them—if they didn’t care, you wouldn’t have to).
So either all you are talking about is just another form of television; or a Magic Pill.
In the first case, you are simply asking if people would like to watch TV on occasion (only enhanced). People watch shows, then do other things with their life, knowing they are just watching shows. Which fails to meet Nozick’s condition. Since you can leave, it’s just entertainment; it has no impact on your satisfaction outlook, because you can pursue other things. Obviously people would buy and spend time in that machine (for diverse examples explored in fiction, see Total Recall and Strange Days).
Nozick’s actual experiment is not that. It would be more analogous to A Clockwork Orange or the concluding fate of the protagonist in Being John Malkovich. Like being forced to do literally nothing and meet literally no one, and watch Seinfeld for the rest of your life. No sane person would want that. And rewiring their brain so they become a completely different person who wants that is destroying them as a person, and replacing them with a functional vegetable. Which is a surgery no rational person would consent to.
If you maintain Nozick’s condition, therefore, then you are asking about what is called in philosophy the Magic Pill problem, in which case, see my discussion of that in the unrelated subject of moral knowledge in Goal Theory Update (see § 2b).
Do you think it’s illegitimate in that case for Nozick to stipulate that the machine does in fact increase one’s satisfaction? (“a lifetime of bliss” as he puts it.) If it does, but only on the condition that the subject never learns the truth, that would seem to create a rational incentive to plug in and never learn the truth (one could for instance choose sim experiences that don’t beggar belief while still being greatly enjoyable).
In the magic pill discussion linked, you say it’s preferable not to exist altogether than to commit a reprehensible act and then erase the memory with a pill, but that would seem to appeal to a metric other than personal satisfaction (since someone who doesn’t exist couldn’t have any satisfaction, even as much as a reprehensible person). Without satisfaction as the bottom line, how do we determine that the choice not to take such a pill, or spend the rest of one’s life watching Seinfeld, or undergo the brain-rewiring needed to make one permanently overjoyed in a world of illusions, is irrational or insane?
The “rational incentive” is removed by the knowledge condition. This is always the problem with Magic Pill scenarios.
No one would informedly go into Nozick’s machine, but they would go into a real one (an actual sim).
The analogy is completed by “existential dread” of being a vegetable in a fake world vs. being a murderous psychopath.
In the moral philosophy Magic Pill, the dread is being the worst possible person, which dread one can experience even now (knowing they could have taken the pill), hence we know we would not take it (as being the sort of person who never would is the only guarantee of not being in the pill scenario even now). In the Nozick scenario, the dread is everyone you know being a non-existent fake and nothing you do really being you doing it (there are literal clinical insanities that consist of this unshakable dread, and it results in medication or institutionalization).
The only way to be sure you are not a deceived end-state Craig Schwartz (as opposed to an aware one) is to be sure you are the kind of person who would never choose to be. Otherwise you will always suffer the existential dread that you are that person after all, and in exactly that nightmare scenario.
If the Nozick machine determines every aspect of your virtual experience, though, down to your tiniest actions, it seems as if someone plugging in could specify that their virtual life not include existential dread — they could go through this life never having an experience that would prompt them to consider or dwell on the possibility that they’re living in a sim, or even find the prospect attractive. (Just as a hypothetical murderer might take a pill to remove not only their memory of committing the crime, but any memory of why they’d want to.) If this compromises their rationality, but increases their satisfaction, how do we judge the decision to plug in as against their interest on balance?
You are now lobotomizing the victim. See what I mean?
Yes, if you destroy someone’s entire personality, changing them into a completely different person bereft of basic intellectual capabilities to evaluate their situations by, so that they actually want to be a vegetable watching Seinfeld forever, then you will be able to do that. But no one would let you.
That’s the point.
Of course, the fact that we (now) can experience this existential dread at the thought entails we didn’t let you. Which is the point about all Magic Pill scenarios.
As to satisfaction states, read Cascade again: any rational, informed person faced with the choice of a satisfaction-state pursuit-zone, one being fake (a mere pleasure-causer) and the other real (where more satisfaction states are achievable by definition, because they are the result of real action and not deception)—and thus one requiring destructive lobotomization and the other allowing one’s faculties to function—that person will always choose the real one.
Otherwise there is no difference between doing the lobotomy-Seinfeld thing and just being a mindless vegetable in a perpetual orgasm tube. Rational agents will recognize that if they could choose between that outcome and the other, the other is always objectively better (they will always be more satisfied knowing they didn’t do the lobotomy-Seinfeld thing and are in a real world where they are meeting real people and actually doing things).
This can be tested by the required condition for objective analysis: wake up the lobotomy-Seinfeld person, give them back the faculty to understand the difference between the two states and choose, and then ask them if they’d rather be the other person (all else being equal), and they will always say yes; while the person in the real sim already has that faculty and thus can already report to you that they like where they are and would not prefer to be the lobotomy-Seinfeld person.
In essence this is how all imperative propositions work: for an imperative (what one ought to do) to be actually true, it cannot be based on any false information or deception (because falsity in, falsity out); thus to know what one really ought to do (and not what one has been tricked into thinking they ought to do), you need to know the true facts of the situation. Thus anytime you have to deny that to an agent, to trick them into wanting a particular outcome (like destroying their faculties as you suggest), you have already thereby admitted that it is not an objectively desirable state.
And remember, in the Cascade experiment there are no predetermined desires (so no “existential dread” mode has even been installed yet; that is actually what the agent is deciding between: an outcome where they have that, and it works correctly, i.e. it reacts to the actual situation it is meant to signal, and one where it is suppressed, i.e. it will not activate even when in the situation it is meant to signal). The agent is deciding which set of conditions (desires, values, emotions) it would prefer to have (if it had a choice), by querying its hypothetical different selves in the future (the one who chose one way, and the other who chose the other).
How would we determine though that a preference for non-illusory vs. illusory experiences is rational in the first place, given a sim that could reproduce any given set of non-sim stimuli? For instance, we might imagine the case of someone who has a paralyzing fear of spiders, and who would be unable to sleep at night knowing there’s a spider under their bed. Conceivably this person might choose to take a pill that makes them forget (or simply not care about) the spider’s existence; they lose knowledge but gain satisfaction. Another person might have a sim-life so fulfilling that were they to discover its reality they would fall into permanent despair — how would we judge in these cases that the people are better off undeceived vs. deceived, taking satisfaction as the bottom line?
In practical reality, a strong aversion to illusions makes sense given that illusions present possible threats to our safety (and so satisfaction); in a machine-facilitated world free of these kinds of threats, it isn’t clear that this aversion would be rational or worthwhile overall (which could also be said of many near-universal psychological features, such as developing a dislike for something/someone simply because they aren’t novel, being uncomfortable in solitude, or (like you note in Cascade) enjoying risk-taking for its own sake). We may always know what we want, but simply wanting it doesn’t make it what’s best for us — even when the thing is truth itself.
No rational person would do this.
Take the spider case:
The rational thing to do would be to personally dial down the reaction-setting to spiders, rather than create an ignorance of them.
Because in the former case: you are modifying yourself; you know what you’ve modified (including after the fact); you know it is in line with rational objectivity and what you want; you could reverse it if ever you need; and it does not destroy your rationality or knowledge.
Whereas the latter procedure is literally dangerous (some spiders you really do need to not sleep over); and any rational person who has not been lobotomized and thus their rationality removed would suffer the converse outcome: they would grow anxious that they took the pill and there might yet still be spiders there and now they’ve deleted their ability to find out.
You can only “fix” that outcome by now doing brain surgery not just on their knowledge of spiders, but their entire ability to understand or care about all magic pills. You are thus destroying rationality; not creating a greater satisfaction state for rational agents. And rational agents will not choose that (not rationally; and not knowingly).
That this is a billion times worse when it’s not just spiders but your entire knowledge of every aspect of existence (all the people you think you meet don’t exist; nothing you think you are doing or thinking is actually you doing or thinking it; it’s all just a TV program you are sitting in a tube watching) only further diminishes the Nozick test. All rational agents will prefer a real sim to that one. And the only way to change that is to lobotomize them: to remove their rationality, and thus degrade them into passive animals rather than intelligent agents.
By contrast, the dialed-response option is what we actually do (all treatments for mental disorders, pharmaceutical or therapeutic, involve reaction-reduction, not ignorance-creation), and is how rational agents would adjust themselves in any real sim to avert counterproductive outcomes (like disproportionate phobias or discomfort at the existence of other people you could visit). They would not destroy all their knowledge and even rationality, becoming literally subhuman and rendering their very existence individually pointless.
If the subject became less rational/informed (re: the spider and pill) but gained satisfaction, would you say that’s still undesirable on balance? If this requires that their life unfold in a very particular way (e.g. so as to keep their mind off pills), and wouldn’t provide the more general benefits of directly adjusting their fear response, that would seem to illustrate an advantage of a Nozick machine: in the machine all of your experiences are fully determined and you’re protected from all external threats. In the unpredictable and dangerous non-sim world reason and knowledge obviously are essential as a heuristic, but what utility would you say they carry over into a determined and danger-free sim world?
Yes.
You are basically asking “is it better to be an immortal pig than a mortal human.” Ask any rational human being who actually understands what that would entail (the complete destruction of their selves and their reason and even their capacity to cognitively know things about the world) and they will answer “No.”
Likewise if you dial it up to “deceived prisoner whose reason and awareness that they are not meeting anyone or doing anything is kept from them but they are tricked into misbelieving otherwise for all eternity” and the answer (on the same conditions of understanding) would be the same.
What anyone would want instead is not Nozick’s machine, but a real sim.
Would someone agree to live forever in a Hayao Miyazaki cartoon, if they were not lied to about it and were actually free to act in that world on their own initiative and reason, and the other people they met there were real people, just like them (whether original AIs or transitioned humans, either way a genuinely conscious independently-acting person)?
Yes. The only objection anyone would raise is to the aesthetics (maybe someone would prefer a more realistic sim, like space opera or fantasy adventure), but if they couldn’t skin their world the way they wanted, but got to live forever nonetheless in a non-Zardozed sim, all rational agents would agree (at the very least, to end up there in their retirement or upon their natural death).
That the real world is more dangerous and less predictable is of no relevance. Just as no rational person actually in that world chooses to watch Seinfeld 24/7, no rational person would choose to do that forever either.
If maximizing satisfaction is a person’s end goal, though, on what grounds would they choose to keep their identity, reason, or capacity for knowledge if doing so meant sacrificing overall satisfaction? In order to make good on its promise, after all, a Nozick machine would need to be able to provide any life experience imaginable, not just that of a pig (although people might choose differently if it were discovered that pigs experience nothing but maximal joy their entire lives).
In the spider example for instance, it makes sense for a person to give up their old spider-fearing identity in exchange for a new one that doesn’t react excessively when there’s no danger — in a Nozick-machine world though there would be no danger of any kind (even mental discomfort wouldn’t be “part of the program”), and so no obvious reason to retain things like a strong attachment to reality for its own sake, or interactions with real vs. simulated people.
There’s no obvious reason in fact why someone in the machine would need even the simulacra of other people — they could for instance choose to replicate the experience of someone who meditates alone in a cave until they achieve what’s been described as ultimate spiritual enlightenment. If this involves a sacrifice of their old, extra-machine identity, how do we determine that this sacrifice is harmful and misguided vs. a healing one akin to removing/reducing an unwanted phobia of spiders? We can’t necessarily rely on the person’s emotional reaction when presented with the choice, since for all we know they just don’t “know better” yet.
It isn’t true there is no danger in a Nozick machine. You are literally a chained prisoner in it, forever alone, and unable to ever make a decision.
No rational person would describe “if I go into that room I will be tied to a chair alone and forced to watch Seinfeld for all eternity” as “no danger in that room.”
That’s why you keep trying to dodge this with a Magic Pill. But that doesn’t evade the problem.
No rational person would describe “if I take this pill I won’t know that I went into that room and was tied to a chair alone and forced to watch Seinfeld for all eternity” as “no danger in that room” either. That’s even worse: because now their rationality is being destroyed (they are being prevented from ever discovering their nightmarish predicament), and they are being tricked into being a prisoner (rather than living a life they can be confident is theirs, meeting people they can be confident actually exist).
Indeed, there is a movie (almost) exactly about this: THX 1138.
So when that tactic fails, you resort to brain surgery—a literal lobotomy:
Now you want to completely destroy most of a person’s brain so that all they want anymore is to be alone in a room doing nothing. But no rational person would consent to being almost entirely destroyed and thereby turned into a drooling vegetable.
It is objectively obvious that a person who can choose between that and a richer more complex life of association, choices, and multiple pursuits and pleasures will choose the latter, because there are more satisfaction states achievable there, and they aren’t being lobotomized or deceived in achieving them. The former, by contrast, is objectively a nightmare; that’s why it can only be achieved by deception and destroying its victim’s ability to reason.
Anything that requires such lengths to trick someone into clearly is not what any informed rational person would choose. “But they would choose it if we destroyed their knowledge and rationality” only proves the point.
If this state doesn’t cause suffering, though, or reduced overall satisfaction, how would we determine that it’s negative or dangerous? If the advantage of living outside the machine is being confident that one’s surroundings are real, what advantage does that give one over someone in the machine who has the exact same confidence and a life full of experiences at least as rich and varied, only artificially induced? (In contrast to someone with blunted or eliminated emotional capacity like the subjects of THX 1138.)
It would seem though that a machine advocate could argue in the reverse: someone with such a strong preference for a non-sim life that they’d be willing to sacrifice satisfaction for its sake must not be rational, and so isn’t rejecting the machine on rational grounds. i.e. if a reasoning process leads you to a less-satisfied state, it wasn’t reasonable to begin with.
First, suffering is not the definition of harm (otherwise killing people humanely would not constitute harm).
Second, it would produce reduced overall satisfaction to deprive someone of their capacity to reason and understand their condition, and to deprive them of genuine interactions with people and genuine achievement and decision-making (imprisoning someone is a harm, even if you trick them into not noticing you’ve deprived them of everything they actually want in order to enjoy living).
Third, it would also produce reduced overall satisfaction to lobotomize someone so that they are incapable of realizing they might have been magic-pilled. Whereas leaving them that capacity allows them the existential dread that they might have—which can only be overcome by their personal knowledge that they will never have taken the pill.
So you are really asking whether it is okay for you to lobotomize and imprison all human beings as long as you can sufficiently trick them into never knowing this has happened and thus their entire reality is fake and they are actually making no decisions whatever and are utterly alone and never interacting with anyone else. And the real question is: why do you want to do that to people?
As for the people themselves, no rational and informed one of them would let you because they recognize its intrinsic horror (regardless of whether they, once lobotomized and imprisoned, will remember that), so why do you care whether you “can” do it to them?
It would seem though that, assuming our only access to reality is through our brains, we can only react emotionally to our perceptions of reality rather than reality itself. E.g. if someone with a phobia of spiders believes a spider is crawling up their back, they feel the same fear whether or not the spider is actually there.
For that reason, it isn’t clear to me why the mental experiences of a person who is convinced they aren’t in a simulation (rationally or otherwise) couldn’t be copy-pasted wholesale into someone else’s brain such that that second person effectively lives person 1’s life (as private experience goes) and has all the same satisfaction states. They never feel existential dread because person 1 never did; they never learn they’re being deceived because person 1 never did (possibly because person 1 never was), and so on. If person 1’s life then is even slightly more satisfying on balance than person 2’s would be, how could person 2 justify not “hopping lives” in this way if their ultimate goal is maximizing satisfaction?
This is why objective analysis matters, which for some inexplicable reason you keep skipping over and ignoring.
If you want people who lack the ability to objectively analyze their condition to be killed off, so that “people” in our understanding of the term no longer exist, then I’m back to wondering why you want that.
As for rational, informed persons, none of them want that. So the fact that you do is moot to this entire conversation. Your approach requires lobotomization and deception. The very fact that it does discredits it as anything any rational, informed person wants. Otherwise, you would not have to resort to such tactics to “trick” and “force” them into the scenario you imagine. And I have explained this over and over again.
I am done talking in circles.
The reason objective analysis leads to my conclusion and not yours has already been explained. I wrote an entire article on it, and have instructed you to read it several times now.
Go to it, man: The Objective Value Cascade.
Everything else has been answered here. Multiple times. If you continue to ignore me, I will cease interacting with you.
To me it isn’t obvious why objective analysis produces the conclusion that real experiences/people are superior to illusory ones, assuming that they provide indistinguishable experiences.
“Your approach requires lobotomization and deception. The very fact that it does discredits it as anything any rational, informed person wants.”
That is specifically what I have trouble understanding: how can a person claim that they wouldn’t experience more satisfaction in a Nozick machine, if the Nozick machine hypothetically can reproduce any imaginable satisfaction state (including that of being convinced, rationally or otherwise, that one is not in a Nozick machine)? The person may argue that it isn’t just the experience of things/people that they want but the experience of real things/people, but how is that claim coherent if the two types of experience are identical?
If, on the other hand, the claim isn’t coherent, it isn’t clear to me why being deceived into satisfaction by the machine inherently discredits it as a rational choice, any more than, for instance, taking an antidepressant or ADHD medication to modify the way one perceives the world. The aversion to simulated experiences may be universal among rational people, and useful, but it doesn’t seem to me that those facts alone establish it as rational.
Then you are irrational.
Rational people see things exactly the other way around: a real life is always better than a fake one. A fake life (being imprisoned in a room watching a TV show of non-existent people and things while being deceived otherwise) is literally horror to every rational human being. None would choose it.
That you would choose it only indicates that you have lost any comprehension of what the difference even is and why it matters. You are therefore outside of rational thought.
I cannot help you.
And I can only hope you never do this horrible thing to anyone.