A few years ago, Sam Harris ran a contest that awarded $2,000 to the best essay critiquing his “moral landscape” theory of moral facts—and could have awarded the winner $20,000 had it convinced him. It didn’t. I agree it shouldn’t have. But he should have learned something from that critique, and he didn’t. Here I’ll explain what I mean by that.

Nevertheless, someone won the two grand. Many of you who watched and discussed the contest announcement might not have kept up with what resulted, and many might not have even known this contest happened! I think this is the kind of contest that can be extremely useful to progress in philosophy—if I were a multi-millionaire I’d likely set up a whole institute devoted to a more rigorous application of the same contest procedure, broadly across the whole spectrum of key philosophical debates. Just on a better model than Harris ended up deploying.

Here I’ll summarize the debate and what happened, and examine the contest winner’s case and Harris’s response to it. But first, a little background to get you oriented…

The Backstory

I think Harris is correct in his thesis—moral facts are empirically discoverable facts and thus a proper object of scientific research (it’s just that no one has ever done that research yet, in the sense required, so right now this science looks more like psychology did in the 19th century); and moral facts may indeed be describable as peaks on a moral landscape (I’ll explain that in a moment). The latter proposal is actually the less controversial of the two (albeit still “shocking”), and people usually ignore it to attack the first proposal instead. For how dare he say morality is a scientific question and scientists can tell us what’s morally right and morally wrong! They can’t, BTW. Any more than they could have told you how your brain works in 1830. Because the science actually hasn’t been done yet. But if it were, then yes: scientists may one day be able to tell you what’s right and wrong, and they will have as much factual warrant to say so as they now have to say the earth is round and billions of years old.

The key piece missing right now is the normative values side of the equation. Most people who are at all informed know that science certainly does answer the question, “What are the actual consequences of doing A rather than not-A?” And that science is always the best means to answer that question (even if it hasn’t been tapped for that purpose yet in a given case). Where they founder is on the notion that science can answer the question, “What consequences should we value?” In other words, what consequences should we be preferring over others, such that the moral thing to do is to prefer those consequences? Harris (like other defenders of the same thesis, such as Michael Shermer) has traditionally done a really poor job of answering this criticism. I suspect that’s because they hold philosophy in contempt, and that serves as a barrier to their learning how to do it well, and then engaging informedly with actual philosophers on this issue (see my discussion of the Shermer-Pigliucci debate as an example of what I mean; on the problem of this contempt for philosophy in general, see Is Philosophy Stupid?).

But that failure to articulate the correct response well is what the contest winner’s essay reveals once again. So it’s clearly the Achilles’ heel of Harris’s program to convince people of his thesis.

The second proposal, the “landscape” theory, is actually the more interesting, and where I think Harris contributed a new and valuable feature. (I had already defended the first proposal long before he did—it’s the central feature of my book Sense and Goodness without God, published in 2005, and the thesis of my peer-reviewed chapter on the subject in The End of Christianity, published in 2011, soon to be back in stock at Amazon). His landscape notion is that value systems are interacting systems. As such, there may be multiple value systems that are equally good but mutually incompatible, owing to the coherence and effectiveness of their internal interactions: individual pieces of one value system will only be good when placed in the correct system; move them over to another system, and their interaction will cause problems. And science might well find that there are several “peak moral systems” on a “landscape” of moral systems of varying quality, and any one of those peak systems will do, as long as you stick with one whole coherent system and not try to mix and match.

A mundane example of the same principle is traffic law: there is no fact of the matter whether driving on the right (as in the U.S.) or the left (as in the U.K.) is better; but each system only functions when it is internally consistent. So everyone does need to drive on the right in the U.S., and everyone does need to drive on the left in the U.K., for the system to work and maximize traffic safety and efficiency. And only systems that have one or the other are maximally safe and efficient. So there is a fact of the matter (indeed, a scientific, empirical, objective fact of the matter) that “you ought to pick a left-driving or a right-driving rule and stick with it within the same traffic system, if you want to maximize traffic safety and efficiency.” So here we have two peaks in a landscape of traffic systems, both equally fine, but you do have to pick one. Incidentally, here also we have an ought that reduces to an is.
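To make that picture concrete, here is a minimal toy model of my own (not anything from Harris or Born; the scoring rule is purely illustrative): score a traffic convention by how often two randomly meeting drivers are following the same rule. The score peaks when everyone drives on the right or everyone drives on the left, and bottoms out with a 50/50 mix: two equally good peaks, with everything in between worse.

    # Toy model of the "two peaks" idea, using traffic conventions.
    # p = fraction of drivers who drive on the right (the rest drive on the left).
    # Two randomly meeting drivers avoid conflict only if they follow the same rule,
    # so the expected "safety" score is p^2 + (1 - p)^2: maximal at p = 0 or p = 1
    # (two equal peaks), minimal at p = 0.5 (a maximally mixed, incoherent system).

    def safety(p: float) -> float:
        """Chance that two randomly chosen drivers are using the same side of the road."""
        return p ** 2 + (1 - p) ** 2

    for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
        print(f"fraction driving on the right = {p:.2f}  ->  safety score = {safety(p):.2f}")

    # Prints scores 1.00, 0.62, 0.50, 0.62, 1.00: two equally high peaks, one trough.

Either convention (p = 0 or p = 1) is an equally good peak; only the mixed states are objectively worse, which is the whole point of the landscape metaphor.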

For more background on this, and where my moral philosophy fits in or contributes, see my original discussion of the Harris contest in What Exactly Is Objective Moral Truth? and my follow-up before the contest found a winner, in The Moral Truth Debate: Babinski & Shook. In the latter I summed up the situation again:

All moral arguments reduce to appeals to either (or both) of two claims to fact: the actual consequences of the available choices (not what we mistakenly think they will be), and which of those consequences the agent would most actually prefer in the long run (not what they mistakenly think they would). Both are objective, empirical facts.

[And as such] … all talk about moral truth simply is talk about what people really want and whether their actions really will produce it.

… [And that means] …

Moral facts follow from actual values—which does not mean the values you think you have or just happen to have, but the values you would have if you derived your values non-fallaciously from true facts (and not false beliefs) about yourself and the world. Hence, your actual values (what you really would value if you were right about everything).

In the most reductive sense, all moral propositions, of the form “you ought to do A,” are predictive hypotheses about how your brain would function in an ideal condition. If your brain was put in a state wherein it made no logical error (no step of reasoning in its computing of what to believe, value, or do was fallacious) and had all requisite information (no relevantly false beliefs, and all relevant true beliefs), then it would in fact do A. And in telling someone they ought to do A, we are really saying they are acting illogically or ignorantly if they don’t; that even they themselves would recommend they do A, if they were more informed and logical.

Some moral theories frame this in the way of an “ideal agent.” But always the “ideal agent” is just an ideal version of yourself. Someone might yet claim we should not be rational and informed (and “therefore” the ideal agent we should emulate is not a fully rationally informed one), but it’s easy to show at the meta-level that there is no relevant sense in which such a statement can be true (see TEC, pp. 426-27, n. 36). It shouldn’t be hard to see how this actually makes all moral statements scientific statements. Even at the level of choosing not just actions, but values. An “ideal you” would choose a set of values that might differ from the values you have now, and that is a statement of fact about how a machine (your brain) will operate in a given factual condition (being rational and possessed of true beliefs). That’s an empirical statement. And therefore open to scientific inquiry.

The difference between the two of you (the current you and the ideal you), and the values you each choose to prioritize, would then only be that the non-ideal you chose different values because you are more ignorant or illogical (than the version of you that is neither). And once you realize that, there remains no coherent reason not to change your values to match theirs (meaning, the values that would be adopted by the most rational and informed version of you). Only, again, an irrational or ignorant reason could prevent you from thus revising your values. So moral facts are just statements about what a non-irrational, non-ignorant version of you will do. Harris has never really explored or articulated this well at all. He should.

Now to the winning essay and Harris’s answer to it…

The Setup

You can get more details, and read the winning essay, at The Moral Landscape Challenge: The Winning Essay. Philosopher Russell Blackford (the contest’s judge) concluded that the most common and important objection raised in the 400 or so entries to the contest was that:

[T]he primary value [in Harris’s proposed system], that of “the well-being of conscious creatures,” is not a scientific finding. Nor is it a value that science inevitably presupposes (as it arguably must presuppose certain standards of evidence and logic). Instead, this value must come from elsewhere and can be defended only through conceptual analysis, other forms of philosophical reasoning, or appeals to our intuitions.

And that is what the contest winner argued, and by Blackford’s judgment, argued better than any other entrant. That entry was produced by Ryan Born, who has degrees in cognitive science and philosophy and teaches the subject at Georgia State University.

His only error is in thinking that “philosophical reasoning [and] appeals to our intuitions” are categorically different from empirical science, when in fact they are just shittier versions of empirical science (our intuitions are only attempting to guess at facts through subconscious computations from evidence, and philosophy is always just science with less data: see my explication of that point in Is Philosophy Stupid?). The more you improve the reliability of those methods (intuition or philosophy), the more you end up doing science. Maximize their reliability, and what you have is in fact science.

But then, how you get to Harris’s conclusion (that “the well-being of conscious creatures” is the outcome measure distinguishing true moral systems from false) is not obvious. And it’s made worse by the fact that that’s too confusing and imprecise a statement to be of any scientific use. What kind of well-being? With respect to what? Which conscious creatures? How conscious? Etc. It’s also not reductive enough. The real outcome measure that distinguishes true moral systems is the one that determines whether anyone will, when fully rational and informed, obey that moral system. If a fully rational and correctly informed agent would not obey that moral system, then there is no relevant sense in which that moral system is “true.”

Because of this, you should be looking instead for “satisfaction-state” measures—which option (e.g. choosing which value system, which in turn produces which behavior) maximizes the agent’s satisfaction with themselves and with life, in both duration and degree. That inevitably means in terms of risk reduction: since no actions have guaranteed outcomes, the question is always which options most decrease the probability of dissatisfying outcomes, a fact philosophers all too often overlook. Then you may find that “improving or maintaining the well-being of conscious creatures” does that (that an ideal agent pursuing satisfaction maximization will agree “improving or maintaining the well-being of [all] conscious creatures” is a lot or all of what really, in actual fact, does that: maximizes the agent’s own satisfaction as well).

But you may find it’s not quite that, but rather something a bit different that merely overlaps with it. Since the only way to get a moral statement to be true is to get a moral statement that an ideal agent will obey, you can’t start right out of the gate by assuming you know what that will be. Harris just “assumes” it will be “the well-being of conscious creatures,” but actually science hasn’t empirically determined that yet. Science may find that a rationally informed agent would pursue something else—a goal that may include “the well-being of conscious creatures” in some ways, but won’t be literally identical with it. Not realizing this is where Harris goes wrong. It’s not that he’s wrong in his core thesis (that science can determine what morals an ideal agent would obey). It’s that he keeps skipping steps and assuming science has already answered certain questions, when it hasn’t—it hasn’t even tried yet.

We need Harris and other advocates of this notion to start articulating an actual scientific research program. We need to know what value system an ideal agent would choose. To find out, we can’t really create an ideal agent and see what it computes. But lots of things in science are understood without being viewed directly (we can’t see atoms, black holes, other people’s thoughts, etc.). The way to go about it is to start removing the things that de-idealize an agent and see what happens. What happens when you get an agent to reason out a system of values without a fallacy (e.g. with fewer and fewer fallacies, by detecting and purging them) and with only true beliefs, and with all relevant and accessible true beliefs (e.g. with fewer and fewer false beliefs, undefended assumptions, gaps in available knowledge, etc.)? You might not get perfect knowledge, but you will start to learn things about which values are falsely clung to (values that can only be justified by fallacious reasoning, false beliefs, or ignorance), and thus start trimming them down to the values that would most probably survive any further purging of bad data and logic.
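As a purely illustrative sketch of that kind of procedure (a toy model of my own, not anyone’s actual research design; all the values and beliefs listed are hypothetical), imagine tracking which candidate values still have any surviving support after each round of purging false beliefs and fallacious inferences:

    # Toy sketch of the "de-idealizing removal" procedure described above: each
    # candidate value is held for certain reasons (beliefs); each round we purge
    # reasons identified as false or fallaciously derived, then drop any value
    # left with no surviving reason. Purely illustrative; all entries are hypothetical.

    candidate_values = {
        "honesty": ["trust sustains cooperation", "liars are always struck by lightning"],
        "compassion": ["helping others improves one's own life satisfaction"],
        "blind loyalty": ["my group is infallible"],
    }

    # Beliefs found (in this toy example) to be false or fallaciously derived.
    purged_beliefs = {"liars are always struck by lightning", "my group is infallible"}

    surviving_values = {
        value: [reason for reason in reasons if reason not in purged_beliefs]
        for value, reasons in candidate_values.items()
    }

    for value, reasons in surviving_values.items():
        status = "survives" if reasons else "falls away (it had only bad support)"
        print(f"{value}: {status}")

Iterate that purge as knowledge and logic improve, and whatever values keep surviving are the ones most probably held by the idealized agent. That is the empirical target.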

I predict the result will be something closer to an interactive hierarchy of reasonableness, compassion, and honesty. The effect of adhering to those values will be, in most cases, an improving or maintaining of “the well-being of conscious creatures,” but it won’t be identical with it, and indeed I suspect it will turn out that an ideal agent will sometimes correctly act against that outcome. But that, and where the boundaries are, is an empirical matter. We can argue over it, case by case, in a proto-scientific way with the data we have so far. But we would still need to turn the full engines of science on it to have a scientific resolution of any such argument. And so far, no one is doing that. Or even trying to work out how we would. Not even Harris.

The Critique Worth Two Thousand Dollars

With all that understood, you will have a better perspective on the context of the main points in Born’s critique (at The Moral Landscape Challenge: The Winning Essay). I believe the most relevant points are as follows:

  • Born: Harris’s “proposed science of morality…cannot derive moral judgments solely from scientific descriptions of the world.”

This statement is correct as applied to Harris. Harris has made no argument capable of meeting this objection. And Born does a good job of showing that. But we can meet the objection. Born is incorrect to conclude that because Harris hasn’t done this, it can’t be done. This is a common error in philosophy: to insist something can’t be done, simply because it hasn’t been yet. This error is most commonly seen in Christian apologetics, but even fully qualified scientists and professors of philosophy make this mistake from time to time. And an example of what I mean is that I make a case for what Born argues Harris didn’t (in my book and subsequent peer-reviewed chapter). Born hasn’t reviewed my case.

The gist of my case is what I just outlined above: all moral judgments (that are capable of being true—in the sense of, correctly describing what an ideal agent would do; because we have no reason to prefer doing what an ideal agent wouldn’t do) are the combination of what an agent desires and what will actually happen. And both are scientific facts about the world. One, a fact about the psychology and thus neurology of the agent; the other, reductively, a fact of physics—e.g. it also involves facts about social systems, etc., but those all just reduce to the physical interaction of particles, including a plethora of computers, i.e. human brains. So the statement that a “science of morality…cannot derive moral judgments solely from scientific descriptions of the world” is false.

  • Born: “a science of morality, insofar as it admits of conception, does not have to presuppose that well-being is the highest good and ought to be maximized. Serious competing theories of value and morality exist. If a science of morality elucidates moral reality, as you suggest, then presumably it must work out, not simply presuppose, the correct theory of moral reality, just as the science of physics must work out the correct theory of physical reality.”

This is a spot-on criticism of Harris. It’s exactly what I explained above. Harris can’t just presuppose what the greatest value is, from which all moral facts then derive, any more than Kant could (see my discussion of Kant’s attempt to propose a “greatest good” that he claimed motivated adherence to his morals, in TEC, pp. 340-41) or Aristotle (with his notion of eudaimonia, which differed in various ways from Harris’s) or Mill (with his ambiguous and ever-problematic “greatest good for the greatest number”). A moral science must somehow be able to empirically verify which of these (or which other) fundamental good is actually true. Which means Harris must work out what it even means for one of them to “be true” (such that the others are then false).

There are many ways to make moral propositions true. You could say that moral statements just articulate what the stating party wants the agent to do (“I don’t want there to be thieves; therefore you ought not steal”), and as such, all such statements are true when they do indeed articulate what the stating party wants. Thus, on this proposal, if it’s a true fact in brain science that I really don’t want there to be thieves, then it is also a true fact of science that “you ought not steal.” But then all that the sentence “you ought not steal” means is “I don’t want you to steal.” Which may be of no interest to you whatever. Why should you care what I want? Just because I want you to not steal doesn’t mean you shouldn’t steal. Thus, making moral facts mean this plays games with the English language. We do not in fact mean by “you ought not steal” merely “there are people who don’t want you to steal.” Even if that’s secretly all people ever mean, it’s not what they want you to believe they mean. Otherwise they’d just say “I don’t want you to steal,” or “I don’t like thieves.”

No. People want “you ought not steal” to be understood as meaning something much more than that. They want it to be true that you will not steal, if only you understood why it is true that you ought not steal. Just as outside moral contexts: if I believe “your car’s engine is going to seize up unless you change the oil” I can state that as “you ought to change your car’s oil,” and what I am saying, really, is “if you don’t want your car’s engine to seize up, then you ought to change your car’s oil.” I’m really just appealing to your own values, your own desires—not mine. It’s not about what I want; it’s really attempting to claim something about what you want. “You ought not steal” is meant to mean, “really, trust me, even you don’t want to steal.” Hence if you are a scientist testing when an engine seizes from neglecting oil maintenance, my statement “you ought to change the oil in your car” will be false, and I will even agree it is false, because it is no longer the case that avoiding the engine’s seizing is what you want.

In actual practice (as in real use, in the real world—outside all ivory towers and armchair imaginations), when people call an imperative statement (“you ought to do A”) a moral imperative, they mean an imperative that supersedes all other imperatives. In such a way that, if it were true that you ought to do something other than A, it could not be true that doing A is moral. The moral is always and ever the imperative that in actual fact supersedes all others. That, at least, is what we want people to take moral statements to mean. But that then reduces all moral statements to empirical hypotheses about means and ends with respect to the agent’s values. “You ought to do A” can then only be true if it is true that, when fully rational and informed, you will do A. Otherwise, it’s simply not a recommendation we have any reason to obey, and therefore it isn’t true that we ought to do A. (Because we only have reason to emulate an ideal agent, not an irrational or ignorant one.)

Thus, Harris has overlooked the fact that his proposed science of morality has to start there. Just as Born says. For moral statements to be true, in any sense anyone has enough reason to care about (the only sense that has a real command on our obedience), they have to appeal to what the agent wants above all other things, because only outcomes the agent wants above all others will produce true statements about what they ought to do (otherwise, they ought to do something else: the thing that gets what they want more). And when we debate moral questions, the issue that we really are getting at is that the agent would want something else more, if only they weren’t deciding what to want most in an illogical or uninformed way. So what moral facts really reduce to, is what an ideal agent would do: what a perfectly rational and informed version of you would prefer above all else.

That’s an empirical question. Our only access to it is through logical inference from available evidence. And that means science can improve our access to it, by increasing our access to pertinent evidence, and cleaning up errors in our logic (e.g. fallacies that result from bad experimental design, faulty statistical inferences, etc.). Thus, this is a matter for science. We just have to actually do the science.

  • Born: Harris’s “two moral axioms have already declared that (i) the only thing of intrinsic value is well-being, and (ii) the correct moral theory is consequentialist and, seemingly, some version of utilitarianism—rather than, say, virtue ethics, a non-consequentialist candidate for a naturalized moral framework.”

This is a valid criticism insofar as Harris has not, indeed, answered it. But it is an invalid criticism insofar as it is, actually, quite easily answered: all moral systems are consequentialist (see my Open Letter to Academic Philosophy: All Your Moral Theories Are the Same). Aristotle and Kant were just arguing that different consequences mattered more than someone like Mill (later) said mattered. It’s all just a big debate over which consequences matter, and which matter more. Virtue ethics says, it’s consequences to the agent’s “happiness” (whatever that is supposed to mean). Kant said, it’s consequences to the agent’s “sense of consistency and self-worth.” Mill said, it’s the consequences to everyone affected by an action. And so on.

So we’re back to asking science to find out: What does an ideal agent conclude matters most? The answer may be universal (we all, when acting ideally, would agree the same things matter more). Or it may be parochial (each of us, or various homogeneous clusters of us, when acting ideally, would differ in this conclusion from others). But either way, it will be an empirical fact that that’s the case. And science is the best tool we have for finding that out (see TEC, pp. 351-56).

Harris’s Response

Harris answered Born in Clarifying the Moral Landscape: A Response to Ryan Born. I will close by analyzing Harris’s reply. But already you can see where I agree with Born, and yet why I still think Born is wrong—and it’s only because Harris hasn’t correctly analyzed the question of how to turn a quest for moral truth into an actual scientific research program. If you fix that error in Harris, Born’s critique is mooted.

  • Harris: “The point of my book was not to argue that ‘science’ bureaucratically construed can subsume all talk about morality. My purpose was to show that moral truths exist and that they must fall (in principle, if not in practice) within some (perhaps never to be complete) understanding of the way conscious minds arise in this universe.”

This is a very good thing for him to say. I assumed this. But many who read his book did not. I quote it here to head off anyone who wants to level that criticism at him (you should also read his ensuing examples). He did not argue that scientists will now be the final arbiters of all things moral. Rather, he argued that moral facts are ultimately empirical facts, and thus scientific facts. Whether science is looking for them or not, a scientific method of finding them is always going to be more reliable and more secure. Which means we should use as scientific a method of discovering these facts as our access to the evidence allows. For lack of means, that’s usually going to mean methods that fall short of scientific certainty, as with the rest of philosophy and public policy. Especially now, when we still have no moral science program going. He is fully aware of this. His critics need to be fully aware of it, too.

Once we’ve built the appropriate scientific research program and applied it widely for a century or so (about the length of time it has taken to get psychology as a science up to its present state, and that’s still far from perfected), scientists will indeed be able to say a lot about what is and isn’t morally true. And they will have produced more certainty on those conclusions than anyone else will ever be able to match (whether theologians or philosophers). And even when they can’t reach scientific certainty on some fact of the matter in moral science, due to technological or empirical barriers or financial limitations or whatever it may be, they will still be able to say a lot about what’s morally false. Just as, right now, scientists can’t say for sure how life on earth began; but they can say with scientific certainty it wasn’t ghosts.

  • Harris: “Some intuitions are truly basic to our thinking. I claim that the conviction that the worst possible misery for everyone is bad and should be avoided is among them.”

Here Harris concedes the debate. He can’t answer Born’s criticism. He’s effectively just giving in and saying “I dunno, I just feel it in my gut or something; and I’ll just assume so does everyone else.” Intuition can only be giving you a correct answer if that answer can in principle be empirically verified. If it can’t be, not even in principle, then there is no possible way your intuition can know it’s true either. Harris of all people knows intuition is not magic. We do not have souls or psychic powers. If his brain is giving him that output, why is his brain giving him that output? What inputs is it using to generate that output? And is it correct?

Even if it is true that everyone agrees “the worst possible misery for everyone is bad and should be avoided” (and Harris has never even demonstrated that—even empirically, much less through science), and that’s “why” Harris’s brain generates that intuitional output (his brain, let’s say, having subconsciously done some statistics from his experience with people across history and geography), there still has to be a reason why that’s the case—and more importantly, there has to be a reason why its being the case warrants our agreeing with it. Many a firmly held intuition is actually false, and we are actually not warranted in agreeing with it. Indeed rarely more so than in debates about what really is the greatest moral good!

But that’s not even the problem. It’s a problem. Harris doesn’t deal with it. And that’s a problem. But the real problem is that this is not even what we should be looking for. If you want a scientifically true hypothesis regarding what we morally ought to do, then you have to do the same thing science does to produce and test true hypotheses regarding what, for example, we medically ought to do (for example, to surgically treat a laceration to the heart). The answer to those questions always comes at the conjunction of two facts: what we want (a study of human desires and motivations), and what produces it (a study of causal consequences). If doctors want a heart surgery patient to survive in the best possible post-op state of health, then there are certain procedures that will effect that outcome with a higher probability than others. The latter is a straightforward empirical question in natural science. But so is the former. It’s just a different question (about the desires and motivations of doctors).

Thus, if you want to discover a true proposition about morality, you have to scientifically discover both what the consequences are (what effects does stealing tend to have? what effects does refraining from stealing tend to have? and in each case, we must mean effects both on the world, and on oneself—reciprocally from the world, and internally from what it changes in you) and what the moral agents we are making these statements about want. Moral statements are, indeed, statements about moral agents. When we say “you ought not steal,” we are claiming something is true about you. And that truth, as in all other imperative contexts, is a question of what will happen, in conjunction with what you want. More specifically, it’s about what you would want to happen if you were reasoning without fallacy from all and only true beliefs. Because with such moral statements, we are recommending an action, such that if it is not already obvious to you that that’s what you’d always do anyway, then your not realizing that (and thus actually considering doing something else) must be a result of an error on your part: either of reasoning (some logical fallacy) or of information (false beliefs or missing data).

The result is that, scientifically, we don’t look for something like “the worst possible misery for everyone is bad and should be avoided.” We first look for why someone would prioritize a moral goal at all. In other words, the only way it can be true (of you, or any other moral agent) that “the worst possible misery for everyone is bad and should be avoided,” is if that is a goal that serves your desires (your desires when arrived at by an ideal process—again, meaning, rational and informed desires), and does so more than any other possible goal. Does being compassionate, for example, make your life better, such that any other life you could live instead (all else being equal) will be worse? (Or, “more likely” worse, since this is always a risk theory; no outcomes are guaranteed.) And is there an optimal degree of compassion, a “too compassionate” point, whereby your compassion is so extreme it makes your life worse again?

These are indeed scientific questions. As is the question of whether everyone (when in the requisite ideal state) will answer these questions the same way, or whether some people will have different answers (even when fully rational and fully informed). But always this is the only way to discover true moral propositions: by discovering what moral agents, when in an ideal state, would want most, in terms of the consequences of their actions. And then discovering what actions maximize the odds of procuring those consequences. Those two discoveries together produce all true moral facts (as I explained in the intro sections, and prove in the literature). And this is true regardless of whether any scientific access is available. In the absence of scientific tools, we have to rely on the best empirical tools that are available. But always, these are empirical facts we are looking for. They are discoverable facts about the world, including physical facts about moral agents (about “conscious minds” as Harris puts it).
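As a toy sketch of that two-part structure (my own illustration; the options, outcomes, probabilities, and value weights below are entirely hypothetical), a candidate true imperative is just whichever option scores best once you combine the empirically discovered consequences with the empirically discovered values of the idealized agent:

    # Toy sketch of "moral fact = (what the idealized agent wants) + (what actually happens)".
    # All options, outcomes, probabilities, and value weights are hypothetical placeholders.

    # Empirical input 1: probable consequences of each available option.
    consequences = {
        "steal": {"material gain": 0.9, "lost trust": 0.8, "guilt": 0.7},
        "refrain from stealing": {"material gain": 0.1, "lost trust": 0.05, "guilt": 0.0},
    }

    # Empirical input 2: how much an idealized (rational, informed) agent values each outcome.
    # Negative weights mean the agent, on reflection, wants to avoid that outcome.
    ideal_values = {"material gain": 1.0, "lost trust": -5.0, "guilt": -3.0}

    def expected_satisfaction(option: str) -> float:
        """Probability-weighted value of an option's outcomes for the idealized agent."""
        return sum(prob * ideal_values[outcome]
                   for outcome, prob in consequences[option].items())

    best = max(consequences, key=expected_satisfaction)
    print(f"Candidate true imperative on this model: you ought to {best}.")

Neither input is anywhere near settled science yet; the point is only that both inputs are empirical, so their conjunction is in principle a scientific result.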

Missing this is where Harris has lost the narrative, and why he can never clearly outline any scientific research program to discover moral knowledge.

Here’s an example of what I mean:

  • Harris: “Ryan seems to be holding my claims about moral truth to a standard of self-justification that no branch of science can meet. Physics can’t justify the intellectual tools one needs to do physics. Does that make it unscientific?”

No. That’s not the issue here. Yes, philosophy has to build the analytical foundations of science. And it’s unfair to expect otherwise. But this isn’t an analytical question we are discussing. The issue is: what makes any imperative that Harris’s proposed science discovers “true”? Not tautologically true; empirically true. If, for example, Harris were to scientifically prove “stealing increases misery, and misery is bad,” he still hasn’t proved “therefore you ought not steal.” Because why should we care to avoid what’s bad, especially when it doesn’t affect us? In other words, how does he know someone else’s misery is “bad” in the sense of being bad for us? Lots of misery may be good or even necessary (e.g. the pain of exercise; killing in self-defense). In order for a statement like “you ought not steal” to be true, it has to be true of the moral agent you are saying it is true of. But if that moral agent literally has no reason whatever to care about the increase in misery in the world, in what sense is it “true” that they ought not increase that misery?

Harris never answers this question. Yet it has to be answered if you want to turn morality into a science. You need to know what the hypothesis is: What is it that you are claiming is true about the world? If it’s just “Sam Harris doesn’t like misery and so would you please kindly not cause any,” then he isn’t talking about morality any more, in any sense anyone in the real world means. It can be scientifically true that Harris doesn’t like misery and would like there to be less of it. And science can certainly discover what actions will make more or less of it. But that’s not morality. That places no obligation on anyone else to care. It doesn’t even obligate Sam Harris to care. He might on a whim change his mind tomorrow about misery, and conclude he likes it again. What’s to stop him?

That’s not a rhetorical question (the way Christian apologists use questions like that). It’s an honest question. A necessary question. Because the answer to that question is the very thing that makes morality true (and this is as much so of Christian morality as of secular morality: see TEC, pp. 335-39). Even if a Christian says the answer is “God,” that’s not really an answer. Unless they mean God will literally vaporize you if you change your mind, or will force your mind to change back, thus eliminating the existence of anyone who thinks otherwise. Beyond that, an answer like “God” needs explication. How will God stop him liking misery? Threats of hell perhaps. Something about how God made humans to only be happy if they adopt certain values. Whatever. It has to be something. But always, it will be the same fundamental thing: an appeal to what Sam Harris really most wants.

For example, suppose the answer is “God will burn Sam Harris in hell, and Sam Harris will like that less than changing his mind back about misery.” How do you know even that is true? Maybe Sam Harris will actually prefer hell. “But he could only prefer hell if he is being irrational, or not correctly or fully informed about reality.” Well, hey ho. That is exactly what we are saying, too. That moral truth follows from what you would conclude when you are not being irrational and are correctly or fully informed about reality. And the question of what Sam Harris really most wants, “hell, or aligning his values with an entry ticket to heaven,” remains fully apt even if there is no God. Hell then just becomes the consequences, whatever they are, that will befall Sam Harris or that Sam Harris risks upon himself. These will be external (e.g. social reciprocity) and internal (e.g. self-contentment). But still, it always just comes down to what he wants most for himself. Only that can motivate him in such a way that it would be true to say “Sam Harris ought not steal.” And as for him, so for everyone else.

A science of morality therefore must attend to determining what it is people really want. Science must resume what Aristotle began: the study of eudaimonia, and not as he defined it (that was just an empirical hypothesis; some other may be true; indeed some other is likely to be, as we know a ton more than Aristotle did), but as whatever it turns out to be. Meaning: science must discover the thing people want most out of life, the reason they continue to live at all, hence the one thing that would motivate them to act differently than they do (or exactly as they do, if they are already wholly aligning their behavior with what is morally fit); the one thing, whatever it is, that makes “you ought to do A” true—for you, or anyone else it is true of. And not just what people happen to desire most or say they desire most (already two things often not the same), because they might be deciding what they want most from a logical fallacy, or from misinformation or ignorance. We want to know what they would want most when they aren’t irrational and ignorant.

The study of morality is entirely driven by our desire to know what a non-irrational and non-ignorant version of us would do. So that’s what it should be looking for. It therefore must concern itself with human desires, and ultimate aims; with what it means for us to be satisfied with life and with who we are and what we have become. Because all moral truth requires knowing that. At the very least, it requires having some idea of it. You can’t just skip straight to “misery is bad.” You have to answer why anyone should care if it is. And not just care; but care so much, that they will prefer nothing else to minimizing it, that there won’t be anything else they “ought” to do. That’s the only thing that can make an “ought” statement true of someone.

So Harris comes close to getting it when he says…

  • Harris: “…if the categorical imperative (one of Kant’s foundational contributions to deontology, or rule-based ethics) reliably made everyone miserable, no one would defend it as an ethical principle. Similarly, if virtues such as generosity, wisdom, and honesty caused nothing but pain and chaos, no sane person could consider them good. In my view, deontologists and virtue ethicists smuggle the good consequences of their ethics into the conversation from the start.”

Spot on. Exactly my point in my Open Letter to Academic Philosophy. But notice what this means: the truth of any moral system (including Kant’s, including Aristotle’s) derives from what people think matters the most, is the most important, is so good it trumps anything else we could gain from our actions instead of it. All philosophers for thousands of years have unknowingly been admitting that the truth of moral imperatives is a function of what people really most want out of life.

But we can’t, like Aristotle and Kant did, and like Sam Harris now does, just sit in the armchair and conjure from our intuition the answer to that. Because we can be wrong. We can be wrong because we arrived at our conclusion illogically or uninformedly. We can be wrong because other rationally informed agents reach different conclusions (because, it would then have to be the case, they are physically different from us in some relevant way). We can even be wrong because, though we intuit correctly, our articulation of what we intuit is so semantically amorphous it leaves us no clear idea of what exactly constitutes misery, or when exactly it actually is bad (likewise what constitutes happiness, or when exactly it actually is good; or any other vocabulary you land on). These are things only empirical investigation can answer (What, really, exactly, is it that people want most and will always prefer to anything else, when they are rational and informed?). And science is simply a collection of the best methods of empirical investigation we have.

So, Harris gets at least that people think happiness and misery are a guiding principle in the construction of our various competing moral theories. But why do they think that? What do they mean by that? And are they right? Or are they confused? misinformed? irrational? What exactly is it they should think? In other words, what will they think, once they reason without fallacy from true information? This is what a moral science must explore. In addition to the more obvious study, of the various consequences of the available actions, choices, and values.

Ironically, Harris doesn’t apply his own criticism to himself when he says exactly what I just did, only of someone else…

  • Harris: “For instance, John Rawls said that he cared about fairness and justice independent of their effects on human life. But I don’t find this claim psychologically credible or conceptually coherent. After all, these concerns predate our humanity. Do you think that capuchin monkeys are worried about fairness as an abstract principle, or do you think they just don’t like the way it feels to be treated unfairly?”

Good question, Dr. Harris. But…how do you know you aren’t just as mistaken as you now admit even John Rawls is? How do you use science to determine that you are right and he is wrong…and not the other way around?

Of course, I’ve been explaining exactly how we would use science to do that. But Harris doesn’t even seem aware that we need to explain that. That we need a research program to do that.

  • Harris: “‘You shouldn’t lie’ (prescriptive) is synonymous with ‘Lying needlessly complicates people’s lives, destroys reputations, and undermines trust’ (descriptive).”

This is false. It’s so obviously false, I can’t believe Harris really thought this through. Because there is a key thing missing here. Merely because “lying needlessly complicates people’s lives, destroys reputations, and undermines trust,” it still does not follow that one ought not lie. Thus, they cannot be synonymous. How do you connect the empirical fact that “lying needlessly complicates people’s lives, destroys reputations, and undermines trust” with an actual command on someone’s behavior that they will heed? That they will care one whit about? Much less care so much about that they will place no other outcome higher on their list of desired outcomes? Harris doesn’t realize he needs to fill in that blank in order to get “you shouldn’t lie” to be empirically true—of anyone. You can’t just list the consequences of lying and then conclude that therefore no one has any reason to lie anyway. (Even apart from the fact that there are probably many moral reasons to lie.)

Thus, Harris doesn’t answer Born. Harris confuses himself into thinking he has. But he hasn’t.

We need to answer Born, if we want to make moral science a thing. So far as I know, I’m the only one trying to actually do that. And under peer review no less. (Not that peer review is such a hot ticket in philosophy; but it’s still better than not being peer reviewed.)

  • Harris: “There need be no imperative to be good—just as there’s no imperative to be smart or even sane. A person may be wrong about what’s good for him (and for everyone else), but he’s under no obligation to correct his error—any more than he is required to understand that π is the ratio of the circumference of a circle to its diameter. A person may be mistaken about how to get what he wants out of life, and he may want the wrong things (i.e., things that will reliably make him miserable), just as he may fail to form true/useful beliefs in any other area.”

But how can Harris claim a person “wants the wrong things”? What does that statement even mean? He needs to answer that before he can claim it’s even logically possible to “want the wrong things,” much less that anyone does want the wrong things, still less if he wishes to claim to know that the person wanting the wrong things isn’t himself. Likewise, how can Harris know there is no imperative to be good, or smart, or sane? Maybe in fact it is morally imperative that the insane seek therapy, that poor thinkers practice more at thinking well, that someone who has false beliefs actively seek to discover and fix them? How can Harris know in advance whether these are or are not morally imperative, if he hasn’t even begun to apply his moral science to finding out? (Even if only in a proto-scientific way, like the human race has already been doing in philosophy for thousands of years.)

Harris talks a lot about the need to empirically vet claims about morality. And yet seems keen on making a lot of claims about morality he hasn’t empirically vetted. He needs to attend to that. It makes it look like he doesn’t know what he’s doing. It makes him look like a bad philosopher.

  • Harris: “Ryan, Russell, and many of my other critics think that I must add an extra term of obligation—a person should be committed to maximizing the well-being of all conscious creatures. But I see no need for this.”

Then you can never produce any true proposition about morality. Harris is thus saying we need to use science to prove what is true in morality, while simultaneously insisting he sees no need to prove any of its results are true for anyone he expects to obey them. What’s the point then? The next Saddam Hussein can also use science to produce a thoroughly coherent system of moral imperatives that best serves the goal of creating the ideal totalitarian society. Ayn Rand practically did the equivalent, with yet another completely different goal in mind. How would Harris argue we should obey whatever Harris comes up with, and not what this imaginary tyrant does, or what Ayn Rand did? “My gut feeling” just doesn’t cut it.

In actual fact, “you ought to do A” can only be a true fact about you, if in fact you will do A when fully rational and informed. Otherwise, why would you have any reason to do A? If even a perfectly rational and informed version of you wouldn’t do A, why should you? Why would you even want to? Why would you ever want to do what you know is irrational and uninformed? Harris has no answer. And that’s why he has no science. He has the idea of a moral science. But he has no idea of how to get there. And he is so stubborn, he even rejects the only way he could get there, the only line of inquiry that’s actually capable of getting him what he wants: moral imperatives that we can confidently claim to know are true statements about the people we expect to follow them.

Harris says “the well-being of the whole group is the only global standard by which we can judge specific outcomes to be good,” but he doesn’t realize that’s not true unless each individual, when fully rational and informed, agrees that that outcome is indeed what they want most for themselves. Otherwise, it is not true that they want that outcome, that that outcome is best for them and therefore what they ought to pursue. They ought, instead, to act differently. To get it to be true that “the well-being of the whole group” is what everyone across the globe values (or would, absent irrationality and ignorance), you have to tie “the well-being of the whole group” to what’s good for the individual—and not just tie it in, but show that it is more important to that individual than anything else that individual might prefer instead (again, that is, when they are rational and informed).

Otherwise, you are just making shit up. We can make up false moralities all day long, each with some seemingly glorious goal that sounds cool. But is it true? How do you know? Harris can’t dodge these questions and expect to be making any progress in moral thought.

Conclusion

Ultimately, all real answers about how we ought to behave require solving real problems of conflicting values. It’s not always a zero-sum game (even if sometimes it is). But it’s still not obvious how “you ought to decrease misery” or “you ought to increase flourishing” works out in practice when it is not possible to do any of the one without causing some of the other, which is what will happen in almost all cases—and that’s even if you can define misery and flourishing in any operationally testable or usable way to begin with (and Harris hasn’t). When is reducing misery less important than increasing happiness? Or vice versa? You have to work out which is more important and when. And Harris’s deepity that “reducing misery is good” just doesn’t answer that. It’s not even capable of answering that.

My point is not, like the skeptics, to say these things are unanswerable. My point is that to develop a moral science, you have to answer them. Even if it’s really hard. Like most science is.

Harris cannot dismiss Born’s point that Harris needs to finish the equation. To get any “you ought to do A” statement to be true, you can’t just work out empirically what the consequences are of different choices. And you can’t get it to be true by just asserting you know in your gut what the only true ultimate goal is. Imperative propositions can only be true when the consequences of A are what the agent actually wants most—when that agent is concluding what to want with full true information and without fallacy. Otherwise, “you ought to do A” simply isn’t true for you. Or anyone. You literally won’t have any reason to give a shit. Much less give so much of a shit, that you will sacrifice literally every other possible thing you could pursue in its stead.

That’s a really difficult thing to discover. Harris needs to realize, it’s going to be hard work actually finding that thing. He can’t just conjure it from his gut. It doesn’t just fall out of the ether like magic. People need to know why they should make any sacrifices whatever, for anything. Because all actions entail sacrifice—you always sacrifice some kind of gain, time, energy. Why should they care to sacrifice, and how much, and for what? The only way to answer that question, is by discovering what each individual really wants. And what they really want, will entail the only morality capable of being true.
