Last Friday the 13th I discussed the future of morality with Canadian philosopher Christopher DiCarlo. We advertised the subject with a double question: “Is Society Making Moral Progress and Can We Predict Where It’s Going?” The description was apt:

Drs. DiCarlo and Carrier will discuss whether or not we can objectively know if societies are making moral progress, who defines moral progress, and how we might reconcile the fact that different societies have different standards. Much of the conversation will also focus on the concept of free will and the freedom (or lack thereof) that humans have in making ethical decisions.

The dialogue will be conducted through the use of critical thinking and rational thought in an effort to come to a better understanding regarding the future of ethics.

Video may be available online someday. But here I’m going to discuss and expand on what I there argued. DiCarlo and I are both atheists, secularists, humanists, and naturalists, so we agreed on most everything, except a few key things that are worth analyzing here. Principally:

  1. DiCarlo is a Hard Determinist, meaning he rejects Compatibilism. I found his defense of that stance multiply fallacious, and it leads him to propose societal attitudes and advances I consider disturbing. I am of course a well-known defender of Compatibilism in the tradition of Daniel Dennett, and likewise a well-known defender of the crucial importance of individual autonomy—in reasoning, belief-forming, and decision-making. As I’ve written before on the subject of moral theory, “things go better for everyone when we cultivate respect for personal autonomy and individualism,” which mandates implementing (as we have in fact done) an empirically detectable distinction between the presence, absence, and degree of individual free will.
  2. DiCarlo thinks we need to build an Artificial Superintelligence that will tell us what is right and wrong. In short, he wants us to submit to an AI like a secular Moses, delivering unquestionable commandments from on high (“on high” in this case being an inscrutably complex algorithm). I think this is extraordinarily dangerous and should never be contemplated. I can qualify that somewhat (as I will below), but overall I did not find his ideas about this to be realistic, implementable, or as useful or safe as he imagines. It has also of course never been needed before (we’ve made plenty of moral progress without it), and it is unlikely to be achievable even in principle for at least half a century—if not centuries—rendering it a useless answer to any present question of “What is genuinely the right thing to do?”

AI as Moses = Bad Idea

As I noted, we’ve made tremendous moral progress without AI, and we are nowhere near to developing the kind of AI that could help us with that, so it isn’t really a timely answer to the question of how we can tell what is and isn’t moral progress. We can tell already. How? That’s the question we need to be answering—and would need to answer anyway if we are ever to program a computer to answer it. And no computer can help us with that.

Computers are only as reliable as those programming them. And they only generate outputs based on our chosen inputs. Any human ignorance or folly you think an AI will bypass will actually be programmed into the AI. Because its core algorithms will simply encode the ignorance and folly of its designers. Even an AI that “self-designs,” and thus can “clean up” some of that mess (“a computer whose merest operational parameters I am not worthy to calculate—and yet I will design it for you!” as says Deep Thought) will only—yes, deterministically!—base its selection of “what’s a better design” on the core parameters input by its human engineers. It all goes back to humans. And their ignorance and folly. Which you were trying to get around. Sorry. Doesn’t work.

The only thing AI can usefully do for us—and I mean the kind of AI DiCarlo imagines, which is an incredible technology we are nowhere near to achieving—is “find” the evidence that a conclusion is true and present it to us so we can independently verify it. And even then we will have to enforce crucial caveats—it could have overlooked something; someone could have dinked with its code; it could have been coded with the wrong values and thus not even be looking for what it should; and so on. (The more so as DiCarlo imagines it being programmed by the United States Congress! Fuck me. I wouldn’t trust any computer that cosmic clown-car of fools and narcissists programmed. Why would anyone?)

In other words, this imagined AI will be just one more fallible source of information—even if less fallible than usual (and it might not even be that, given who programs it and decides what it will and won’t be told), we still have to judge for ourselves what to make of its outputs. It can dictate nothing. We can, and ought to, question everything it tells us. Which leaves us to answer the question we were trying to answer from the start: How do we tell whether what it’s telling us is moral progress and not moral error? We have to do all those calculations ourselves anyway. So we still need to know what those calculations are. This can’t come from a computer. A computer could only ever get it from us.

Such a machine will be about as useful to moral theorists as a telescope is to astronomers: it can find stuff we might not see by ourselves, but it can’t tell us what to make of that stuff, nor that it’s being correctly represented to us, nor that that’s all the stuff there is—nor, crucially, can it tell us what we should be looking for. Telescopes hide, err, and distort. And don’t decide what’s important. We do. So we need to know how we can decide that. Even to have an AI to help us, we have to have that figured out, so we can program the AI to go find that thing we decided is important. It can’t tell us what’s important without reference to what we’ve already told it to reckon as important. So it always comes back to us. How do we solve this problem? “A computer can do it” is analytically, tautologically false.

Of course, added to all that is the fact that such an AI is extremely dangerous. It won’t likely or reliably have human sentiments and thus will effectively be a sociopath—do you want moral advice from a sociopath? And it is more likely to be abused than used well. After all, if Congress is programming it, if a human bureaucracy is programming it, if any system can be hacked (and any system can), do you really think its inputs and premises will be complete and objective and unbiased? So we should be extremely distrustful of such a machine, not looking up to it as our Moses. There are ways to mitigate these dangers (programming human emotions into it; making it open source while putting securities and controls on alterations to its code), but none are so reliable as to be wholly trusted—they are, rather, just as fallible as human society already is.

And then, to add on top of all that, the best such a machine could give us is demonstrably correct general rules of conduct—and I say demonstrably, because it must be able to show us its evidence and reasoning so we can (and before trusting it, must) independently verify it’s right—but that doesn’t wholly help individual actors, whose every circumstance is uniquely situated. I gave the example at the event that I use in my peer-reviewed paper on fundamental moral theory in The End of Christianity: we might be able to demonstrate it is relevantly true that “you ought to rescue someone drowning if and only if the conditions are safe enough for your abilities as a swimmer,” but the computer can’t tell every individual person whether “the conditions are safe enough for your abilities as a swimmer” or even what the conditions or your abilities as a swimmer are. Generally only you, actually being there at the time, will be able to answer these questions. Which means you have to exercise your own personal autonomy as a moral reasoner. There is no other way. Which means we need to program you to do this job of independent moral reasoning. The computer can’t shoulder this task.

Humans must be independent moral reasoners. Only each individual has efficient access to their own self and circumstances, and to a system for analyzing that data to come to anything like a competent conclusion, as we do every day of our lives. We thus must attend to programming people. Not computers. We need every individual human to have the best code in place for deciding what is likely moral and what’s not. Even to reliably judge the computer’s conclusions correct, humans need this. They need it all the more for every decision they will actually have to make in their lives.

So we shouldn’t be talking about AI here. That’s a trivial and distant prospect. Maybe in ages hence it will be a minor asset in finding general rules we can independently test, one more tool among many that we tap in reaching conclusions on our own. But we will still always have to reach those conclusions on our own. So we still have to answer the question: How?

We Must Properly Input Human Psychology

Hard Determinists, like DiCarlo and Sam Harris, have a beef with human emotions. They think determinism will convince us to abandon anger, for example—which entails abandoning, by the exact same reasoning, love. And every other emotion whatever. They think “hard determinism” will get emotions out of the way so we can make “perfectly rational decisions.” False. Scientifically false. And analytically, tautologically false. Without emotions, we would make no decisions at all. Emotional outputs are the only thing we make any decisions for. They are therefore inalienable premises in any line of moral or any other kind of decisional reasoning. (See my discussion of the actual logic and science of emotion in Sense and Goodness without God, § III.10.)

I may have mentioned at the event the Miranda story from the movie Serenity (based on the Firefly television series): the government (the same one that DiCarlo would have design his AI Moses, take note) thought it could improve society by chemically altering humans to no longer have emotions, with the result that they all just sat in their chairs at work, did nothing, and starved to death (while a few reacted oppositely and became savage berserkers, of course, which was more convenient for an action movie plot). This is what the world would be like if we “got rid of emotions.” Reason is a tool, for achieving what we want, which is what pleases and does not disturb us. Without emotions, there is nothing to desire, and thus no motive to need or use that tool, and no end for which to use it. We just sit in our chairs and starve to death.

Emotions are not merely fundamental to human experience, and necessary for human success—we cannot program emotions out of us, like anger and fear and loathing and disgust, and still consider ourselves human—but they also evolved for a reason. They serve a function. An adaptive one. You might want to examine what that function is, before throwing out the part. “I don’t like how noisy this timing belt is in my car, so I’m just going to toss it.” Watch your car no longer run.

We need negative emotions, like anger and fear and loathing and disgust, for exactly the same reason we need positive ones, like love and acceptance and attraction and enjoyment. And it’s not possible to argue “We should abandon anger, because determinism” and not also admit “We should abandon love, because determinism.” No version of determinism that leads to the one conclusion will fail to lead to the other. Which is maybe why you should rethink your conception of determinism. It’s fatally flawed. It is, rather, looking a lot more like fatalism. Which, unlike determinism, is false. As false as its mirror image, Just World Theory.

The challenge is not to suppress or argue ourselves out of our emotions, but to get those emotions to align with reality rather than false beliefs about reality. Emotions are simply value-evaluators. As such, they can be correct—and incorrect. Just like any evaluator. For example, fear must be targeted at the genuinely fearsome (like, say, a rampaging bear), not what actually poses no commensurate threat (like, say, a hard-working Mexican running the border to find a job). Excessive fear of things that aren’t really that dangerous is a false emotion because it is activated by a false belief; but fear of sociopaths, for example, is not excessive but aligned with reality and thus is a true emotion we ought to heed—and in fact depend on to survive.

The same applies to moral evaluation: to be motivationally and decisionally useful and rewarding, feeling admiration and respect and trust needs to be triggered by a genuinely admirable, respectable, trustworthy character and not triggered by a deplorable, unrespectable, untrustworthy character. If it misfires, it’s bad for us. But if it doesn’t fire at all, it’s bad for us. Hence the solution is not “getting rid of it.” The solution is programming it. Giving it better software. Making it work more reliably. Putting checks in place that allow us to verify it’s working properly. Teaching people how to reason. Which means, reason by themselves, autonomously.

Hate, like pain, serves to motivate avoiding or thwarting or fighting dangerous people; we need it. Love, like pleasure, serves to motivate drawing ourselves to benevolent people; we mustn’t apply it to dangerous people—they deserve our fear and loathing, as those emotions motivate correctly useful responses to them, preventing us from foolish or self-defeating actions. That love is “deterministically caused” is completely irrelevant to its function or utility. Likewise that hate is “deterministically caused” is completely irrelevant to its function or utility.

DiCarlo imagines a world where no one, not even psychopaths and people of despicable and dangerous character, deserves our loathing or disdain, but only our compassion and sympathy. This is false. They deserve our pity, and compassion insofar as we ought not dehumanize them and treat them barbarously. But they are dangerous. We need to be afraid of them. We need to not like them. It is only loathing and dislike of a bad person that causes us to avoid becoming one ourselves, and motivates others to avoid such as well. Which is why bad people usually invent delusional narratives about themselves being good people, rather than admit their actual character and behavior, so as to avoid any motive to change it. That’s a problem for how to cause bad people to be good, or children to develop into good people and not bad; but that’s simply a design issue. It does not justify treating good and bad people as all exactly the same. They aren’t.

“How” someone got that way is irrelevant. It does not matter to the fact that a rampaging bear is dangerous “how” a rampaging bear came to be one. It can matter only structurally, outside the context of a currently rampaging bear, so as to reduce the number of them. But faced with a rampaging bear, that’s irrelevant. You aren’t a time traveler. You can’t change the past. You need useful rules for dealing with the present. Our emotional reaction must be to the facts. Not to a fantasy about rampaging bears being just the same and deserving of the same reactions as a cuddly puppy, simply because they are “equally caused” to be what they are and do as they do. Wrong.

Excess sympathy can cause us to make bad decisions, just as excess or misdirected anger can, resulting in bears mauling us or others, when we could have gunned them down and saved lives—an action that requires a hostile emotion to motivate. We can feel pity for the poor bear, as it “knows not what it does.” But we also need to feel rage and fear to stop it. Like Captain Kirk, “I don’t want my pain taken away! I need my pain!” The same is so for every emotion.

This is especially true in a moral world. We need moral outrage. It is the only thing that motivates real change. Without it, we sit at our desks, doing nothing. The problem is not the force of moral outrage. The problem is when that outrage is misdirected (or out of proportion). Confusing these things seems a common folly of Hard Determinists. They do not believe emotions are ever “misdirected” or “disproportional” because they are all “equally caused.” Factual reality says otherwise. They are all equally caused. Yet some are correctly directed and attenuated, and others not. That is the only distinction that materially matters. That’s the dial we need to causally adjust. Not the dial for “how much outrage is ever caused,” but the dial for “what outrage is caused by” and “how much outrage for each given set of facts.” The former dial must never be set to zero. While the latter dials must be ever tweaked toward “zero error.”

Is Everyone Insane?

Who decides what the “norms” should be? DiCarlo kept referencing “the ones that are stated” as the ones that ought to govern. But merely “the ones that are stated” is not an answer—for we have competing moralities, some loathsome: like the “stated norms” that women must wear a burqa (Quran 24:31) and not have positions of authority over men (1 Timothy 2:12). So that doesn’t answer the question.

A similar question that came up is who decides whose DNA or brain gets meddled with and in what ways? For DiCarlo kept recommending this: a technology of neuro-reengineering to “fix” immoral people. But as I pointed out, such a technology is extraordinarily dangerous. It will be abused by people in thrall to false and loathsome moralities or false beliefs. Which means particularly governments—since those just are, as Shepherd Book notes, “a body of people; usually, notably, ungoverned.” Countless dystopian science fiction films and novels have explored this very outcome. Who decides which brains get changed and in which ways? Who decides who needs to be “fixed”? So proposing this technology only complicates, it does not answer, the very question we are asking.

DiCarlo suggested maybe it could be voluntary. But that doesn’t help. It doesn’t help with the problem of what we do with the people who do not rationally admit they are “malfunctioning” (as DiCarlo put it), who will actually be most immoral people; it doesn’t help with the problem of how even an individual can reliably decide what they should get fixed (“Shit, I’m gay. Oh no! That’s evil! I better get my DNA altered at the local chemist!”); and it doesn’t help with the meta-question governing both circumstances: How are we deciding what counts as a “malfunction” in the first place? What is moral? And why?

In reality, there is no way to persuade someone they are immoral other than causing them to realize they are immoral. But that very realization will have the causal effect of changing what that person does and thinks; it will already make them moral. They won’t need to “alter their DNA” or “modify their neural circuitry.” So we already have this technology. The most we can do is improve on it (in all the ways we morally educate, both children and adults; the ways we provoke self-reflection; the tools we give people to do that; and so on). And that we are already capable of doing and should be doing.

This idea of genetic and neural reengineering is largely useless. It can’t help us now (as no such technologies exist), it is unlikely to help us in future (as reliable judgments about what to change require, circularly, first reaching a correct judgment about what to modify before modifying how judgments are made), and it can only help us with ancillary functions (like improving our ability to reason, attenuate emotions to correct causes, and so on). It can’t answer the questions of what is a malfunction, how we know something should be considered a malfunction, and so on. Nor can it replace the system we already have for social self-modification: independent human reason. People can change themselves, through reflection and education, far more effectively and efficiently than geneticists or neurologists will ever be able to. They can at most improve the innate tools those people use for that self-reflection and education.

DiCarlo kept using the example of pedophiles, identifying the problem with them as being an immutable desire to have sex with children; which he therefore proposes can be fixed by genetically or neurally “removing” that desire. And pedophiles—in fear of prison, let’s say—will be motivated to voluntarily go in for the fix. But this is already possible. Even apart from chemical castration, an extreme mutilation. Because sex offender therapy is some of the most successful in the world, with lower recidivism rates than for any other crime. We can already reprogram these people. We just need to do it. As in, actually implement and pay for the program. Which uses their already-innate powers of autonomous reasoning. No genetic or neural mutilation required.

The problem with pedophiles is not a desire to have sex with children. Any more than the problem with murder is a desire to kill people or the problem with lying is a desire to avoid the consequences of telling the truth—desires, note, all human beings feel at some time or other, yet most don’t act on—as if we could solve all moral issues by simply removing all the desires that would motivate misconduct. That is impossible; as I just explained, we need those emotions, so we can’t erase them. Their existence isn’t the problem. It’s how they are being directed or overridden.

People can self-govern. If they couldn’t, society would not even exist. So we know they can do it. That’s the technology we need to be improving on and using to this end; it’s already installed, and we already know what best employs it. And those who fail at it, we know from the science of psychology, usually do so because of erroneous beliefs, not desires. Pedophiles almost always have false beliefs that justify and thus motivate their molesting of children; such as that children are adult-minded and can consent and like it (all false), or other delusional or irrational assertions. Remove the false beliefs, and the behavior stops (see links above). Just as with nearly every other human being.

We have ample data from the kink community: nearly everyone in it understands the difference between wanting to do a thing with the other party’s consent and doing it without, and correctly governs their behavior. Millions—literally millions—of doms and sadomasochists don’t go around beating people without consent for their own pleasure. They act benevolently. Despite their desires. Because their desire to be good people exceeds and overrides their desire to please themselves—which self-governance is practically the definition of an adult. Pedophiles would act likewise. But for their false beliefs.

There are the insane, people who cannot control their actions despite desperately wanting to—but these people are rare. They are not normative examples of human beings. Note, for example, the difference between pedophilia as a mental illness and merely having a “pedophilic sexual interest,” as explained in Psychology Today. Not every bad actor is insane; in fact most are not. We therefore cannot solve bad acting by medicalizing it, by calling everyone “insane” and then drugging or cutting them up to “fix” it. This is the nightmare scenario DiCarlo imagines we should aim for; he literally could not comprehend the idea that only a few people are insane. He literally kept insisting everyone is insane—that there is no difference between an average bad actor and a crazy person. Sorry, but there is a difference—a scientifically documented difference. Our answers for society must account for that difference. Not ignore it.

The non-insane must be relied upon and treated as autonomous decision-makers whom we must cause to improve through education and persuasion. The actually insane who cannot self-govern we must lock up and treat only because we have no option left. Just as the sane who nevertheless end up in prison for bad acting should be targeted with education and other science-based techniques of reform.

I do agree that insanity might have cures in genetic and neural reengineering some day. But such treatments must be regarded the same as all medical treatments: professionally administered to legitimately diagnosed persons with informed patient consent. For example, if an insane pedophile, someone who experiences only constant distress at not fulfilling their sexual desires, won’t recognize this as a medical problem requiring treatment (and thus seek treatment), then they are choosing that we treat their behavior criminally rather than medically. We ought to respect their choice.

The result either way protects society, deters crime, and can potentially reform the bad actor—and personal individual autonomy is respected. We thus rely on the individual decisions of autonomous agents, and decide outcomes by what choices they make for themselves, in light of how society must then respond to defend itself. “I don’t want to remove these distressing desires, I want to go to prison instead” is a fair decision we should let people make. Until we can persuade them to decide otherwise—that in fact “prison plus distressing desires” is worse than “freedom minus distressing desires.” Therapy in prison could be deployed to that end, the same as it would for a sane person (as Cognitive Behavior Therapy helps everyone).

This is all stuff we already know. It isn’t revolutionary.

Answering the Question

We broke the topic down into ten questions that build on each other. I’ll close by giving my complete answer to each, which I only briefly touched on at the event.

1. What is morality? This is an analytical question: we simply decide what it is we are looking for. Then we can ask what satisfies that condition, which is then an empirical task.

You can look for things like “What people want other people to do” or “What a culture says a person should do” but these definitions of morality aren’t really what we usually want to know. It doesn’t help to know what culture says; we want to know if what a culture says is right or if one culture is right about this and another wrong. It doesn’t help to know what people want other people to do; we want to know if other people have a good reason to do that or not. When we really think it through, to get at what it is we really want to know, we find it’s one single thing:

What actually ought we do above all else?

Not what culture says we should do or what people wish we would do—because it doesn’t follow we should actually do any of that. The answer always depends on the goal each and every person has, that we ourselves have, if we want to know whether we actually ought to do a thing. And this always comes down to: What do you want to achieve above all else? What kind of person do you really want to be? How can you be most satisfied with the life available to you?

And not merely that, but only when you are reasoning without fallacy from true facts about yourself and the world—because all other beliefs are by definition false. And what we want to know is what’s true. What we actually ought to do. Not what we mistakenly think we ought to do. This I’ve already covered elsewhere. But it comes down to following the procedure first developed by Aristotle over 2300 years ago:

Ask of any decision you aim to make: Why do you want to do that, as opposed to something else? Then ask why you want that. And then why you want that. And so on, until you get to what it is that you want for no other reason than itself. That is ultimately what you really want, and want more than anything else. Because all other desires are merely subordinate to it, instrumental desires that you only hold because you believe pursuing them will obtain the thing you want most, the reason you want anything at all. (Which is why the pertinent cause of bad acting always comes down to false beliefs.)
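To make that regress concrete, here is a minimal, purely illustrative sketch (it is not from the event or from anything I have published): it models desires and the reasons behind them as a simple chain, which we walk until we hit a desire held for no further reason. The specific desires, the function name, and the example data are all hypothetical.

```python
# Purely illustrative sketch: Aristotle's regress of "why do you want that?"
# modeled as walking a chain of instrumental desires until we reach one
# wanted for its own sake. All names and example data here are hypothetical.

def terminal_desire(desire, why):
    """Follow 'why do you want that?' links until a desire has no further reason."""
    seen = set()
    while desire in why and desire not in seen:
        seen.add(desire)      # guard against circular chains of reasons
        desire = why[desire]  # the deeper desire this one is instrumental to
    return desire             # wanted for no other reason than itself

# Hypothetical chain of instrumental desires:
why = {
    "earn more money": "feel secure",
    "feel secure": "be satisfied with my life",
    # "be satisfied with my life" has no entry: it is wanted for itself.
}

print(terminal_desire("earn more money", why))  # -> "be satisfied with my life"
```

The only point of the exercise is that the chain of “why” questions terminates: everything above the terminal desire is instrumental, which is why a false belief anywhere along the chain derails the whole pursuit.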

Empirically we find the answer will always be: to be more satisfied with who you are and the life you are living, than you would be if you acted or decided differently. As well explained, from scientific evidence, by Roger Bergman in “Why Be Moral? A Conceptual Model from Developmental Psychology” (Human Development 2002). Morality is thus a technology of how to best realize our most satisfying lives and our most satisfying selves without depending on false beliefs about ourselves or the world. You can test this conclusion yourself by asking: After honest and full consideration, is there anything you actually want more than this? (And why? Pro tip: the answer to that question can’t be to obtain greater personal satisfaction; as then you’d just be proving there is nothing you want more than that!)

2. What about cultural relativism? Is morality merely the random walk of culture, such that no culture’s morality is justifiably “better” than any other? Or is there a better culture you could attain to? Are some cultures even now objectively better than others?

Of course this goes back to goals. What is your metric for “better”? What makes one culture “better” than another? What are we measuring—or rather, what are we asking about? What is it that we actually want to know? And the answer, with respect to morality, is what we just found: Which cultures’ moralities increase everyone’s access to personal satisfaction (with themselves and their lives), rather than thwart or decrease it? And can we construct an even better morality by that measure than even any culture has yet produced?

There are objectively true answers to these questions, because this metric is empirically observable and measurable independently of what you believe the answer will be. And as there is really nothing anyone wants more, no other metric matters—any other metric, we will by definition care less about than the thing we want most instead. So if morality is such a technology, of “how to best realize our most satisfying lives and our most satisfying selves without depending on false beliefs about ourselves or the world,” then there are objectively true and false conclusions about what’s moral.

Just as with any other behavioral technology, like how to perform a successful surgery or build a sturdy bridge: once you have the goal established, objective facts determine what best achieves it or undermines it. Moral systems that hinder or destroy people’s life satisfaction or that require sustaining false beliefs are failure modes; no one who really thought about it would want that outcome. So anyone acting otherwise is acting contrary to their own interests.

Which moral systems do that, or which facilitate life satisfaction instead, is an objective fact of the world, of human biology and psychology. In just the same way that Americans who think spending vast sums of money to stop immigrants and refugees will solve their problems—and thus back candidates who support that but block or remove all the social services that would actually solve those same Americans’ problems—are acting contrary to their own interests; because they have false beliefs about what will best serve their interests. If they were aware of this, they’d prefer to have true beliefs about what will best serve their interests and act accordingly.

3. What does it mean for morality to be “objective”? Or is morality all subjective? What does that even mean? What’s the difference?

Moral feelings are, like all feelings, subjective, but their existence is an objective fact. What we want out of life is felt subjectively, but is an objective fact about humans, about us. That all humans want that above all else is an objective fact about humans—no amount of disbelieving it can make it not true. So, yes, morality is objective.

Not only is what we all most want an objective fact, what best achieves that is an objective fact. For example, if you think pursuing excesses of wealth will lead to the most satisfying life, empirical evidence demonstrates your belief is false; we know that making yourself into the kind of person you like rather than loathe, and finding a life that satisfies you regardless of income (once it meets all your basic needs), is more effective. These are objectively true facts of the world. And thus so is what we should do about it.

4. How could we resolve moral disagreement? Which means disagreement among individuals, and also between cultures and subcultures. When we disagree on what’s moral, how can we find out who’s right?

Of course we must first seek community agreement that the answer to this question must be based on true beliefs, and that morals must follow logically rather than illogically from true beliefs. Societies that won’t agree even on that tend toward collectively miserable conditions; those of us who agree should thus exit and repel such bad communities. Progress toward real knowledge about anything, morality or otherwise, is only possible in a community that agrees only justified true beliefs are knowledge.

Once we have a community that agrees the true morality can only be what derives rationally from true beliefs, disagreement is resolved the same way as in any other science: evidence, and logical demonstration from evidence, will tell us what is good or bad. Which means, what actually will tend toward everyone’s satisfaction or dissatisfaction. This is how moral progress has been made in the past: persons who see that a moral claim (like, that slavery is proper) is based on false beliefs or does not logically follow from any true beliefs, then communicate this discovery. That causes more people to see the same thing. They then collectively work to spread that causal effect to yet more people, or to oppose the dominance of people who resist it (resisting facts and logic).

As an empirical fact we know younger generations are less set in their beliefs, and thus more malleable and more open to change. They are less invested in false beliefs, and thus more able to abandon them. New generations afterward then increasingly grow up being programmed with the new moral understanding, so that it then becomes the norm. This is why moral progress is slow. It takes several generations to propagate through an entire society.

For instance, once advocacy for the morality of being gay spread widely enough, more people spoke openly of it and were more openly gay; younger generations then grew up seeing there was nothing wrong with gay people, that all the beliefs sustaining their oppression were false; and thus they rejected those beliefs and adopted moral conclusions in line with the truth. The generations after them are now being taught this new moral knowledge as their baseline standard. This is objectively measurable progress—from false beliefs to true, first about the world, then about morality. For moral truth is a direct consequence of truths about the world.

5. Are people free to choose their morality? In one sense the answer is no, in that people are caused to believe what they do by what culture they are programmed by, and how their brains are built, and other happenstance facts of what experiences they encounter, and ideas they happen to hear, and so on. But in another sense the answer is yes, for we see it happening all the time: moral progress has occurred precisely because people can jailbreak their own cultural and biological programming, hack their own software, and change it.

Humans are information processors capable of analyzing, criticizing, and making decisions about what to believe or how to behave; they are limited by their causal inputs, but they are not random automatons. They think. They therefore can make choices, and thus change. Still, even that requires the right causal circumstances. But all that means is that we need to encourage and spread those causal circumstances. People don’t become moral but by being taught and educated and given the skills they need to discover the truth and encouraged to use them. But they aren’t just servers whose code we can pop open and rewrite; people must analyze and judge the information you give them and thus decide to rewrite their own code.

The causal features of culture we must encourage and sustain, precisely because they make that more and more possible, include (but are not limited to):

  • Social endorsement of criticism and open communication, i.e. freedom of thought and speech.
  • Social endorsement of empiricism, reason, and critical thinking as virtues necessary to a respectable person and a safe and productive society.
  • Social disparagement of anti-rational memes, i.e. beliefs and ideas whose function is to stop or dissuade free thought and speech, open criticism, empiricism, or critical thinking.
  • Enacting these endorsements, e.g. social investment in teaching empiricism, reason, and critical thinking skills universally, and in exposing and denouncing anti-rational memes.

In such an environment, the human ability to change their mind and align beliefs and morals with the truth is increased, and thus moral progress is accelerated. Yes, it always does depend on convincing people to choose a different way of living or thinking. “Convincing” is code for “causing.” Not by coercion (or anti-rational memes), but by persuasion aimed at activating their own internal machinery for evaluating ideas, by appealing to rational thought.

This is different from just seizing people and altering their neurology or DNA without their consent (or even with it), for example. Praise and blame causally change the world; therefore they are never mechanisms we should or even can do away with, as DiCarlo incorrectly argues. And again, appealing to the insane changes nothing; the insane are not normal cases, because the insane by definition cannot reason. The sane can. We therefore must rely on their abilities of reason. We cannot treat everyone as insane and expect to have a functional society. And history shows: people can reason their way into a new and better morality. Moral progress would never have occurred if they didn’t.

6. Can people be persuaded to change their morals? Or are they inexorably programmed by their upbringing and biases?

The short answer is: yes to question one; no to question two. Yes, there is programming and bias that blocks many from reasoning well and realizing the truth, locking people in obsolete patterns of belief (especially when they are full of anti-rational memes), thus slowing societal progress to a scale of generations or centuries. But there is always a substantial percentage of people that this doesn’t suffice to suppress; and by winning that percentage generation after generation, the old ideas gradually become displaced. Historical evidence shows all moral progress proceeds this way.

People can be persuaded to change their morals by their own autonomous reasoning (self-realization, producing the first movers for change). People can be persuaded to change their morals by public or peer-to-peer persuasion (caused by hearing new ideas and criticism of old ideas and evaluating them rationally, producing the first adopters of new moralities). And people can then be persuaded to adopt new morals by the same cultural programming that installed false moralities in their predecessors (being raised among accepters of the new ideas, producing regular adopters of the new morality).

7. So, is there moral progress? And does it require God?

Yes to question one. No to question two. That there are conclusions about best behavior (“morality”) that improve everyone’s access to being more satisfied with who they are and the life available to them is a fact. That improvement along this metric has happened is a fact. Neither involves, implies, or requires any god to exist or be involved in any way. There is likewise no evidence of any God’s involvement in any of the moral progress we have made. To the contrary, the evidence matches his complete disinvolvement.

The excruciatingly slow pace of that progress actually proves no god has been involved. A god would be better at constructing our brains to more easily discover and admit sound moral reasoning, and a god would be better at communicating and persuading, and thus at generating the necessary cultural conditions for far more rapid adoption of moral advances. The absence of these things is thus evidence God does not exist!

8. Who defines moral progress? By what metric? How would they justify themselves to us? Why would we agree with them?

We’ve of course already answered this from question one: the metric is what behaviors, if people followed them, would realize for themselves the most satisfying lives available to them. And this is defined by the very thing we are asking: how we should behave. Once we discover what it is we want above all else, our metric is established. And it is thus established by reality. Not by any authority.

To do even better at discovering true moral facts, we have to study individuals to discover what leads them to respect rather than loathe themselves (when they are reaching only logical conclusions from true beliefs), in order to determine what will actually help people achieve greater satisfaction with who they are and the lives they are living. We have to study what behaviors statistically produce the best outcomes for every individual by that metric.

For moral reasoning in every individual, we must ask: What happens when we replace false beliefs with true beliefs? And then what happens when we replace fallacious inferences with logically valid ones? The result is that evidence and logic trump all authorities: not a who, any more than science is governed by a who, but a collective of critical empiricists seeking consensus, and presenting the evidence and logic so everyone can, should they wish, independently verify the conclusion follows.

9. What moral progress have we made by that metric? And why can we say that is, in fact, progress? What justifies calling some things progress, and other things regress?

At the event I listed four general areas of real, verifiable moral progress that has been made in society, spreading across the earth:

  • Equality (e.g. ending the subordination of women; denouncing the role of social class in determining rights; etc.)
  • Autonomy (e.g. ending slavery; promoting liberty; developing doctrines of consent in sexuality, medicine, etc.)
  • Empiricism (valuing evidence over authorities)
  • Acceptance (ending bigotries, e.g. homophobia, racism; increasing tolerance for alternative cultures and individuality; etc.)

We can present empirical evidence that each of these has increased access to personal and life satisfaction: certainly increasing the number of members of society who can access it, but also increasing ease of access for everyone else, by reducing self-defeating attitudes and behaviors. Opposition to acceptance, equality, and autonomy has societal and personal costs, now done away with. The effect is extensively measurable (see Pinker’s Better Angels of Our Nature and Shermer’s The Moral Arc). Meanwhile opposition to empiricism has societal and personal costs that hardly need explication.

Moreover, each of these four arcs of moral advance was the inevitable outcome of advances in factual knowledge of people and the world. As false beliefs were replaced with true ones, these were the conclusions about morality that arose as a result of switching out the premises.

In addition, moral progress still advances along the same lines of previously empirically established moral truths, namely the discovery thousands of years ago of the supremacy of the values of compassion, honesty, and reasonableness (which latter includes such subordinate values as justice, fairness, cooperation, and rational compromise). Indeed, these underlying values drive the newer four arcs of moral advance.

10. Can we predict future progress? If we can identify past progress up to now, can we predict future progress? Can we predict where it’s going?

In large part no, because we don’t know yet what beliefs we have that are false, the correcting of which will lead to different conclusions about what’s moral. Just as with all other sciences. If we already knew what we were wrong about, we wouldn’t be wrong about it anymore. But in small part yes, just as in some sciences we can speculate on what more likely will turn out to be true or false in future, so we can in moral science.

In past movements toward progress, we see a small number of people communicating what they notice is false and thus should change; and also others who make false claims about this. We can tell the difference, though, by noticing which ones are basing their proposals on reliable claims to fact and sound reasoning. We can likewise see the same now, and so the parallel holds: a small number of persons claiming our morals should change; they disagree on what the changes will be, so even if some are right, some must be wrong; but some are using evidence and reason better.

Of course there may be true moral conclusions that should replace current beliefs that none of these competing moral changers have yet perceived. But of those currently fighting for change, who are thus at least proposing hypotheses for changing out our morals, we can spy the difference by noticing which ones are basing their claims on reliable claims to fact and sound reasoning. Which also means, if you might notice, that our ability to predict future moral progress is precisely what causes that moral progress, by increasing the pool of advocates from first movers to first adopters, and thence to parents and educators, and thence into future societal norms.

For example, when we analyze the American gun rights debate, we find that those promoting the morality of widespread, unregulated ownership of assault rifles are basing their position on false beliefs, whereas those advocating the immorality of that appear so far to have a conclusion that follows logically from true facts (even after we eliminate all claims they make that are false). It does not follow that all (or any properly regulated forms of) private ownership of assault rifles is immoral. But the current regime of unregulated and even promoted dissemination—even the underlying “guns are manly and cool” culture—is very probably profoundly immoral, as being supremely irresponsible and dangerous.

Other examples abound. Personally, I think I can predict future developments toward ending the moral assumption of monogamy, toward increasing acceptance of radical honesty, toward better treatment of animals in industry (but not leading to veganism or even vegetarianism as moral norms), and toward more respect for autonomy and acceptance and equality in the sex industry, just as we are now seeing already starting to happen in the recreational drug industry (just as already happened for alcohol). These ideas are currently only at the first mover or first adopter stage. But I anticipate within a few generations they will be cultural norms. And will represent moral advances by one or more of the four moral arcs I identified above, or their previously established underlying values. And we will rightly look back on them as such.
