What worldview is better for the world? That’s a question I debated with Joel McDurmon of American Vision just the other day in Houston. I’ll announce the video when it goes live. But one of the matters that came up centrally in that debate was moral theory. Which worldview will “cause people to behave,” as one might put it? Here I’ll explore that question, and in the process outline all my past work in moral theory, which you can then dive into more deeply wherever you need.

First: The Meta-Problem

If you frame the question as “Which worldview will better get people to behave,” of course, one might then say it doesn’t even matter if the worldview is true. This was Plato’s idea, spelled out and argued in his Republic: sell the public a false worldview that will get them to behave. The perfect enactment of the entire blueprint he laid out for how to do this was the Vatican. And for thousands of years now, we’ve all seen how that worked out.

In reality—as in, out here, where real things happen and don’t conform to our fantasies of how we wish or just “in our hearts” know things will happen—Plato’s project is self-defeating. It leads to misery and tyranny. You cannot compel people to believe false things; and you can’t trick them into believing them without eventually resorting to compulsion. Because you must suppress—which means, terrorize or kill—anyone who starts noticing what’s up. Which eventually becomes nearly everyone. The resulting system is a nightmare, one that will totally fail to “get people to behave.” Because it inevitably compels all in power…to stop behaving. Simply to try to force everyone else to behave.

That’s the Catch-22 that guarantees any such plan will always fail. The last thing it will ever accomplish is getting everyone to behave. Or producing any society conducive to human satisfaction and fulfillment, either, which is the only end that “getting people to behave” served any purpose for in the first place.

Worse, any system of false beliefs is doomed also to have many side effects that are damaging or even ruinous of human satisfaction, bringing about unexamined or unexpected harms and failures. Because it is impossible to design any epistemology that only conveniently ever discovers harmless or helpful false beliefs. Which means, while you are deploying the epistemology you need to get people to believe what you suppose to be harmless or helpful false beliefs, you and they will also be accumulating with that same epistemology many other false beliefs, which won’t just conveniently be harmless or helpful. “Ideological pollution,” as it were. You need a cleaner source of ideas. Otherwise you just make things worse and worse. Whereas any epistemology that will protect you from harmful false beliefs, will inevitably expose even the helpful and harmless ones as false (a fact I more thoroughly explore in What’s the Harm).

And all that is on top of an even more fundamental problem: what do you even mean by “getting people to behave” in the first place? Deciding what behaviors are actually better for human happiness, rather than ruinous of it, is a doomed project if you don’t do it based on evidence and reason. Because otherwise, you won’t end up with the best behavioral program, but one that sucks to some degree. Because you won’t be choosing based on what truly does conduce to that end, but based on some other, uninformed misconception of it. Which won’t by random chance just happen to be right. You will thus be defending a bad system.

But here’s a Catch-22 again: any process you engage that will reliably discover the behavioral system that actually does maximize everyone’s personal fulfillment and satisfaction with life, will get that same result for anyone else. You thus no longer need any false belief system. You can just promote the true one. And give everyone the skills needed to verify for themselves that it’s true. No oppression. No bad epistemologies. No damaging side effects.

Thus, the answer to “which worldview is best?” is always “the one that’s true.” So you can’t bypass the question of which worldview is true, with a misplaced hope in thinking you can find and promote a better worldview that’s false. The latter can never actually be better in practice. In the real world, it will always make things worse.

“But it won’t solve every problem” is not a valid objection to the truth, either. The truth will leave us with unresolvable problems, because we, and the world, are imperfect, and in practice governed by no perfect being. There is no “perfect solution,” because there is no perfection. All we can do is minimize the defects of the universe. We can never remove them all. Not even the most beautiful false worldview can do that. All it can do is try to hide them. And it will fail at even that.

Second: Getting God Out of It

As I concluded in my section on the Moral Argument in Bayesian Counter-Apologetics, “the evidence of human morality (its starting abysmal and being slowly improved by humans over thousands of years in the direction that would make their societies better for them) is evidence against God, not evidence for God.”

I then noted there that humans have three reasons to develop and adhere to improved moral systems: (1) their desire to live in safer, more cooperative societies; (2) their need to live in and thus maintain safer, more cooperative societies; and (3) the fact that, given the psychology of sentient, social animals, they will live more satisfied and fulfilled lives the more they become, in their actions and character, the kind of person they admire rather than the kind of person they loathe. Only by self-delusion and false belief can someone continue to be immoral and not despise themselves as the hollow and cowardly villain they’ve become.

That’s why we’ve abandoned nearly everything gods supposedly told us, discovering that in fact it’s immoral, because it ruins human happiness and conduces to no benefit we really want: from slavery (Leviticus 25:44-46) and the subordination of women (1 Timothy 2:11-15), even their legalized rape (Deuteronomy 21:10-12), to the use of murder to suppress freedom of speech and religion (Leviticus 24:11-16 and Deuteronomy 12:1-13:16), or killing people for violating primitive taboos like picking up sticks on Saturday (Numbers 15:32-36) or having sex (e.g. Deuteronomy 22:13-30 and Leviticus 20:13), to shunning people who eat bacon and shrimp tacos (Leviticus 11) or cheeseburgers or lamb chowder (Exodus 23:19). See The Will of God for some Old Testament examples; see The Real War on Christmas for some New Testament examples; and see The Skeptic’s Annotated Bible for more.

In fact, the United States’ Bill of Rights abolished the first three of the Ten Commandments, condemning them by literally outlawing their enforcement; and subsequent legislation has condemned and abolished four more as violating human rights (criminalized adultery, thought crime, compelled Sabbath observance, and compelled dues to one’s parents), leaving only the three principles all religions and cultures had empirically discovered were needed for us to enjoy the benefits of a good society long before the Bible was even written: honesty and respect for life and property. (On all this, see my article That Christian Nation Nonsense.)

Always, we’ve realized that what makes us miserable, what makes society dysfunctional, we should no longer do. We should condemn it, as bad for everyone else to endure, and abandon it, as bad even for ourselves to undertake. This has always been a human, empirical discovery, never revealed from on high (nor even claimed to be, until conveniently long after the discovery was already empirically made). And we’ve been continually looking at the evidence of what actually happens to us as persons, and to society, when we push or abide by certain principles, and then deciding what to abandon and adopt as principles according to the real facts of the world, the actual consequences we observe. Thus we produce continual progress as we abandon false beliefs and adopt what the evidence shows us works. No gods needed. No gods even helping.

What “Is” Morality?

In all cultures, today and throughout history, “morals” have always meant “what we ought to do above all else.” In other words, imperative statements that supersede all other imperatives. To date, despite much assertion to the contrary, we have only discovered one kind of imperative statement capable of having relevant truth conditions, and hence the only kind of “ought” statement that can be reduced to a relevant “is” statement: the hypothetical imperative. “If you want X, you ought to Y.” These statements are routinely empirically confirmed by science, e.g. we can demonstrate as empirically true what you ought to do to save a patient, build a bridge, grow an edible crop, etc.

The “is” form of these statements is something to the effect of “when you want X, know Y best achieves it, and seek the best means to achieve your goals, you will do Y.” That is basically the information we are claiming to convey when we tell you that you ought to do something. Even if our only implied motive is “we’ll beat you if you don’t comply,” we are still just stating a fact, a simple “is”: that if you don’t do Y we’ll beat you; and if you reason soundly about it, you will not want to get beaten.
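We might even formalize that reduction (a rough sketch, in notation of my own devising, not any canonical system):

```latex
% A sketch of the "is" underlying a hypothetical imperative (notation mine).
% W(a,X): agent a wants X above the available alternatives
% K(a,Y,X): agent a knows that doing Y best achieves X
% R(a): agent a reasons soundly about the means to their ends
% D(a,Y): agent a will do Y
\[
\big(\, W(a,X) \;\land\; K(a,Y,X) \;\land\; R(a) \,\big) \;\rightarrow\; D(a,Y)
\]
```

Every conjunct on the left is an empirical fact about an agent and the world. Which is why the whole conditional is testable.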

But usually moral propositions are not meant to be appeals to oppressive force anymore. Because we know that doesn’t work; it always leads to everyone’s misery, as I just noted at the start of this article. Though Christians often do end up defaulting to that mode (“Do X or burn in hell; and if you reason soundly about it, you will not want to burn in hell”), the smarter ones do quickly become ashamed of that, realizing how bankrupt and repugnant it is. So they try to deny that’s what they mean, attempting to come up with something else. But no matter what they come up with, it’s always the same basic thing: “Doing Y gets you X; and if you reason soundly about it, you will realize that you really do want X.”

Whether X is heaven, or the support and comfort of God, or a contented conscience, or the benefits of a good society, or whatever. Doesn’t matter. There’s always an X. And X is always something. Because it always has to be—for any statement about what we ought to do ever to be true (and just not some disguised expression of what we merely want people to do, although even that reduces an ought to an is: the mere “is” of how we wish people would behave). But that means, moral statements are always statements of fact, and thus testable as such. They can be verified or falsified.

But moral imperatives are by definition imperatives that supersede all others. Which means, moral imperatives are only ever true, if there is no other imperative that supersedes them (as then, the imperative that supersedes them is actually what’s moral). But it is logically necessarily the case that, in any given situation, some imperative must always supersede all others. In other words, there is always something you “ought most do.” Which means moral facts always necessarily exist. And would exist, in some form, in every possible world, whether any gods are in that world or not. It’s literally logically impossible to build a world with people in it, that doesn’t have true moral facts applicable to them.
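The logical point can be put more formally (a minimal sketch, on the assumption that the imperatives applicable in any given situation are finite and rankable by which supersedes which):

```latex
% I: the finite, nonempty set of imperatives applicable in a situation.
% i \preceq j: "j supersedes or at least matches i" (assumed a total preorder).
\[
I \neq \emptyset \ \text{finite},\quad \preceq\ \text{a total preorder on } I
\;\Longrightarrow\;
\exists\, m \in I \;\; \forall\, i \in I :\; i \preceq m
\]
% That unsuperseded m (or tie-set of such) is, by definition,
% what you "ought most do."
```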

Attempts have been made to deny or avoid this for centuries, because it makes people uncomfortable to know that the only reason any moral facts can ever be true, is that following those directives will maximize the chances of our own personal satisfaction—with ourselves and our lives and the world that we thus, by our own behavior, help create. That sounds selfish. But that confuses “selfishness” (the absence of generosity and concern for others) with “self-interest” (which in fact warrants generosity and concern for others). In fact all moral systems are based on self-interest. Literally, all of them. Including every Christian moral system ever conceived. It’s always only ever true, because in some way adhering to the designated commandments will ultimately make things turn out better for us, in some way or other. Even if not right away, or not as obviously as would readily convince. But that’s always what the argument is. “Look, you should do X, because things will likely go better for you in the long run if you do, trust me.”

Even Kant’s attempt to dodge this consequence by inventing what he called “categorical imperatives,” imperatives that are somehow always true “categorically,” regardless of human desires or outcomes, failed. Because he could produce no reason to believe any of his categorical imperatives were true—as in, what anyone actually ought to do (rather than what they mistakenly think they should do or what he merely wanted them to do). Except a hypothetical imperative he snuck in, about what will make people feel better about themselves, about what sort of person they become by so behaving.

Which means Kant invented no categorical imperative at all. All his imperatives were simply another variety of hypothetical imperative, just one focused on the internal satisfaction of the acting party, rather than disconnected wishes about bettering the effects of their behavior on others. Which really just reduced his whole ethics to what Aristotle had already empirically discovered thousands of years before: we will be more satisfied with ourselves, and hence our lives, if we live a certain way. And as Aristotle correctly observed, we will only reliably live that way, if we cultivate psychological habits—virtues of character—that regularly cause us to. (If you doubt any of this, see my article All Your Moral Theories Are the Same.)

Which has since been verified by science…

Science Gets Right, What Bibles Get Wrong

Whether it’s the nature of human beings, physically and mentally, or the origin and physics of the world and its contents, or the reality of magic and ghosts, or what makes governments or communities function better, or what cures or alleviates illness, or pretty much anything else, science has consistently corrected the gross and often harmful errors of Scripture. What Scripture said has turned out to be false, a primitive and ignorant superstition. We found the evidence shows nearly everything is different than the Scriptures claimed. We should stick with the evidence. Because as evidence itself shows, it always goes better for us when we do.

Sciences already study morality descriptively, of course. For example, anthropology, sociology, and history (which, yes, is also a science, albeit often with much worse data and thus more ambiguous results: see Proving History, pp. 45-49) all study as empirical facts the many different moral systems cultures have created and believe true, and how they’ve changed those systems over time and why. But only descriptively, as a matter of plain fact, without verifying or testing if any of those systems are in any sense true. But science could do more than that, and in some cases already is. For example, psychology, sociology, economics, and political science can also investigate which moral systems actually are true. As in, which behaviors, when adopted, actually maximize the odds of individual life satisfaction and societal functionality.

Notice this does not mean “what moral inclinations we evolved to have.” That we can also study, and have studied. But that is just another descriptive science. That we evolved to behave a certain way in no way entails that’s the way we ought to behave. Thus neuroscience, genetics, and evolutionary psychology and biology, when they study human morality, are all doing descriptive, not prescriptive science. A prescriptive science of morality requires determining, as a matter of fact, what people want above all else (when they reason without logical fallacy from only true facts, and not false beliefs); and, as a matter of fact, what actions or habits have the best chance of achieving that. The findings of this prescriptive science are not likely to be identical to the findings of the descriptive science. Because evolution is unconcerned with human happiness, and not intelligent. It therefore produces a lot of “bad code.”

Thus, before we invented better ways of doing things, when we acted simply as we evolved to be, as savages and ignorant primitives, we invented a Biblical God to tell us all sorts of things were right and good, like slavery, that we later realized were not. Since then we have empirically discovered that we are not happy endorsing or allowing slavery or women’s inequality, that we need democracy and human rights to reduce our personal risk of conflict and misery, and that things go better for everyone when we cultivate respect for personal autonomy and individualism, and pursue the minimization of harm, all to generate good will and contented neighbors. That’s all an empirical fact. And it remains a fact whether gods exist or not.

The role of science in determining moral truth becomes obvious when you start thinking through how we would answer the most fundamental questions in moral theory. Such as, “Why be moral?” This is a question in psychology, reducing to cognitive science and neurophysics. “Why be ‘moral’ as in following behavioral system X, rather than being ‘moral’ as in following behavioral system Y?” Likewise. Both questions reduce to what humans want most out of life (a fact of psychology), and what most likely obtains it (a fact of how the world and societies work). Otherwise, if your moral theory does not come with a true answer to the question “Why obey this particular moral system?” it cannot claim to be “true” in any relevant sense. And yet this is a question of relative psychological motivation. Which is a factual question of human psychology.

Hypothetical imperatives have always been a proper object of scientific inquiry. We test and verify them experimentally and observationally in medicine, engineering, agriculture, and every other field. Moral imperatives are not relevantly different. What things people want most out of life when reasoning soundly from true beliefs is a physical fact of psychology that only science can reliably determine. And what behaviors will most likely get people that outcome for themselves is a physical fact of psychological and social systems that again only science can reliably determine. We should be deploying our best scientific methods on answering these very questions. Meanwhile, we can make do with the evidence so far accumulated.

The Neuroscience of Morality

Brain science has determined that we have several conflicting parts of the brain dedicated to moral reasoning, including parts dedicated to resolving those conflicts. It’s a hodgepodge of ad hoc systems that perform well below perfect reliability, demonstrating that we were definitely not intelligently designed. As social animals—as we can tell from observing other social animals all the way up the ladder of cognitive system development—natural selection has grabbed onto various randomly arrived-at ways of improving our prosociality, so that we can work together and rely on each other, and are motivated to do so by the pleasure and contentment it brings us. These innate evaluators and drives are imperfect and a mess, but better than being without them altogether.

There are two overall systems, which we know the location and performance of because we can deactivate them with transcranial magnetic stimulation, and they can be lost to injury, surgery, or disease.

  • One of those main two systems judges based on the motives of the acting agent (ignoring consequences). This brain center asks questions like “Did you have a good excuse?” Which was surely selected for because it helps us predict future behavior, without erroneously relying solely on outcomes.
  • Another part of the brain judges based simply on the consequences of an action (ignoring motives). This brain center asks questions like “Did you follow the rules we agreed to?” And “Did that turn out well?” And this is also helpful to maintain equity and functionality in the social system, which are essential for it to function well for everyone in it.

Because sexual gene mixing ensures wide variations across any population, some brains more strongly generate results from one system than the other, so some people are more inclined to care about motives, while others are more inclined to care only about what the outcome was. But neither alone is correct. Only a synthesis of both can be—as we can confirm by observing which concerns are necessary, and what balance is needed between them, for social systems to function well—as in, to serve well the needs of every member of the system. As even Kant would put it, “What happens to the whole social system when you universalize the rule you just enacted? Is that really the result you want?”

Meanwhile, cognitive self-awareness, which accidentally evolved for many other advantages it conveys, inevitably causes us to see ourselves as we see others. So our brain centers that judge the behavior of others, also judge ourselves by those very same measures. Which is why people are so strongly inclined to rationalize their own bad behavior, even with false beliefs and lies they tell themselves, because they cannot avoid judging their own intentions, and their feelings about themselves unavoidably derive from how they see themselves, what sort of person they’ve become. Similarly, we care about the consequences of our own actions for exactly the same reason we evolved to care about the consequences of others’ actions: regardless of who caused the consequences we are looking at, they have the same effect on the social system we depend on for our welfare and contentment.

That’s how our brains evolved to “see” moral facts, which are really just social facts (facts about how social systems composed of cognitively self-aware agents work, and don’t work), and how we evolved to have the mechanisms to care about those facts. Of course, whether it’s moral facts or material facts, our biologically innate faculties are poorly designed, hence generate error and superstition as often as correct beliefs; our invented tools of logic, math, and science greatly improved on our innate faculties and thus can resolve those errors and discover even more facts (material or moral or any other), with effort and evidence over time.

I’ve already discussed this fact elsewhere (see Why Plantinga’s Tiger Is Pseudoscience). But as that analogy shows, humans also evolved a number of different reasoning abilities in our brain. All are defective and flawed. They were selected for only because they were blindly stumbled upon, and are better than their absence, generating differential reproductive success. But humans eventually used these defective abilities to invent “software patches” that fix their defects, whenever this new “software” is installed and run, via culture and education. Especially formal logic and mathematics, critical thinking, and the scientific method. All are counter-intuitive—evincing the fact that we did not evolve to naturally employ them. But all are more successful in determining the truth than our evolved reasoning abilities. As indeed we have abundantly confirmed in observation.

Therefore, we should not obey our evolved abilities, but our discovered improvements on them—in morality every bit as much as in reasoning generally. Because our evolved moral reasoning is likewise flawed and merely better for differential reproductive success; it was not selected for improving our life satisfaction, nor selected intelligently at all. So if humans want to be satisfied with living (and they must, as otherwise living ceases to have any desirable point), they also need improved moral reasoning. Just as they needed all that other improved reasoning. They need the tested and continually improved technologies. They can’t just rest on the flawed biological systems they were given. Otherwise all the self-defeating failures of those badly-designed ad hoc systems will accumulate.

This is why morality has progressed over time, rather than being perfected the moment we evolved (or, as theists would say, were imbued with) any moral faculties at all. Just as happened with our ability to reason and discover the facts of the universe in general. Evolution is not a reliable guide—to the facts of morality any more than the facts of physics—because it is not intelligently guided. But it points in correct directions, it gets part of the way there, because it is selecting for what works, among the random things tried. We now can see where it fell short, and fix the bugs, glitches, and defects in our cognitive systems, using cultural technology as a tool. Our brains evolved some useful mechanisms for this. But ultimately, reason and learning have to carry us the rest of the way.

This is how we know God had nothing to do with human morality. He did not build it effectively into our brains. And he did not teach us anything correct about how to improve on the faulty systems in our brains to discover morality reliably. We had to figure that all out on our own, and deploy it on our own, taking thousands of years to slowly fix all our mistakes and errors through trial and observation.

The Psychology of Morality

We know the most about moral truth, and in particular why people actually care about being moral, from the science of psychology, in particular child development studies and life satisfaction studies (“happiness” research). In the latter case, strong or substantive correlations exist between positive personality traits, which overlap common moral virtues, and happiness (see, for example, Correlation of Personality Traits with Happiness among University Students in the Journal of Clinical and Diagnostic Research; my bibliography on the correlation between happiness and moral virtues in The End of Christianity, p. 425 n. 31; and related works in the bibliography below).

Meanwhile, what we’ve learned, and confirmed with abundant scientific facts, is that moral behavior in children starts as a fear-based conforming to authority. It is at that most childish stage driven simply by a desire to avoid being punished. If child development is allowed to proceed effectively (and isn’t thwarted by such things as mental disease or toxic parenting), this fear-based reasoning gradually develops into a sense of empathy, which begins to self-motivate. We start to care about the opinions of others more than about merely whether we will be punished. This then develops into agent-directed self-realization: we learn to care most about the sort of person we want to be. In other words, no longer the opinions of others, but our opinion of ourselves matters most. You then start being moral because you like being a moral person.

So when we ask the question “Why be moral?” science has already answered that question. To paraphrase Roger Bergman (see the closing bibliography), when we develop into fully realized, healthy adults, we all actually answer the question “Why be moral?” the same way: “Because I can do no other and remain (or become) the person I want to be.” In other words, we must be moral, to avoid the self-loathing (or delusional avoidance of it) that entails our personal dissatisfaction. And when our sense of ourselves, and of what our actions cause in the world and its significance, is freed of ignorance or false belief, when it is constructed of a true and accurate understanding of what actually is the case, what we conclude in that state is the best behavior, will actually in fact be the best behavior.

This is because we need that behavior in the social system, to receive the benefits we need from that social system. And when we see in ourselves what we see in others, what we see in ourselves will be what is, in actual material fact, either conducive or destructive of human happiness—whether directly, or by propagating or preventing dysfunctions in society. This is how human psychology not only developed to assist us in building cooperative societies to benefit from, but how it necessarily must develop to have that result. No social system will reliably work to anyone’s benefit without such psychological systems in an individual’s brain. And no individual without those systems will ever find satisfaction in a social system. Or, really, anywhere.

Note that we don’t need any god to exist, for this fact to be true. It’s always true. In every possible universe. Once you have a self-aware animal dependent on social interaction for its welfare, statistically, that animal will always benefit from this kind of psychology, and, statistically, will always suffer to the degree that it lacks it.

Game Theory, For Real

We have even confirmed all this mathematically.

Game Theory was developed to mathematically model all possible arrangements of social interaction, allowing us to test the outcomes of different strategies when pitted against any others. One should not let the name mislead; that Game Theory describes all human social interaction does not mean human social interaction is “merely a game.” It means, rather, that no matter what metric we choose to judge by, interactions either have no consequence, or help or hurt the agent deciding what to do. And this can be modeled in the same fashion as a game, with a score, and winners or losers (and no zero sum is entailed by this—as anyone who has played cooperative games knows, some games can make everyone a winner).

When social systems are modeled this way, one particular strategy was found in the 1970s to be the most successful against all competitors. It was called Tit for Tat. The basic strategy it entailed was “Default to Cooperate” (in other words, always start out being kind, and revert to being kind after any alternative action), but always “Punish Defectors” (in other words, be “vengeful,” in the sense of punishing anyone who takes advantage of your kindness to harm you). Then revert to kindness when the other does.

As Wikipedia’s editors have so far put it:

In the case of conflict resolution, the tit-for-tat strategy is effective for several reasons: the technique is recognized as clear, nice, provocable, and forgiving. Firstly, It is a clear and recognizable strategy. Those using it quickly recognize its contingencies and adjust their behavior accordingly. Moreover, it is considered to be nice as it begins with cooperation and only defects in following [a] competitive move. The strategy is also provocable because it provides immediate retaliation for those who compete. Finally, it is forgiving as it immediately produces cooperation should the competitor make a cooperative move.

Iterated computer models have measured the long term accumulated gains and losses for agents who follow all different strategies, even extremely elaborate ones, when faced with agents following any other strategies. And Tit for Tat always produced the highest probability of a good outcome for agents adhering to that strategy. No other strategy could get better odds. And this finding must necessarily hold for all real-world systems that match the model. Because the gains and losses modeled are an analog to any kind of gain or loss. It doesn’t matter what the actual gain or loss is. So these computer models will necessarily match real world behavioral outcomes, no matter what metric you decide to use for success. And indeed, we’ve tested this repeatedly in observation, and that has proven to be the case.
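To make the mechanics concrete, here is a minimal sketch of such an iterated model in Python (the strategy pool, payoffs, and round count are illustrative choices of mine, not a reconstruction of the original tournaments):

```python
# A minimal iterated prisoner's dilemma tournament (an illustrative sketch,
# not the original tournament code). Standard per-round payoffs:
# both cooperate = 3 each; both defect = 1 each;
# lone defector = 5; exploited cooperator = 0.
from itertools import combinations_with_replacement

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(their_history):
    # Default to Cooperate; then punish defection by mirroring their last move.
    return their_history[-1] if their_history else "C"

def always_defect(their_history):
    return "D"

def always_cooperate(their_history):
    return "C"

def grudger(their_history):
    # Cooperate until the other side defects even once; then never again.
    return "D" if "D" in their_history else "C"

def play(strat_a, strat_b, rounds=200):
    """Accumulate each side's score over repeated interactions."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Round robin: every strategy plays every other (and a twin of itself) once.
strategies = [tit_for_tat, always_defect, always_cooperate, grudger]
totals = {s.__name__: 0 for s in strategies}
for a, b in combinations_with_replacement(strategies, 2):
    score_a, score_b = play(a, b)
    totals[a.__name__] += score_a
    if a is not b:
        totals[b.__name__] += score_b

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, total)
# Here the retaliate-but-default-to-cooperate strategies top the table; exact
# rankings always depend on the population mix, but across the much larger
# historical tournaments Tit for Tat took first place outright.
```

The point is not the exact numbers, which depend on the pool of competitors, but the structural result: over iterated interactions, strategies that default to cooperation while punishing defection accumulate the most, and always-defect strategies fall behind.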

But there’s a twist. This simple Tit for Tat strategy (“Default to Cooperate,” but “Punish Defectors”) has since been found to have flaws. Small tweaks to the strategy have then been proved to eliminate those flaws.

What we’ve found is twofold so far:

  • First, we need to add more forgiveness, but not too much, to forestall “death spirals,” what we would recognize as unending feuds, where punishment causes a returned punishment, which causes a returned punishment, and so on, forever. The interacting agents just keep punishing each other and never learn. (You can probably think of some prominent examples of this playing out in the real world.) To prevent that defect, one must adopt a limited amount of proactive (as opposed to responsive) forgiveness. In other words, someone at some point has to stop punishing the other one, and just reset back to kindness-mode. This means, rather than always punishing defectors, sometimes we should trust and cooperate instead of retaliating. Sometimes we should meet hostility with kindness.
  • Second, to prevent that feature from being exploited, we need to add some spitefulness, but not too much, to defeat would-be manipulators of forgiveness, by switching back to a never-cooperate strategy with repeated defectors. In other words, at some point, you have to stop forgiving. Once someone burns a neighbor yet again after being met with kindness, you no longer forgive them. Which encourages people not to do that in the first place, thus eliminating an obvious exploit.

The interesting thing here is that this was all demonstrated mathematically, with iterated computer models measuring the statistical outcomes of countless competing strategies of human interaction. And yet it ends up aligning with what humans have empirically discovered actually works, thus verifying why these are the best behaviors to adopt. We therefore know this is an inevitable emergent property of any social system. In any possible universe. No god needed.
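In code, those two tweaks might look something like the following, extending the sketch above (the forgiveness rate and spite threshold are illustrative parameters of mine, not the published optima):

```python
import random

def generous_tit_for_tat(their_history, forgiveness=0.1):
    # Tit for Tat, plus a small chance of proactive forgiveness after a
    # defection; this is what breaks mutual-punishment "death spirals."
    if their_history and their_history[-1] == "D":
        return "C" if random.random() < forgiveness else "D"
    return "C"

def forgiving_but_spiteful(their_history, patience=3):
    # Forgive early defections, but after `patience` total defections stop
    # cooperating for good, closing the exploit that pure forgiveness opens.
    if their_history.count("D") >= patience:
        return "D"
    return generous_tit_for_tat(their_history)
```

Both plug into the same play() function as before, so you can pit them against the earlier strategies and watch the feud-breaking and exploit-closing effects for yourself.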

Morality Is Risk Management

People want ultimate guarantees. But there are none. There is no access to a perfectly just world. This world was not made to be one. And there are no entities around capable of producing one. And this is why no behavior absolutely guarantees the best outcome. All we have is risk management. Moral action will not always lead to the best outcome for you; immoral action sometimes might instead. But the probabilities won’t match. Moral behavior is what gives you the highest probability of a good outcome. Immoral behavior, by definition, is what doesn’t.

It’s like driving while drunk: sure, you may well get home safely, harming neither self nor others; but the risk that you won’t is high. And it’s the risk of that bad outcome you ought to be avoiding. Eventually, if you keep doing it, it’s going to go badly. And statistically, it has a good chance of going badly even on the very first try. That’s why we shouldn’t do it. Not because it guarantees a bad outcome. But because the risk of a bad outcome is higher than is worth the deed. There are alternatives that risk and cost less in the long run.

Or like the early days of vaccines: a vaccine may have had a small probability of causing a bad reaction, but the probability of acquiring the disease without the vaccine was higher. Thus you have to choose between two bad outcomes: a small probability of being hurt, or a higher probability of being hurt. It makes no logical sense to say that because you can get hurt by the vaccine, we should not take the vaccine. Because if we don’t, we will have an even greater chance of being hurt by the disease it defends us against. The argument against taking the vaccine entails an even stronger argument for taking it. So, too, all potentially-exploitable moral action.
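In decision-theoretic terms, the comparison is just one of expected harm. A worked toy example (the numbers are invented purely for illustration, not real vaccine statistics):

```latex
% Illustrative numbers only; not real vaccine statistics.
\[
E[\text{harm} \mid \text{vaccinate}] = p_{\text{reaction}} \times h_{\text{reaction}}
= 0.001 \times 10 = 0.01
\]
\[
E[\text{harm} \mid \text{decline}] = p_{\text{disease}} \times h_{\text{disease}}
= 0.1 \times 100 = 10
\]
```

On those numbers, declining carries a thousand times the expected harm. The mere possibility of a bad reaction never outweighs that differential; only changing the probabilities themselves can.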

Risk management also means we ought to maximize the differential every chance we get. This is the responsibility of society, to create a more just world, precisely because there is no God who did so, or ever will. Hence we should make vaccines safer—so we did. Now, the probability of bad outcomes is trivial, and the outcomes even when bad, are minimal compared to the effects of acquiring the disease. Moreover, we administer vaccines in medical settings and with medical advice that ensures bad reactions are quickly caught and well treated. This exemplifies a basic principle: a moral society should be engineered so that the society has your back when moral action goes badly for you. Just as we do with medical interventions. Comparably for drunk driving: obviously a better social system is one that rewards people who don’t drive drunk by helping them and their vehicles get safely home, thus reducing or eliminating even the incentive to drive drunk in the first place. A principle that can be iterated to every moral decision humans ever have to make.

In other words, though “good actions have a small chance of bad effects” and “bad actions have a small chance of solely good effects,” we can continue to engineer the social system so that these become less and less the case: making it rarer and harder to profit from misbehavior, and rarer and harder to suffer from acting well. We ought to do this, because our own life satisfaction will be easier to obtain in a system built that way. And rationally, there can be nothing we want more than that.

And note what this means…

We have confirmed quite abundantly that all that exists are natural objects and forces—and us. That’s all we have to work with. No superman is coming to save us. We will not live forever. There is no second life. There is no future perfect justice. Our fallible brains and nervous systems are our only source of information and information processing. And only evidence, and what we really want out of life, can determine what is right and wrong for us, both morally and politically. Because our clumsily built social systems, the systems we have to make and maintain on our own initiative and intelligence, and by our own mutual cooperation and consent, are our only means of reliably improving justice and well-being. For ourselves as much as anyone else.

This is why no other worldview can compete with ethical naturalism. Ethical naturalism is simply the only worldview that can make any reliable progress toward increasing everyone’s ability to live a satisfying life, a life worth living, a life more enjoyable and pleasant to live. And life simply has no other desirable point than that.

So, What Then?

To really understand where I’m coming from in moral philosophy, you’d do best to just read up on my whole worldview, which all builds toward this fact, in Sense and Goodness without God. And if you want the highly technical, peer reviewed version of my take on moral theory, you need to read my chapter on it in The End of Christianity.

But if you want to start instead with briefer, simpler outlines, you can follow my Just the Basics thread online; or if you want to dive even deeper into the questions and technicalities, you can follow my Deeper Dive thread online. Both are laid out below in what I think is the best order to proceed in. Following that is a bibliography on the best and latest science and philosophy of moral reasoning (on which all my points above are based).

Moral Theory: Just the Basics

  • My Debate with Ray Comfort: Currently the best, brief survey of what I believe and why; and of how Christianity offers no valid alternative to it, but has only a moral theory based on false beliefs and primitive superstitions instead.
  • Darla the She-Goat: Using a simple parable and colloquial approach, this essay explains why moral facts are not simply what natural evolution left us with, but are facts about what does and doesn’t work in social systems of cognitively self-aware animals, facts “evolution” is blind to and unconcerned with, though it can still stumble around the edges of them insofar as they are useful for survival.
  • Response to Timothy Keller: Here I explain, in response to denials by Christian apologists like Keller, how human rights and moral systems are just technologies humans invented to better their lives, and need for their own personal life satisfaction and fulfillment, simply because of what we are, and how the world works.
  • Objective Moral Facts: A good primer on what it means for moral facts to be “objective” vs. subjective, absolute vs. relative, and so on, and what this all means for conclusions about morality when we admit there is no God.
  • How Can Morals Be Both Invented and True: If morality is just a technology, something we just invented, how can it then be “true”? What does “being true” mean for anything we invent to better obtain some goal? Understanding that is key to understanding what moral facts are.
  • Your Own Moral Reasoning: What you can do with all the above information, and all we’ve learned from philosophy and science so far, to develop the best moral system for yourself, based on evidence and reason, and the discovery of what actually maximizes your own life satisfaction, fulfillment, and contentment.

Moral Theory: Deeper Dive

  • Moral Ontology: This goes into what the material facts of the universe are that moral facts correspond to. If moral facts exist, what are they? What are they made of? In what way do they exist in a purely physical world?
  • Goal Theory Update: This will lead you to the Carrier-McCray debate, where two atheists debated what the underlying basis of moral truth is; it also ties up every technical thread that debate left unresolved by the time the clock ran out.
  • Rosenberg on Naturalism: This goes into how my findings in moral philosophy defeat all more nihilistic versions of atheism, and how the latter are not based on scientific fact or valid logic.
  • All Your Moral Theories Are the Same: Demonstrates how all moral theories ever proposed in philosophy or theology are really the same moral theory, just looked at from a different angle. It’s consequentialism all the way down. It’s hypothetical imperatives all the way up. And there simply is nothing else.
  • The Moral Bankruptcy of Divine Command Theory: Detailed demonstration of the failure of Christianity to develop any coherent or defensible moral philosophy, but only the facade of one, a facade that merely conceals what is in fact just ethical naturalism mixed in with false superstitions.
  • Shermer vs. Pigliucci on Moral Science: Many atheists, like Michael Shermer and Sam Harris and myself, have argued moral philosophy should be transformed into scientific research, because moral facts are empirically discoverable by science. But atheists who argue this often forget to address a key part of the necessary research program. It’s not just the science of consequences we must develop further. But also the science of ultimate motivation, discovering what actually maximizes human life satisfaction (and what doesn’t, or is only falsely believed to).
  • What Exactly Is Objective Moral Truth? This goes into more detail on why discovering moral facts makes sense as a scientific research program, against many common objections, and in particular explaining what it means for anything to be “objectively” true in a scientific, factual sense. I then expanded on this further in my responses on the same point to Babinski and Shook. And even further in response to Born, the critic Sam Harris deemed his best opponent on the point.
  • Are Moral Facts Not Natural Facts? Response to esoteric atheist theories of morality that try to claim moral facts are not facts of the natural world but some other spooky something or others that they clearly are not, in the process exposing how moral philosophy is often choked by semantic gaffes and word games.
  • Plantinga’s Moral Arguments for God: Refutation of several arguments Christian apologist Alvin Plantinga attempted in order to show that moral facts being true is evidence a God exists. In fact, it’s just evidence social animals exist with cognitively evolved self-awareness, giving them the power to think about, and thus work out, what actually makes their life better and more satisfying to live.

External Bibliography

Don’t just take my word for it. Check out what the most science-based experts are already saying and have already demonstrated:

And especially the Moral Psychology series published by MIT Press and edited by ethical naturalist Walter Sinnott-Armstrong:
