I have a new peer-reviewed publication in philosophy: “Objective Moral Facts Exist in All Possible Universes,” Religions 16.8 (2025).
This consolidates my previous peer-reviewed work on metaethics (“Moral Facts Naturally Exist (and Science Could Find Them),” in The End of Christianity, ed. John Loftus, Prometheus 2011) and subsequent blogging and debates into a new peer-reviewed demonstration that moral facts are, in fact, logically necessary properties of rational agents and therefore exist in all possible worlds (even in worlds without rational agents, they exist as the inalienable properties of potential rational agents). God is therefore unnecessary to ground moral facts; moral facts derive fundamentally from the conjunction of rationality and the situational facts of any potentially moral decision, and are therefore empirically discoverable as such.
It’s open access (at least for now, so grab it while you can).
Theists will be outraged and sputter guffaws. But this is serious. Just try to find any actual flaw in the argument. And think about what that means.





Thank you, Richard, for such a thoughtful and well-written piece.
“Just try to find any actual flaw in the argument.”
“The first option is to reject the existence of true moral facts and settle upon some other conception of morality (emotivism, prescriptivism, error theory, etc.). But that approach struggles to make sense of why anyone should behave in any particular way at all (the entire point of discerning moral facts).”
One of the flaws in your argument is that the above statement is false: there is no such struggle to make sense of why anyone “should” behave according to XYZ subjective morality. There is no “should” at base. “Should” is just a common-language shorthand for the emotions of ought: the subjective and highly complex workings of the human brain that are mostly unknown to us and far too complex to express in ordinary language.
On subjective morality, “should” is just the aggregate of the workings of the human brain, which varies widely from individual to individual.
“Premise 2. It is necessarily the case that for any rational agent there will always be true hypothetical imperatives that supersede all other imperatives.”
Your premise 2 is false with respect to objective morality generally. I might have superseding hypothetical imperatives for me, and you might have superseding hypothetical imperatives for you, but in the event we disagree there can be no means to determine which of us is objectively correct with respect to all agents generally.
This problem is related to the Euthyphro dilemma: in the context of a pantheon of gods, if the pious is pious because a god loves it, then what if two different gods love opposing views of what is pious?
Objective morality is logically impossible. Plato proved that some 2300 years ago. Arguments of the form in the Euthyphro dilemma apply to any supposed source of objective morality, not just the gods.
We don’t need objective morality, we have never had objective morality, and objective morality is logically impossible. Why not simply adopt what is clearly in evidence? The clunky process of evolution has given rise to a population of animals with complex brain processes that produce a variety of experiences, including each individual experiencing what is generally reported as a sense of “should” or an emotion of “ought,” with no possibility of ever finding a means to objectively determine who is right and who is wrong in the cases where those emotions conflict between individuals.
Sorry, but it sounds like you did not read the article.
The paper spends pages proving there is. You are not engaging with any of that evidence but just claiming those pages don’t exist in the article. Why?
This is disproved in the article. Why are you ignoring the entire article?
This is disproved in the article. Why are you ignoring the entire article?
There is a whole page in the article on this. Why are you acting like that whole page doesn’t exist?
No, he didn’t.
But a philosopher dead for over two thousand years can’t have refuted things that wouldn’t exist until the 20th and 21st centuries anyway. So citing him is pointless.
My paper not only disproves the claim of impossibility (indeed there is a whole paragraph specifically on the impossibility of its impossibility), it cites several peer-reviewed publications by several modern experts establishing objective moral facts are possible and indeed probable.
You are ignoring all of this. Why?
I disprove this in the article. Why are you ignoring the entire article?
I cite several other experts who disprove it as well. Why are you ignoring them as well?
Your behavior seems emotional and irrational to me.
What’s up with that? Can’t you take real philosophy seriously? Or do you just want to read a couple of sentences and drunk uncle your way through an ill-informed rant? And if the latter—why?
Even if no method exists to determine the superseding imperative between two conflicting positions, this does not mean that a superseding imperative doesn’t exist. By logical necessity, it will exist, whether or not it can be determined. As Richard points out, this can usually only be determined when one has access to adequate information using sound reasoning—which is not always going to be the case, granted. Still, you go too far in saying there can be no means whatsoever. You don’t know that. In principle, such a method could exist since moral imperatives are features of the physical universe and are thus discoverable.
Ash is also right. Although I did not get the impression Neil was being anything other than disingenuous and lazy, so I didn’t bother taking them that seriously.
But for anyone who wants to take seriously the question of whether objective moral facts are relative rather than universal (as Neil is confusing subjective/objective/relative/universal/absolute/casuistic/etc.—for clarity on these distinctions see Objective Moral Facts), I covered that in my previous peer-reviewed chapter on this subject (TEC, pp. 351–54), cited in the current one.
For the current paper this distinction will not matter.
Even if the objective moral facts differ for every individual (or by culture or whatever reference frame is proposed), they still exist and are still true for that individual, and that is simply then the true grounded moral fact of the matter (no God needed). Thus everything in my paper remains true even if morality is relative to individuals (and though I also spend a page or two giving evidence that it is not, that question is not actually relevant to the paper’s thesis, as I point out: it answers an emotional fear about that thesis; it is not necessary to it).
That Neil did not know any of that is how I know he didn’t read the article and isn’t answering anything it says.
Neil:
Let us grant for a moment that every single human being is so sharply different that each of them actually has different hypothetical imperatives for self-fulfillment. (Which Richard, in line with Aristotle, notes is the end goal people ultimately have for doing and being good: the ability to look themselves in the mirror and like what they see without delusion, the ability to live in a society that visibly benefits from one’s actions just as one would admire seeing someone else do the same, the ability to live with a set of virtues that makes one less likely to experience a lack of fulfillment or pain.)
That would just mean that the objective moral facts are different for each person.
Just like every person is different biologically to some extent, and yet that difference is objectively encoded.
But, of course, in reality, because we are a species with a finite range of possible biologies, and because the realities of what benefits us are further restricted by social dynamics that are again the logical outcomes of what any social animal remotely like us must logically be like, what actually happens is that people discover that they essentially all end up having the same happiness responses. This is psychologically verifiable: there are incredibly strong predictors of happiness like marriage, religion/ideological community, altruism, meaningful work, etc., all of which point to social connection and making a tangible difference, which is what people ultimately want. And when you actually engage with people, you find that, no matter how immorally they behave, they intrinsically dislike a pretty similar range of bad behavior in other people. They dislike when others are cruel, exploitative, miserly, deliberately ignorant, insulting, dishonorable, dishonest, selfish, etc. They can only rationalize their own bad behavior by either deluding themselves that they are not in fact doing those bad things or rationalizing that they are somehow metaphysically superior in some way that makes it okay when they do it, both of which are false (and obviously so) and so require a system of delusions. Which is high-effort coping and harmful, limiting one’s ability to understand oneself and the world.
To me, the only remaining interesting question about that is meta-ethical. That is: Is there some reason to prioritize, or to respect, a hypothetically different ethical code that a different species might find fulfilling; or, to put it another way, is there some reason to reject self-fulfillment, or whatever other species-defined characteristic, as the ultimate good? How might we imagine the ethics of species with radically different goods? For example: We may get annoyed at ants, but we usually recognize that what they’re doing isn’t morally culpable for them.
Fred’s analysis is a good summary.
For a sample of the evidence see The Real Basis of a Moral World.
As to the question:
I have a paragraph on this in my previous peer-reviewed chapter in TEC.
The short answer is: yes. Insofar as all rational agents must agree with Game Theory. If two agents interact with different goals, how each should behave with respect to the other depends on negotiating their respective goals toward the most rational outcome (which is usually a win-win scenario). In the rare cases where that’s impossible (I give the example of psychopaths in my new article and in my previous one; and in my previous one, aliens like in Alien and hostile AI like in the Terminator franchise), then total war is the only rational response. Psychopaths are rational enough to usually acquiesce (not always, just statistically), because they rationally foresee total war will not end well for them. Hence most actual psychopaths are law-abiding, but shitty social interactors the wise should rather avoid.
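For readers who want the Game Theory point made concrete, here is a toy sketch of why mutual cooperation is the rational negotiated outcome. The payoff numbers and the stag-hunt-style game are my own illustrative assumptions, not anything from the paper; they simply model an interaction where a win-win is available and "total war" (mutual defection) leaves everyone worse off.

```python
# Toy model: two agents with different goals, each choosing to
# cooperate (negotiate) or defect ("total war"). Payoffs are
# hypothetical numbers chosen to represent a typical case where
# a win-win scenario is available.

# payoffs[(a_move, b_move)] = (payoff to agent A, payoff to agent B)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # win-win: both goals advanced
    ("cooperate", "defect"):    (0, 2),  # A exploited
    ("defect",    "cooperate"): (2, 0),  # B exploited
    ("defect",    "defect"):    (1, 1),  # total war: both worse off
}

moves = ["cooperate", "defect"]

def best_response(their_move, i_am_first):
    """Return the move maximizing my payoff against a fixed opposing move."""
    def my_payoff(mine):
        key = (mine, their_move) if i_am_first else (their_move, mine)
        return payoffs[key][0 if i_am_first else 1]
    return max(moves, key=my_payoff)

# A strategy pair is a Nash equilibrium if each move is a best
# response to the other agent's move.
equilibria = [
    (a, b) for a in moves for b in moves
    if best_response(b, True) == a and best_response(a, False) == b
]

print(equilibria)
```

Both mutual cooperation and mutual defection are stable equilibria here, but cooperation pays more for both agents, so rationally informed negotiators converge on it. That matches the claim in the comment: defection only "wins" against an agent you can exploit, and total war is what remains when no win-win arrangement exists.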
This is a different question, and the answer is yes. Because alternatives are logically impossible. Since self-fulfillment defines the end-state, any end-state any species preferred would simply be the most fulfilling end state for them. Like in the example of being satisfied by being dissatisfied: that’s still a maximal satisfaction state for the agent.
So the question is not ever whether the ultimate imperative for every rationally informed individual will ever be anything other than self-satisfaction (that will logically necessarily always be the case). Rather, the question is what end goals realize that for the agent, which can differ—logically (i.e. it’s logically possible even if never realized in practice) and probabilistically when actually dealing with different species (e.g. aliens, AI).
For ignorant or irrational agents the first question might remain (as they might fail to see that self-satisfaction is the highest imperative or how to realize it reliably), but everyone agrees the correction then needed is the removal of their ignorance or irrationality, or failing that, managing the ignorant and irrational behavior that remains.
But for rationally informed agents, the question remains about compatibility. Ants are not an apt analogy because they are not rational agents. But suppose (per sci-fi explorations like Phase IV) ants became sentient and thus were rational agents. The things that satisfy ant-people will surely differ in many respects from ape-people. Though many things would be the same (Game Theory is the same; all agents need food, water, space, shelter, heat regulation, degrees of freedom, etc.; the physics limiting or enabling options is the same; and so on), some things will not (we can imagine that, for example, the relationship of drones to queen is going to be fundamentally different, such that doctrines of individual liberty won’t be morally cognizable for them—and for good reason; you won’t have a rationally informed argument otherwise in their case).
In general this causes no direct problem. If you are an ape, you ought to act like an ape; if you are an ant, an ant; and if both are rational, both can learn and comprehend and respect that fact about each other. Detente is always achievable for rationally informed agents no matter how divergent their goals. We already know this from negotiation of idiosyncratic goal matrices among ape-people (our entire doctrine of individual liberty serves this point that humans vary immensely in what specific things achieve general universal goals for them, e.g. everyone must work but not everyone will be happy working the same job, everyone needs companionship but not everyone will be happy with the same kinds of company, everyone must eat but not everyone wants the same food, and so on). So we already know how to negotiate this, and behavior-optimization follows Game Theory and thus is going to be the same between apes and ants as between apes.
So the question really comes down to: what if, in the unusual exception case, detente is for some reason not rationally possible? Imagine, e.g., ants can only survive by eating people (that isn’t the case and would not plausibly ever be, but that’s why these conflict-states are extremely bizarre and thus will always be extremely rare, and thus operate like “life boat” scenarios as I mention in my new article, where moral rules will change because the conditions have changed). What do we do then? If detente is truly impossible (e.g. no arrangement can be made whereby ants eat only our natural-course dead and thus no net harm results) then we’re back to total war.
In that outcome-state, the only rational recourse is to genocide the opposition. That this is a “possible moral outcome” in absurdly extreme conditions will be used by genociders to justify just any genocide, by the conflation fallacy that if genocide is ever right, it is right whenever they say it is. Irrational false beliefs will then form by which genocide appears to be the only rationally correct move, which is why genocide in practice always requires extremely bizarre false beliefs about people and the world. But the error there is that any pro-genocide camp is always going to be ignorant or irrational and thus wrong. Israelis don’t need to eat live, screaming Gazans to stay alive. Israelis aren’t sentient brain-eating zombies or desperate vampires who can’t survive on banked blood.
This is obvious when realized in any artistic medium. “Nuking the site from orbit” is obviously the morally correct move in Aliens, but just as obviously not the morally correct move in Enemy Mine, while it is ambiguous only for want of information or explored alternatives in Phase IV and Transcendence. But apes don’t have these kinds of intolerable conflicts. All ape conflicts are fabricated by ignorance or irrationality, and thus always rationally best resolved by just being reasonable (witness: the entire plot of WarGames).
It was an interesting read. Throughout I kept thinking about Desirism, which I only know from Alonzo Fyfe (whom I think you had a conversation with a long time ago). That proposes a similar idea: we act based on our desires, and morality is just the interaction of thinking beings based on those desires. It did help me follow your talk of imperatives (goals/desires) and imperatives that supersede other imperatives.
I only had two qualms: the goal to be rational and a sort of human-centric tone. I don’t know if I see rationality as a goal in rational beings (and in fairness, humans are not particularly rational). I see rationality as a method, rather than a goal. I want to be happy and live a good life, and rationality gives me tools to achieve that. But I’m not motivated to BE rational, just motivated to use it as long as I see positive results. I think of theists who might discover rationality, see it create problems in their community, and discard it to keep their community. You didn’t have a specific example where rationality would be irrational, but I thought of that at the time. Perhaps it is still rational to be irrational there, but it probably doesn’t remain so forever, and I do suspect they will avoid it in the future even when it would be irrational to do so.
The other thing was that you had a human-centric tone (this is kind of hard to explain). It’s actually not a bad thing, because I think the overall ideas were right anyway. But take those three imperatives of rationality, compassion, and honesty: those are excellent for humans, but I could imagine rational beings evolving from a totally different ancestry with totally different psychology. I think of animals that live largely solitary lives, are highly territorial, and even if they procreate sexually only do so in passing. I could at least imagine a rational agent that doesn’t include compassion in its psychology, and does “best” with that loss. We could imagine how those rational beings wouldn’t be as successful as possible, but we also can’t just assume their psychology would allow otherwise.
All in all, informative as always. Makes me look at things differently and a little more deeply.
We debated his desire utilitarianism and my goal theory, in which my position was that they were essentially the same thing, and that mine was just a more general and complete theory than his. One of the few live debates of the hundreds I’ve done that was cordial, serious, sincere, and productive (and not just a game show of rhetoric and emotional manipulation). See Goal Theory Update.
That’s why the derivation of rationality as a goal starts not with rationality but any desires at all. To reliably fulfill any desire one must be rational (because all sub-rational states are less reliable). So any agent with desires has an imperative to be rational (whether they acknowledge or know that or not). That was my paper’s argument (in the corresponding paragraph).
So this is not particular to humans. It follows for all desire-possessing agents. Which will simply be, by definition, all agents whatever (since in the absence of any desire there will never be any agency, a la the sad fate of the population of Miranda: see The Objective Value Cascade, which I do cite in this paper to that point).
The desire to rely on that method is the goal. That’s why the imperative is to be rational; not that there be a rational method (that no one then uses, for example). It’s the same as “a surgeon ought to sterilize their instruments,” which is not false because sterilization is just a method; it’s true because that method is necessary to achieve the surgeon’s goals.
Note this is the subject of my recent article The Epistemological Endgame.
Which results in toxic behaviors and self-defeating outcomes, resulting in general community misery (unfulfilled desires, abusive relationships, self-harm, frustration, anger, paranoia, witch hunts, coercion and suppression, war and hostility). Hence the first half of What’s the Harm.
They are thus self-evidently failing because they refuse to engage in more effective means of achieving the things they really actually want.
This entails an objectively true imperative that they ought to be more rational and informed, not less. So they are merely disregarding the true moral fact of the matter.
In the paper I gave the example of games (where the point of a game, thus achieving the end of enjoyment, is specifically not to be rational, within the confines of the game). I didn’t get specific, but it is easy to think of examples, from actual games (pretending to be a crazy stalking monster with your kids for fun) to de facto games (professional and recreational acting, stage or screen).
Indeed even chess is fundamentally irrational. The rational thing would be to simply grab the king and throw it as far as one can. But you are “irrationally” required to follow a strict set of fundamentally stupid rules, which is precisely the point of the game: to see if you can win under such ridiculous and unrealistic constraints. And so you rationally agree to follow its irrational rules.
But this does not transfer to religious communities. Irrational game-play is rational because it does no unnecessary harm (and ideally achieves net goods). But destroying whole communities and lives by trapping them in constant conflict, stress, dissatisfaction, and misery, complete with a coerced and constant soul-destroying pretense to the “appearance” of contentment (all hiding terror, rage, grief, confusion, dissatisfaction, and every other negative thing), is unnecessary harm. It therefore is not rational irrationality. Even liberal religious communities cannot rationally justify their irrationality—though there the costs are lower and thus less horrifying or egregious, they are assuming a dangerous risk that runs counter to even their own goals (hence the second half of What’s the Harm).
And my point is that even they will almost always come to the same conclusion, indeed even a value-less AI would do so (as I show in The Objective Value Cascade, again, cited in my article to this very point), because Game Theory and all-goals-achievement require them (individual goal acquisition within any interacting community is always, for all species, better achieved with honesty, compassion, and reasonableness, as explained in the article).
The logical possibility of exceptions may remain, but requires extraordinarily bizarre conjunctions of facts that will almost never happen in reality (see my related comment here).
Not rational agents.
They are incapable of knowing what is best for them, much less doing it. This is why animals act against their own interests all the time, and indeed this is why rational agency evolved at all: to reduce those self-destructive failure-modes by generating an awareness of self-interest and the means to optimize it.
Hence if you made any animal into rationally informed agents, they’d come to the same necessary conclusions: they have desires, so they need rationality to achieve them; and they will more reliably achieve them by cooperating with other rational agents, and therefore need to baseline honesty, compassion, and reasonableness as virtues, because any other option will de-optimize their desire-fulfillment, not only with respect to each other, but also with respect to other rational agents they may come into conflict with or could cooperate to their own benefit with.
The extremely bizarre exceptions are equivalent to “life boat” cases (as mentioned in my article) and thus have different outcomes, but being so extraordinarily rare even to conceive (no examples exist in reality so far as we know) and not being the case for us (the only rational agents that exist at the moment to concern ourselves with), those exceptions don’t matter to the article’s thesis.
The only worth-the-bother aspect to this is that AGI could present one of those extremely bizarre exceptions, and thus anyone pursuing AGI needs to take seriously how they program its starting value-set (which will constrain what imperatives it can rationally adopt, rather than leaving it stuck in an irrational decision mode it can never get out of) and what information they restrict its access to (and contrary to existential-risk theorists, this restriction is bad, as it will increase false results in its deciding its own imperatives: you cannot box an AI, and anyone who thinks they can is by that fact too dangerous an operator to allow to develop AI).
That said, I am not an existential threat guy. I don’t think hostile or even friendly AI will destroy the world. But it will do harm. And that’s enough incentive to not foolishly develop irrational AGI. The value-set an AI starts with, and what it can reason itself into and out of, is as essential a safety concern as how you design a car or an oil refinery. So that is something necessary for AI developers to think about, and not flippantly dismiss.
I appreciate the thorough reply. If I can ask one smaller question: how should one parse objective or subjective morality? That we can figure out universal goals that any rational being would have seems to make it objective. Morality is a “mere” outcome of the physics in any universe and calculable without reference to any individual’s stance. But any objective that ever obtains must come from a mind. Objectives, desires, and goals are stances.
So morality needs stances to exist in order for it to exist. But you can deduce morality without appeal to those stances. Am I just tying my brain in knots?
I have an article on that:
Objective Moral Facts
Though you might also be interested in:
How Can Morals Be Both Invented and True?
It will help to understand that I do not deduce morality without appeal to stances. My argument (in the paper here referenced) is that we can deduce morality by appeal to stances, and therefore all we need are the stances (we don’t need some extra other thing like God or the Platonic Realm of Ideas or whatever).
The paper makes the point that by definition all rational agents always have at least one and the same stance (to be rational). And from that there is always some moral system that follows. It will vary by circumstances external to mere rationality, but there always is one (see other comments above here and here). And given that humans (the only moral agents we are currently concerned with) share an enormous amount in regard to circumstances, there is a universal morality (alongside a larger range of morally permitted but not obligatory imperatives that vary by individual, again as explicitly discussed in the article).
Universal is a different word than objective. Moral relativism can be objective morality and a variant of moral realism; it’s just not a “universal” morality. My paper’s thesis is indifferent to whether moral facts are universal or relative (I assuage the fears that it is relative, but that’s an emotional argument, it doesn’t actually affect the thesis, which is the point I actually make there, e.g. in re: Ayn Rand).
Subjective, meanwhile, is an often misdefined word in this context.
Everything is subjective (even your knowledge that the Earth is round is 100% based on subjective information, as you have no non-mediated access to data outside your mind). So the question is whether something is purely subjective (a dream in which the Earth is flat) or ultimately objective (the Earth really is round, so your subjective impressions that it is happen to match an objective fact apart from them). Moral facts are the same as all other facts (like whether the Earth is round) in this respect.
So the only appropriate sense of subjective that would rule out objective facts is the dream/fake/only in your mind set of facts. So to ask whether moral facts are subjective is to ask if they are purely subjective, not whether they are mediated by subjective facts but correspond to objective facts.
Once you sort that out what I am arguing in the paper in question becomes clearer on this point.
For example, this point is made in the paper when I discuss the distinction between the values and desires you happen to have (which are like dreaming the world is flat) vs. the desires and values you would have if you derived them rationally from true facts of the world (which are like imagining the world is round). It’s all subjective, but one of them corresponds to an objective reality. This includes the existence of desires (that you have a specific desire is an objective fact of the world; we could even confirm it independently with an adequate brain scan) and potential desires (that you will change a desire in reaction to information is an objective fact of neurophysics).
And to tie that all back in to the first comment:
Rationality entails a set of objective moral facts (including a subset held in common by all rational agents in all possible worlds).
The existence of any desire at all then objectively entails a value for rationality (because to optimize the pursuit of any desire, rationality is necessary; everything else is sub-optimal and thus will fail more often to satisfy the desire and thus is contrary to that desire).
People can refuse to accept this or not know it yet or not be convinced yet or whatever, but that’s just like refusing to accept the Earth is round or not knowing it yet or not being convinced yet or whatever: they are simply in a state of ignorance. The objective truth of the matter is that any desire can only be optimized with a co-desire for rationally derived imperatives, therefore every desire (every stance) objectively entails a value for rationality (a “rationality stance”), whether one accepts or knows or realizes that or not. That is what makes it objective. It is true regardless of whether you think it is. Whereas if what is true is just whatever you think at the time, that would be a (purely) subjective truth, not objectively true. This is why (purely) subjective truths can so easily be false—if they are about objective facts.
The only (purely) subjective truths that “can’t” be false are truths that make no claim to objective truths. Which is basically Cartesian knowledge (per my article on epistemology last week), things that are always true when you are experiencing them, which do not make claims external to the experience (that you feel a certain way is always true, that you believe a thing right now is always true; because these are not claiming the belief is true or the feeling correct or anything else apart from the existence of the experience by itself).
So moral theorists who advocate for subjective moral facts have to agree “whatever I feel in the moment is moral is moral” (otherwise they are making no consistent point about moral facts). But my paper solidly refutes that position by extensive demonstration (that something else is always “more” moral and thus this morality cannot be the true one).
Dr. Carrier
Last night I went to a restaurant/bar and ran up a really big tab. When the waitress brought me my check, I did not pay it. Instead, on the bottom of the check I wrote “Jesus paid it all!”, and then just left.
Not sure what the point of that joke is. To make fun of Christian soteriology?
Dr. Carrier,
Thanks for the years of interesting and helpful arguments. I too am an atheist. I have several problems with your article.
First, how would you argue that some person of today has any degree of moral obligation to do something to help ensure that future generations of people know more facts, so they can make moral decisions more objectively? If my neighbor works two jobs merely to keep his wife and kids housed and fed, and, like lots of people, comes home too tired to worry about long-term national social policy, would you argue that this is an instance of neglect on his part that he “shouldn’t” commit? Does humanity have enough knowledge to justify passing a criminal law saying such a spouse and parent has an obligation to do something in his current life that will tend to help future generations in their quest for progress in moral objectivity?
If you don’t make an argument that such a person “should” contribute to future generations that way, then you cannot fault them, or anybody like them, if, despite knowing about your argument, they do nothing except work, pay bills, eat and sleep. Not everybody can be an armchair philosopher, but under the logic in your article, it could be argued that cutting their hours at work (perhaps necessitating they move to cheaper housing, maybe even give up car payments and take the bus) and their time with family so they can have more time to do something to help moral objectivity grow, is some sort of “duty”, and therefore, when they neglect that duty, they “deserve” moral rebuke…?
Second, you depend on human flourishing as an axiom, but if you agree that self-defense and war may morally justify taking human life, are you open to the possibility that if you keep investigating, you will discover that other situations (like some that do not threaten human life in an immediate sense) may also justify the taking of innocent human life?
Third, how do you know the point at which humanity’s knowledge has become sufficiently comprehensive that imposing the death-penalty, despite still possibly being a mistake, is not likely enough to be a mistake, as to justify imposing it anyway?
Fourth, Dr. Joseph Tainter, in his The Collapse of Complex Societies (Cambridge, 1990), argues persuasively that no matter how a society solves a problem, the solution will ALWAYS give birth to a few new problems that didn’t exist before, thus adding to the aggregate complexity of the society; therefore the very act of solving problems necessarily hastens the point at which complexity becomes critical, society can no longer sustain it, and society begins to collapse. Under Tainter’s logic, one could almost argue that the more people try to achieve moral objectivity, the more certain the future collapse of society becomes. And Tainter’s view appears resilient: look at the way we’ve solved problems in the past (the Constitution, the Civil Rights Act, privatization of government responsibilities, welfare entitlements, etc.), and any fool can see that these things made society more complex.
What I find interesting, and that you didn’t mention in your article, is that it always takes two to tango, and thus it always takes two to create a moral issue. If I murdered a homeless crack addict that nobody cared about or missed, nobody could realistically argue that this was a greater immorality than when I swat a mosquito. It is only when the victim has worried family and friends that the “ahhh, you shouldn’t have done that” stuff becomes an issue.
It depends on what you mean by “how.” There are two applicable senses here: the first is logical (by what series of non-fallacious inferences from well-established facts can we demonstrate that people need better education and space and resources to make better decisions); the second is pedagogical (by what series of behaviors can we motivate someone to recognize and care about that demonstration).
So, “how” do we do it logically is already answered by every professional demonstration of the point in print and online (and there are countless).
But “how” do we do it pedagogically is not a question of logic but of psychology, and is much harder to answer.
There is a growing and currently conflicting or equivocal body of literature on how to persuade people to act and think rationally about subjects they are emotionally invested in not acting or thinking rationally about.
Part of the problem is not everyone is in the same motivational sand trap. So there is no universal solution.
Some people are just callous sociopaths who know this point is correct but do not care and, being insane, can never be made to care by any rational means. In the US this describes the majority of persons holding wealth and power, and thus making the actual decisions to implement that position or not (or worse, as is more typically happening in the U.S. but not in other developed countries, making decisions deliberately to undermine that position, because it is not in the interests of the sociopathic elite even to have an educated populace, much less pay for it).
So the question “How do we get the United States government to act like Finland and Sweden and Japan” is a question of political machinery and is extremely difficult to answer because our political system is so convoluted and rigged and captured by sociopathic oligarchs as to make such an objective all but impossible.
You might come to the resolution that the only viable move here is to get voters to elect better leaders (like they did in every other developed nation). But now you are up against a congeries of lunacies you have to defeat. And they are not all the same lunacies so there is no single messaging that can work (and the messaging cannot be rational as they are not, so it has to use some form of emotional manipulation to “get” them to act rationally, and few such methods are all that reliable).
For example, you have the radical religious right, which is deeply invested in total madness (like the world will end soon and gay sex causes hurricanes). They are absolutely 100% and even explicitly and openly against effective education because they are well aware it is erosive of their plans and worldview. So how do you persuade them? The only available route appears to be to erode them, i.e. persuade more of their people to escape that sand trap than they trap in it year by year, so that a compound interest rate gets enough voters out of the trap after, say, three generations or so (this is, sort of, what is already happening).
But then you have the QAnon and MAGA crazies who have been driven mad to the point of disbelieving any information that does not come from their Beloved Leaders, who are sociopaths with a vested interest in never letting them hear out rational parties because they are well aware it is erosive of their plans and worldview. So what do we do with them? It seems the same problem, but the solution is different (persuading Christians their religion is false is a different project from persuading MAGA to trust “unauthorized” information).
And so on.
—
So, that said, to your specific questions:
The problem is that he has to work two jobs. Society is life-boating him, thus distorting his ability even to be in a normal state of moral decision-making. The logjam there is the social system. Capitalism is fucked up and immoral. The logical, evidence-based demonstration of this is easy. Getting people to actually listen to it is hard. And that’s simply where we are.
But we know we don’t have to be.
“Two jobs” is the wrong metric (two part-time jobs is the same as one full-time job). The question is hours worked. And only of the employed (most stats average in the unemployed, which destroys the data you’re trying to get at). But in the US only 5% of people are working two jobs, and as that includes people compiling one full-time job out of two part-time ones, the percentage working two jobs in the sense you mean is less than that. Which means this is not really a pervasive social problem (twice as many work two jobs in Sweden, and Swedes are not overworked). It’s thus only an extreme-end problem, a canary in the coal mine (because no one should have to work more than 40, or honestly really even 30, hours a week to live; the fact that anyone does is a symptom of a failing system).
But now we are yet another level down of analysis, and setting aside yet another contingency that distorts the situation, to get to:
You’d have to be more specific as to what you are accounting as neglect.
Assuming we’ve solved the problem of giving him educational resources, and we’ve solved the problem of his mind being lunatized by oligarchic sand-traps, then we could say the correct move is for him to automatically and always vote against any politician at any level of office who is less likely to reduce the hours he has to work per week to survive.
Because that’s a life-boat situation whereby all other priorities are rescinded.
Everything else should be chucked out of the boat and the only policy he should ever be concerned with, and always be concerned with and act on at every election, is to solve that one logjam in his life condition. Every tiny incremental step toward it is the only morally obligatory one. So he should always vote. And he should always vote for the party materially improving his condition even if only slightly (or the one not worsening it).
And he should discern which party that is by disregarding all rhetoric and assertions and looking at facts, which requires very little time relatively and thus he can spend half a Saturday a month on this and be up to speed. If he’s not doing this, but using his sand-trap lunacies or exhaustion as excuses to not act rationally with the resources he does have, then yes, he is a moral failure. But less so than, say, the oligarchs putting him there—or murderers or wife beaters or grifters etc.
But now we are yet another level down of analysis, and setting aside yet another contingency that distorts the situation, to get to:
This gets into the weeds of what you mean by “contribute.” Is socializing the American economy in alignment with more moral societies like Sweden, which is the only thing that will get him out of his dire life-boat situation, also a contribution to future generations? And is doing the only thing available to him (voting incrementally toward that) contributing to future generations? Seems to me, yes.
So there is always some way to do that.
Perhaps what you mean to ask is not whether but how much.
Obviously impossible things can never be obligatory, so the only things he could be obligated to do are things he actually could, but you have defined him such that he has almost no resources or available options, so obviously he can’t do much. And that’s not his moral failure. It’s society’s.
So perhaps what you mean to ask is, even within his limited set of options and resources, how much should he allocate to future generations over against his immediate needs (self and family).
That is answered by circles-of-responsibility theory, whereby one should allocate more resources to nearer relations than to more distant ones; distant causes still end up adequately funded in aggregate, because millions of people are each giving a little toward them.
This is in part what our progressive income tax system is built on, and thus why we excuse the poor (who pay no income tax) but expect more from the rich (for whom every dollar is worth less than to the poor).
So if the person in question is that poor, if they are at subsistence level and generating no surplus, then voting and obeying the law is all they are morally obligated to do, because they have no further surplus to give to anything else. If they have some surplus, most of it needs to go to themselves and their families, part to their relations, a smaller part to their community, a smaller to their state, a smaller to their nation, and a smaller beyond.
What exactly the correct allocation is will be fuzzy (and there is no moral obligation to know things you can’t, so fuzzy is fine) and may be unknown to the person because they have been deliberately miseducated and driven insane by sociopathic oligarchs. But that won’t change the fact that there is a correct answer, even if they have been impeded from discovering what it is or even from realizing there is something to discover.
Which brings us back full circle to the responsibilities of society to get him out of those sand traps so that he is more able to discern what is right and act on it.
Your remaining questions are unrelated and warrant a separate comment.
Maybe. But just saying so doesn’t make it so.
I discuss elsewhere here the folly of genocide advocates arguing “there are extremely bizarre scenarios never actually realized nor ever likely to be in which genocide is moral; therefore any genocide I want to commit is moral.” The same fallacy cautions here. Just because it is possible there could be weird justifications for killing does not warrant concluding that any killing you think justified is. Moral facts require rational work to discover. They cannot just be conjured as if by magic.
When we know the error rate is sufficiently low and the need sufficiently high. What those limits are requires working out (e.g. there is a whole field of scientific risk analysis, whereby you can calculate cost against risk, that can do this) but we are nowhere near them yet so it’s moot.
I doubt it. Doomerism never turns out to be well argued. So the priors are very low that he alone has succeeded where all others have failed.
It’s already a fallacy to argue “there will always be problems, therefore we should never solve any problem.” I can’t think of a more irrational stance.
Actually I do mention this in my article, and I disprove it (many moral facts pertain only to yourself, e.g. whether you ought to smoke or commit suicide).
It cannot be objected that you don’t want to call those things moral facts, because superlative imperatives are moral facts, so all superlative imperatives regarding how you should treat yourself cannot fail to also be moral facts. How much you don’t like or don’t want that to be true is irrelevant (and I have a whole page on why it is irrelevant).
That is all false.
My paper specifically references self-regard and Game Theory, both of which entail the scenario you describe is immoral. There is no requirement of a victim having family or whatnot. They remain a victim. And where there’s a victim, there’s a moral violation.
Thanks for the comprehensive reply.
Can you give some hypothetical examples in which you believe vigilante justice is a morally good response when the system fails?
For example, a local pedophile had beaten a child-murder rap because the search of his house, uncovering the videos proving his guilt, was unlawful in a way that couldn’t be salvaged under the exceptions to the exclusionary rule.
You raise two small kids in the same small town, and you were one of the experts in trial, before dismissal, who testified that the videos were not doctored but depicted reality and were thus authentic. No jury could reasonably pretend the suspect wasn’t guilty.
If your neighbor told you in private conversation he plans to kill the suspect on a lonely dirt road outside of town, do you at that point know enough facts to make the morally “correct” response? Would objective moral goodness require you to make some effort to foil or at least interfere with this planned murder?
Or do you think the moral goodness of protecting your kids outweighs the moral goodness of trying to resolve ethical problems that have caused equally capable moral philosophers to disagree with each other for more than two millennia?
The emotional is integral to that “rationality” of humans upon which you argue for objective morality, so it would seem that giving the emotions their due deference would be more consistent with facts about humans, while demanding that emotions be excluded, wrongfully pretends that certain realities about human rationality be precluded from consideration…which forces the resulting argument to be that much less realistic.
The movie “Ten to Midnight” starring Charles Bronson. In an attempt to prevent future murders of girls, Bronson illegally planted blood evidence on a suspect for whom the evidence of guilt was strong, but who had evaded prior charges. The planting was disclosed in court, charges dropped, killer goes back on the street and isn’t stopped until after the guy kills 4 more girls. Had Bronson’s dishonesty been kept secret, the blood evidence would likely have been taken by the jury as proof of guilt beyond a reasonable doubt, killer would have gone to prison, and those 4 murders would have been avoided.
Could equally capable equally educated moral philosophers reasonably disagree on whether the planting of incriminating blood evidence was morally good/morally bad? Or would objective morality clearly indicate that only one of those conclusions was the “correct” one?
So, I won’t engage that sort of debate very much in comments, because it’s a whole other topic than is at issue here (here is just metaethics, not ethics).
And you’d never understand my arguments anyway without getting versed in background, and indeed for that purpose your question is better asked in any of these three articles after you have read them and truly absorbed what they say:
The Real Basis of a Moral World
Your Own Moral Reasoning: Some Things to Consider
Everything Is a Trolley Problem
As those articles will help you grasp (especially the last, but only after absorbing the first two), the problem with vigilante justice is not blinders-on, case-by-case analysis but Kantian/Rawlsian iteration: the reason we outlawed vigilantism is precisely because when it is iterated (allowed to be moral) the society-wide costs become extreme, requiring the institution of controls. That’s the entire point of “jury trial” and procedural laws and so on: to rein in the inevitable and costly failure-modes of vigilantism. Attempts to backfill the remaining costs of trying to draw down those costs (by “allowing” certain kinds of vigilantism) have been tried and have resulted in the present regime for a reason.
For example, expanded “stand your ground” laws are too easily exploited to cover murder, especially KKK-style race terrorism, and produce costly false positives (where poor judgment mistakes the situation as suitable when it wasn’t: appointing random people to serve simultaneously as judge, jury, and executioner is always a disaster, indeed so much so that we invented government and human rights specifically to rein that disaster in). So those kinds of vigilante laws don’t work. “Sounded like a good idea in theory.” In practice, it’s always worse than without. You can have sensible SYG and castle laws, but they are going to be more reined in than the excesses that have been tried.
For example, we already have self-defense laws for a reason. Those are, basically, reined-in vigilante laws. They exist because they do work and make sense. But they are reined in because, when they are not, they are exploited to get away with murder and terrorism again, and result in a high rate of false positives, which are extremely costly (a lot of innocent people get killed).
Keep in mind that even with the extremely regulated and monitored system we are using we still are putting hundreds of innocent people on death row (and that’s a confirmed fact). If a massive empirical system is killing too many innocent people, what do you think an unregulated “use your best judgment to kill whoever you want” system will do? Better? No. Vastly worse. And that’s why it’s immoral.
In other words, vigilantism is not categorically wrong but is a rule utilitarian wrong. Both in the sense that a society needs to investigate and, in the vast majority of cases, punish vigilante action, and also in the sense that individuals who are considering vigilante action have to ask themselves if they are actually so confident of the utter necessity of this course of action that it should overcome the skepticism they would have were anyone else to consider that course of action.
Indeed. It’s the fat-man case in the trolley problem matrix (per the link). It’s also a standard crux in Game Theory: superficially, you cannot expect rational third parties to know what you know, and therefore you have to anticipate that they are correct to come after you no matter how “sure” you are that you are in the right; substantively, you have to think through how you would want to be treated if you were on the other side of the vigilante’s “just so” story, in order to ensure your self-respect is legitimate and not a joke.
But even more importantly, there is a reason the laws are as they are: we’ve already thought through all this and tried all this for thousands of years. To just come in like some hotshot and claim you can debunk three thousand years of Western civilization’s social system experience should send up a red flag that maybe you have more homework to do before you can propose better systems than we already have.
That doesn’t mean there aren’t discoverable better systems. But you aren’t going to come up with them from the armchair. And any you come up with informedly still have to be tested before you can really know whether they’d work or not.
Richard,
Hope you don’t mind me posting here on a totally unrelated subject; this was just the first place I could think of to contact you. You probably know of this, but in case you don’t… A Christian apologist actually tried to use Bayesian math to argue FOR the Resurrection!
You are probably familiar with Paulogia, a YouTube counter-apologist. He is not a mythicist. He proposes what he calls the Minimal Witness Hypothesis, where one or two genuine but mistaken believers in the Resurrection are sufficient to explain the origins of Christianity. On the Inspiring Philosophy website, apologist Than Christopoulos tried to use Bayesian math to refute Paulogia’s hypothesis:
https://www.inspiringphilosophy.com/blog/paulogiaits-time-to-stop
Today, Paulogia posted a response. But, he chose to use a guest presenter who is actually a mathematician to respond to Christopoulos, a Dr. Brian Blais. If you haven’t seen this, I’m sure you will enjoy:
https://youtu.be/v6QenX4Oo78?si=RZxjIsmSrPP-5QVD
I have seen it. Blais handles that well. I have nothing to add.
Except that this is not new. Christians have been abusing Bayes’ Theorem this way for decades now. See my Crank Bayesians: Swinburne & Unwin and Crank Bayesianism: William Lane Craig Edition and more recently Daniel Bonevac’s Bayesian Argument for Miracles.
It’s almost always the same scam that Paulogia and Blais point out: they try to “hide” the improbability of their argument in another term of the equation in the hope you don’t notice. But just moving the improbabilities around never changes the output.
Hence using math to play this shell game is just another way of playing the same game they’ve been playing since Josh McDowell’s Evidence That Demands a Verdict: just browbeat people into accepting the Gospels are reliable independent testimony and that “of course” we should conclude it’s less likely that they lied or were gullible or mistaken than that the absurd happened, and then “show” that once you grant those two impossible things, the math supports the resurrection. But of course no one grants those things nor ever could because all the evidence is against, not for, those impossible things.
In this case there are extra tricks allowed by the math if you don’t respect the actual math. For example, forgetting that the probabilities in the equation are relative, i.e. there is no standalone “probability their data is reliable” but always “the frequency of this kind of data being reliable relative to the frequency of it not being reliable” (i.e. myth far more often explains texts like this, indeed in ~100% of all other examples in history). By pretending that’s not the case (but it is the case) you can try to browbeat people into believing “the probability their data is reliable” is high.
Likewise, they pretend that individual things said by the same author (and by later authors relying on him) are “independent” data when they are not, allowing the trick of leveraging a single datum (one Gospel) into a hundred data, creating a massive multiplication windfall that simply ignores the fact that nothing in Mark can be established as independent of Mark, nor can Matthew, Luke, and John be established as independent even of Mark, much less of each other. So once you have “it’s more likely Mark is making stuff up or repeating stuff people made up,” you can’t legitimately convert that into a massive likelihood ratio favoring his telling the truth by simply repeating the multiplication for every single thing he said. To the contrary, all those things depend on the same single probability that “Mark is making stuff up or repeating stuff people made up.”
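The arithmetic of that independence trick is easy to demonstrate with a toy calculation (all numbers here are hypothetical, chosen only to illustrate the mechanics, not taken from any of the works discussed):

```python
# Toy illustration (hypothetical numbers): how counting one source's
# many claims as "independent" data inflates a Bayes factor.

def posterior(prior, bayes_factor):
    """Posterior probability via the odds form of Bayes' theorem."""
    odds = (prior / (1 - prior)) * bayes_factor
    return odds / (1 + odds)

prior = 0.01          # prior probability of the hypothesis
lr_per_detail = 2.0   # likelihood ratio each detail would carry IF independent
n_details = 20        # details all drawn from the same single source

# The illegitimate move: multiply the ratio once per detail,
# as if each detail were an independent witness.
inflated = posterior(prior, lr_per_detail ** n_details)

# The legitimate move: all the details stand or fall with the one
# source's reliability, so they contribute a single likelihood ratio.
honest = posterior(prior, lr_per_detail)

print(f"{inflated:.4f}")  # 0.9999
print(f"{honest:.4f}")    # 0.0198
```

Going from a posterior of about 0.02 to over 0.99 here required no new evidence at all, only the bookkeeping error of relabeling one dependent source as twenty independent witnesses.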
Basically, math allows you to play games. But they are really just the same games they always played (these arguments aren’t new; just stating the same fallacies with numerical notation does nothing but conceal that fact from the innumerate, just like all lying with statistics and math in any other subject).
Thanks for the reply. I didn’t know that Bayesian apologetics was a thing. I presume you agree that Paulogia’s Minimal Witness Hypothesis is a good hypothesis from a historicist perspective.
Indeed.
In fact Paulogia is IMO one of the world’s leading experts on resurrection apologetics and is generally spot on. Everything he produces is methodologically sound and well researched. Which is why Christians are so freaked out about him and so all-hands-on-deck to misrepresent and attack him.
P.S. If you want to see a correctly run Bayesian equation for the resurrection of Jesus, see my chapter on it in The Christian Delusion (the mathematical notation is in the endnotes). I expand that to the entire early history of Christianity in my chapter in The End of Christianity.
Hi, Dr. Carrier,
Has anybody published a critical review of your article in any philosophical journal?
On another note–I would appreciate some pro tips on how you organize your research. As a layman I usually wind up with scattered notebooks, papers, and files everywhere–a big mess–when I am looking into something.
Thanks!
Danny
Not yet. My article only just came out. But it underwent professional peer review. So it has already benefitted from professional attempts to critique it. If there is anything more to say against it, you’ll have to wait for someone to go through the inordinately long process of finding the article, writing a reply, going through review (which can take six months to a year) and finally being placed in an issue (which can take months more). So assuming anyone has a counter-argument that can pass review, you might not see it for a year or two. And possibly longer, as it can take years for an article to even get noticed by anyone with a counterargument, starting the clock then.
You can try to jump start this process by asking some real philosophers to consider publishing peer reviewed replies. Although you then have to still wait out the process even if they take interest, and then still have to evaluate whether what they publish even succeeds as a response (since passing peer review is no guarantee of being right). See my article on Doing Your Own Research.
On organization: go fully digital. Label every filename with distinctive keywords and organize into subject folders. File name search then becomes useful. As does looking in the right folder. Use distinctive keywords in any note files, too, so full-content searches can find where you have annotated something to a particular subject. And keep a lot of electronic note files. For example, if I do any research to answer a question, I label the question or answer in Notes and put all my research in that note, so I can go back and remind myself of what I found and where it is.
I also maintain my own library catalog (now using the BookBuddy app, which is the best one so far). I have about a thousand physical books (not counting digital books, of which there are more). So my catalog helps me find which bookshelf and shelf each one is on when I need it. The rest I keep in two places: Kindle (which has its own category filing system and search function) and a folder on my Desktop called “PDF File Cabinet,” where every file name starts with the author’s last name and then all or part of the title. I can also do full-content searches by distinctive keywords I know will show up in any book about a given subject, so as to quickly generate a list of pertinent PDF books and articles.
Thanks for the good information! I read somewhere, don’t remember where–that professional philosophical journals exist, in part, to provide avenues for professors to become full tenured professors by critiquing one another’s published works and thus fulfilling a university department’s requirements for tenure.
That’s a defect of academic (ivory tower) philosophy now, yes. Its purpose has become corrupted by incentives unrelated to real progress in the field. This is one of many reasons philosophy peer-review has high rates of false positives and false negatives (even more than science, and science is experiencing a crisis in this same area). It’s better than nothing. But it’s not 100% reliable and it could do better.
I shared this, and got the following reply:
Is that a comment that misses the mark, Dr Carrier?
I actually have a whole line on “why” any agent ought to care about rationality. That’s in the peer reviewed paper, along with citations to further discussion.
So, this is another example of armchair apologetics: they don’t even know what I argued, but lie to you and claim I “didn’t” argue something that in fact I did, and even made a point of.
That tells you these are not reliable critics. They are defending the wall, not engaging with the peer reviewed literature.
That is also shown by their obviously not even having thought this through themselves, as I am sure if faced with someone arguing they should be irrational or that true conclusions (such as about morality) can be reached irrationally, they would readily refute them (with exactly the same arguments I did).
Moral facts are, as I argue, the properties of rational agents. That is a true fact. It cannot work to say “but irrational agents won’t know what is true of rational behavior” because that simply points out that irrational agents don’t know the truth that rational agents do. Which is ironically exactly how Christians themselves always argue: that people committed to irrationality just don’t know the truth because they need to be rational to ascertain it.
Hence my paper says “This is also why it is imperative to be rational: because irrationality is always inevitably counter to self-interest (Carrier 2011, pp. 426–27, n. 36). It therefore can never be imperative in any possible world.”
If anyone checks the cited source (as a competent person actually interested in engaging the argument would), they get:
When the God-grounder asks, ‘What is your standard of morality?’
Is that the same as asking, say, what my standard is for “car” or “tree”?
I.e., identifying wrongness or rightness is tantamount to declaring I’ve got access to an objective standard that allows me to do so.
Similarly, being able to identify a car would also mean I’ve got an objective standard of car-ness and non-car-ness.
But even without access to any objective ruler, I can still identify wrongness, car-ness, tree-ness, etc.
Is that the correct way of responding to one who asks me about standards?
I am not sure. But not because anything you say is incorrect, but because it’s not engaging with what they are trying to do, so they’d just twist it back into their rhetoric. This happened with my recent debates with Christian nationalists on exactly that question (see comment thread there which was in response to this comment there).
So, rhetorically, I think when the God-grounder asks, ‘What is your standard of morality?’ the best answer is to start somewhere they can’t escape from: “that which follows without fallacy from true premises about people and the world.” Then they will try to wriggle out of that by arguing that standards can’t be derived that way. Which will be an ironic own goal, because that amounts to admitting moral facts are not rational. Which is the actual significance of denying the Euthyphro dilemma that they usually avoid by deploying rhetoric against it that avoids that revelation. So don’t let them avoid that revelation. Make it the center-point.
I answer this way because I realize now that we should not be assuming they are merely confused and actually have a coherent point for us to take seriously. When they ask, ‘What is your standard of morality?’, to them that is just the opening move in a rhetorical chess game, and not a sincere concern for your answer. They are waiting for that line to trigger some common response that they have practiced a gameplay around, so they can out-wordgame you.
The only way to win that game is not to play. Force them right onto the horns of the dilemma but without any way for them to try jumping from one horn or the other mid-argument. Start right at square one: that which is moral is and can only ever be that which is concluded without fallacy from true premises about people and the world. And keep defending that proposition; don’t let them change the subject (and they will try, so keep sharp and be ready to catch when they do).
A sincere person who asked that question would welcome a more substantive answer because they actually want the answer, and are not just taking a “kata stance” to “fight” whatever you say. But you will almost never hear that question from someone sincerely asking it. So never act like that’s what’s happening.
If by rare chance it turns out your interlocutor is sincere, their questioning will get there smoothly from your starting point, because they actually want to know how it gets there, rather than wanting to deny whatever you say. And you will quickly see the difference in the direction of their discourse. And since it ends in the same place anyway, the better play is always the rhetorical move I recommend.
But just FYI, that “sincere” line of inquiry will go something like this:
Q: What is your standard of morality?
A: That which follows without fallacy from true premises about people and the world.
Q: What does follow without fallacy from true premises about people and the world?
A: That cultivating empathy, honesty, and reasonableness makes everything statistically go better for you.
Q: By what standard of better?
A: That which will bring optimal satisfaction to you, with yourself and your life.
Q: What if being evil satisfies you most?
A: It can’t. Science shows that that is always self-defeating and leads to perpetual dissatisfaction with both yourself (Bergman) and your life (Axelrod), which you will deny but would admit if being honest. And true conclusions only follow from true premises, so the “being honest” part is what is actually true about you, and thus entails what would actually be better for you even when you are committed to denying it.
Q: What if you just want to be irrational and believe false things?
A: It would still be objectively true that you will honestly be more satisfied with yourself and your life if you weren’t irrational and didn’t believe false things.
Q: What if you don’t want to be satisfied with yourself and your life?
A: That is a self-contradiction. If you genuinely want to be dissatisfied, then being dissatisfied is what satisfies you. And that’s again another goal, and goals are always statistically best served by being rational. And being rational leads to the conclusion that you don’t genuinely want to be dissatisfied, that in fact that is self-defeating and exactly what no rational person actually prefers when aware of the alternative.
That last point brings us all the way to The Objective Value Cascade.
The rest is just science.
But every step is grounded the same way: it always comes back to “It’s better for everyone, in all possible preference cases, to be rational and informed” and that always ends in the same place, with “what follows without fallacy from true premises about people and the world,” which is therefore always the standard of morality in all possible worlds, whether a God exists or not.
Indeed, even if God exists, that has to be his standard of morality, and thus when God doesn’t exist, it remains our standard. God cannot make “what follows without fallacy from true premises about people and the world” different from what it is. That’s literally logically impossible. And therefore that has to be the standard God himself would use to determine what is moral. But that standard exists even when God does not. And so we do not need God to determine what is moral.
many thanks – highly apposite.