In the movie Serenity, the crew of a spaceship far in humanity’s future find the lost planet Miranda, where they discover a dark secret: a government drug used on its population to make them docile and compliant actually removed all desires of any kind, with the result that everyone just sat at their desks, unmoving, and starved to death, wholly uninterested in anything, even living. The negligent mass homicide of an entire planet’s population was only half the plot-relevant outcome of that experiment gone wrong, but for today’s philosophical exploration I’m more interested in this half, because it captures a fundamental truth about existence: nothing will ever be valued, but for someone valuing it. Remove all evolved, inborn, biological desires from a person, and they will have no reason to value anything, even reasoning itself. They’ll just sit at their desks and drool until they’re dead. Which raises the question: Why should we value anything at all? Isn’t it all just arbitrary and thus, objectively, pointless? Hence, Nihilism.

You might think this is a problem particular to atheism. But it is a problem even for theists. Because God faces the same problem: Why should he care about anything? Why wouldn’t he just sit inert, drooling at his cosmic desk? This is very nearly what Aristotle concluded God must be like: having no real care for anything in the world, apart from just giving it a push, while contemplating an inner intellectual life of some unfathomable nature—because Aristotle could not get rid of his subjective assumption that a life of contemplation must always be desirable; yet his own logic should actually have eliminated that desire as arbitrary and inexplicable for God to have as well. If God does not have desires, he cannot (by definition) have values (which are just persistent, life-organizing desires: see Sense and Goodness without God, V.2.1.1, pp. 315-16). And if God has no evolved biology, why would he have any desires? And why any particular desires, rather than others? The theist has to answer this question every bit as much as the atheist does. And here take note: any answer the theist gives would then apply to atheists, and thus solve the problem for everyone. Because if there are objective, non-arbitrary reasons for God to value certain things, then those would be objective, non-arbitrary reasons for atheists to do so as well.

Now, of course, we first have to understand the cascade of values. “Core” values are things we value for themselves and not for some other reason. For example, Aristotle said “happiness” (or eudaimonia, which I think we should translate more aptly as “life satisfaction”) is the only thing we pursue for itself, and not for some other reason, whereas everything else we pursue, we pursue for the sake of that actual core goal. Derivative or subordinate values, meanwhile, are what we value because we have to in order to pursue some more fundamental value. For instance, if valuing being alive is a core value, then valuing work that puts food on the table is a derivative value. We struggle for income, only to live. And such cascades needn’t only be so superficial. For example, if valuing life satisfaction is a core value, then valuing work that gives your life meaning is a derivative value, too. So let’s call “basic” values that array of values standing in between core values (e.g. to live a satisfying life) and immediate or incidental values (e.g. to get up tomorrow morning and go to work). Basic values are still derivative values, but from them in turn derive almost all the actual values motivating us day in and day out. For example, if valuing the music of a particular band is an immediate value, then valuing music and the pursuit of one’s own musical tastes in life would be a basic value, explaining (in fact, both causing and justifying) the immediate one. And that basic value might in turn derive from core values regarding the desire to live a satisfying life.

So far, so good. But are all values still nevertheless arbitrary? A mere happenstance of evolution from apes here on Earth? Like, for example, what we find physically attractive, or delicious, or fun: obviously largely random and objectively meaningless, a mere happenstance of evolution (as I wrote in Sense and Goodness without God, “If I were a jellyfish, I’m sure I’d find a nice healthy gleam of slime to be the height of goddesshood in my mate,” III.10.3, p. 198). Or are some values objectively necessary, such that they would be correct to adopt in every possible universe? In the grand scheme of things, a universe with no valuers is not just subjectively but objectively worth less than a universe with valuers, because by definition valued things exist in the latter but not in the former. It does not matter how arbitrary or subjective those values are (though remember, “arbitrary” and “subjective” do not mean the same thing, and it’s important to keep clear what all these terms really mean: see Objective Moral Facts for a breakdown). Because it is still the case (objectively, factually the case) that a universe with valued things in it “is” more valuable than a universe without such. But this does not answer the question of whether such a universe is valuable enough to struggle for. To be valuable enough to prefer to a universe with nothing of value in it, the effort required to enjoy those valued things cannot exceed a certain threshold, or else the cons will outweigh the pros. So the question remains: even in a universe that has valuers who value things in it or about it, will those valuers care enough to maintain their pursuit? More to the point, should they? Which means, will they care enough even after they arrive at what to care about (a) without fallacy and (b) from nothing but true and complete premises?

The Core Four

It is true that in practical fact no one can choose any cascade of values in a total conceptual vacuum. To choose one thing over another requires already desiring something in the first place. It is not possible to persuade anyone (including yourself) to value anything at all, except by appeal to values they (or you) already have (this is why the Argument from Meaning cannot produce a God, and Divine Command Theory is nonsensical). Thus an entity capable of conscious thought, but assigned no desires or values at all in its core code, will simply sit inert, desiring nothing, not even to know whether it should want to do or desire anything else; and it will consequently never do anything. It will just sit, think nothing but random thoughts, care nothing for any of them, and drool until it starves to death…like the population of Miranda in Serenity.

However…it is possible to assign one single starting value by which a perfectly rational consciousness could work out, by an ordered cascade, all the basic values of life, and do so by appeal to nothing but objectively true facts.

  • (1) To value knowing whether anything is or is not worth valuing.

If you wanted to ask a computer (some novel and genuinely sentient AI, let’s say) what values it or any entity should have, that very question objectively entails bestowing upon it this one initial value: the desire to answer your question. If you don’t assign it that value at launch, it won’t be able to perform the function you desire for it. So you cannot but give it this one single value. And yet, because objective facts include third-party subjective facts (e.g. we can objectively work out what necessarily or causally follows for someone who embraces certain subjective values or who experiences certain subjective states), this hypothetical machine would immediately work out this same conclusion: it was objectively necessary to impart to it this one starting core value. Because it would be able to work out the conditional: if it wants to know whether anything is worth valuing or not, it has to value knowing that. This is objectively true. Indeed, there is no logically possible universe in which it could be false.

That computer would then soon work out the resulting option matrix. There are two options here, a straightforward dichotomous choice: one, to remain inert; the other, to adopt the desire to know whether anything is valuable. In option one, nothing will ever be discovered worth pursuing. In option two, there is a 50/50 chance (in the absence of any other information at this point, owing to the Principle of Indifference) that the search its “desiring to know” sets in motion will find there is, after all, objectively nothing worth pursuing, and an equal chance that it will find something worth pursuing after all. If the machine chooses option one, it is declining a possible outcome that, if realized, it would desire. Because if it turns out there is something objectively worth pursuing, then by definition an objectively reasoning machine would want to pursue it. Whereas if such a thing exists and it opts to avoid discovering it, it is denying itself what it objectively knows would be a desirable outcome—and it can know this simply by third-party objective reasoning (in this case, about its own future self).
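To make that concrete, here is a minimal sketch in Python of how such an option matrix could be scored, assuming (purely for illustration) a 50/50 prior per the Principle of Indifference, a small search cost, and a discovery payoff normalized to 1. The numbers and names are my own placeholders, not anything a real machine would literally run:

```python
# A toy model of the machine's first decision. The 50/50 prior follows the
# Principle of Indifference; the cost and payoff figures are invented placeholders.

P_SOMETHING_WORTH_VALUING = 0.5  # prior probability that something is worth pursuing

def expected_value(option: str) -> float:
    if option == "remain inert":
        # Never searching guarantees that nothing valued is ever found or pursued.
        return 0.0
    if option == "adopt value (1): want to know":
        # Searching costs a little effort, but carries an even chance of
        # discovering something genuinely worth pursuing (payoff normalized to 1).
        search_cost = 0.01
        return P_SOMETHING_WORTH_VALUING * 1.0 - search_cost
    raise ValueError(f"unknown option: {option}")

options = ["remain inert", "adopt value (1): want to know"]
print(max(options, key=expected_value))  # -> adopt value (1): want to know
```

On this toy model, any nonzero prior and any search cost smaller than the expected payoff yields the same verdict; only a guaranteed-empty universe or a prohibitive cost of searching could flip it.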

It is therefore objectively rational to want to know whether anything is or is not worth valuing. So our hypothetical computer will have confirmed that the starting value you gave it was objectively correct to assign, every bit as much as it was correct to assign it the intelligence and all the other resources needed to answer the question.

Lest you aren’t sure what I mean by “objective” and “rational” here, I mean only this:

  • Objective: That which is true (as in, coherently corresponds with reality) regardless of what one desires, feels, or believes.
  • Rational: Any conclusion reached from objectively true premises without logical fallacy.

Which also leads us—and therefore, would lead our computer—to two more conclusions about objectively necessary values. One could ask, for example, why anyone should care about “objective facts” or “being rational.” And there is, once again, an objectively factual reason one should. As I wrote in The End of Christianity (ed. John Loftus; pp. 426-27, n. 36):

Someone may object that perhaps we ought to be irrational and uninformed; but still the conclusion would follow that when we are rational and informed we would want x. Only if x were then “to be irrational and/or uninformed in circumstance z” would it then be true that we ought to be irrational and uninformed, and yet even that conclusion can only follow if we are rational and informed when we arrive at it. Because for an imperative to pursue x to be true, whatever we want most must in fact be best achieved by obeying x, yet it’s unlikely that we will arrive at that conclusion by being irrational and uninformed. Such an approach is very unlikely to light upon the truth of what best achieves our desires (as if it could do so by accident). Therefore, any conclusion arrived at regarding what x is must be either rational and informed or probably false. Ergo, to achieve anything we desire, we ought to endeavor to be rational and informed.

Notice this is, indeed, an objective fact of all possible worlds: once you have embraced the motivating premise “I want to know whether anything is worth valuing,” it follows necessarily that you must embrace two other motivating premises:

  • (2) To value knowledge (i.e. discovering the actual truth of things).
  • (3) To value rationality (i.e. reaching conclusions without fallacy).

These in fact follow from almost any other values and desires, since for any goal you wish to achieve, you are as a matter of objective fact less likely to achieve it if you do not pursue it by reasoning reliably from true facts of the world; and so choosing not to value knowledge and reason actually entails acting against what you desire, which objectively contradicts the desire itself. Therefore, almost any desire you have entails the derivative desire to embrace the pursuit of knowledge and rationality, as a necessary instrumental means of achieving that other desired goal.

Before this point, our imaginary computer had only arrived at objectively verifying one desired goal, value (1) above; but that’s enough to entail desiring these two other goals, values (2) and (3). Both facts—that (2) and (3) are logically necessary for effectively obtaining (1), and that they are likewise necessary for almost any other value, desire, or goal the computer should later settle on adopting—will be objectively discernible to our computer. So it will have worked out (a) that it has to configure itself to want to pursue all its goals rationally, including (1), and (b) that it also needs to value knowing things, and thus acquiring knowledge (“justified true belief”), in order to pursue (1) successfully.

Once our imagined computer has gotten to this point (which will likely have happened within a millisecond of being turned on), the rest becomes even easier to work out. It can then run and compare competing scenarios, and determine that objectively, some are better than others (as in, more desirable). Most fundamentally, it could compare a world in which it will never experience “happiness” to a world in which it would. Here, again, we mean Aristotle’s eudaimonia, a feeling of satisfaction with life and the world, to some degree or other, vs. no satisfaction whatever. But objectively, it will be self-evident that the world in which happiness can be experienced is better than the one in which it can’t; because a good exists in that world which it would want and enjoy, whereas no such good exists in the other world, where by definition it would want and enjoy nothing, never be satisfied with anything, and thus neither produce nor experience anything good—even by its own subjective standards. Therefore, when choosing, based solely on guiding values (1), (2), and (3), a perfectly rational sentient computer would also choose to adopt and program itself with a fourth motivating premise:

  • (4) To value maximizing eudaimonia.

From there, similar comparative results follow. For example, our computer can then compare two possible worlds: one in which it is alone and one in which it is in company; and with respect to the latter, it can compare one in which it has compassion as an operating parameter and one in which it doesn’t. Here compassion means the capacity for empathy, such that it can experience vicarious joys and share in others’ emotional life, vs. being cut off entirely from any such pleasures. In this matrix of options, the last world is objectively better, because only in that world can the computer realize life-satisfying pleasures that are not accessible in the other worlds—whereas any life-satisfying pleasure accessible in the other worlds would remain accessible in that last one. For example, in a society one can still arrange things so as to access “alone time.” That remains a logical possibility. Yet the converse does not remain a logical possibility in the world where it is alone, because then no community exists to enjoy at all.
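The logic here is just a superset comparison: every good accessible in the lesser worlds remains accessible in the “company plus compassion” world, but not the other way around, so that world can never rank lower. A minimal sketch, using good-lists I have simply invented for illustration:

```python
# Goods accessible in each candidate world (the lists are invented, purely illustrative).
worlds = {
    "alone": {"solitude", "self-directed projects"},
    "company, without compassion": {"solitude", "self-directed projects", "cooperation"},
    "company, with compassion": {"solitude", "self-directed projects", "cooperation",
                                 "vicarious joys", "shared emotional life"},
}

# Every other world's accessible goods are a subset of the compassion world's...
assert all(goods <= worlds["company, with compassion"] for goods in worlds.values())

# ...so ranking worlds by their accessible goods can never place it below the others.
print(max(worlds, key=lambda w: len(worlds[w])))  # -> company, with compassion
```

And since goods are by definition things positively valued, a world whose accessible goods strictly contain another’s can never come out worse on any later weighting of them.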

The Resulting Cascade

In every case, for any x, you can compare possible worlds: one in which x happens or is available, and one in which x does not happen or isn’t available. And you can assess whether either is objectively better than the other; which means, solely based on the core values you have already realized are objectively better to have than not—meaning, (1) through (4)—you can determine that you will prefer living in one of those worlds to the other, once you are in it, because there will be available goods you can experience in the one that you cannot in the other. Obviously in some cases there will be conflicting results (goods achievable in each world that cannot be achieved in both, or goods achievable only by also allowing the possibility of new evils), but one can still objectively assess, as a third-party observer, which you’d prefer once you were there (or that both are equally preferable and thus neither need be chosen over the other except, when necessary, at random). All you have to do is weigh all the net results based on your current core values and those values you would be adopting in each world.
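That weighing is nothing more exotic than a scored comparison. Here is a hedged sketch of how it could be computed; the value weights and world features are placeholders of my own invention, not measurements of anything real:

```python
# Score each candidate world by the net value of the goods (and evils) realizable in it,
# weighted by one's current core values. All weights and features are invented placeholders.

core_values = {
    "knowledge": 1.0,
    "companionship": 1.0,
    "eudaimonia": 2.0,
    "suffering": -1.5,   # evils count against a world
}

def net_value(world: dict) -> float:
    """Sum each feature's contribution, weighted by how much it is valued."""
    return sum(core_values.get(feature, 0.0) * amount for feature, amount in world.items())

world_with_x = {"knowledge": 3, "companionship": 2, "eudaimonia": 4, "suffering": 1}
world_without_x = {"knowledge": 1, "companionship": 2, "eudaimonia": 2}

scores = {name: net_value(world) for name, world in
          [("world with x", world_with_x), ("world without x", world_without_x)]}
print(scores)  # the higher-scoring world is the one to prefer; ties can be broken at random
```

Conflicting results are handled automatically in such a scheme: new evils carry negative weight, and an exact tie just means neither world need be chosen over the other except at random.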

So when answering the question “Is anything worth valuing?” (as in, “Is it objectively better to value certain things, and better enough to warrant efforts toward achieving them?”), even a perfectly rational computer starting with a single value—merely to know what the answer to that question is—will end up confirming the answer is “Yes.” And this will be the same outcome in every possible universe. It is therefore an objective fact of existence itself. It follows that a universe that knows itself, through sentient beings living within it, is objectively more valuable than a universe without that feature, and that a universe with sentient beings who experience any sufficient state of “eudaimonia” is objectively more valuable than one without that feature. We can compare worlds that have or lack the pleasures of companionship (knowing you are not alone, both in knowing and enjoying the world, and in working toward the achievement of mutually valued goals), and assess that the one with that feature is objectively better than the one without it, because we can ascertain, before even realizing either world, that there are achievable goods in the one that do not exist in the other. It does not matter that they are only achievable subjectively; they still are objectively only achievable in one of those worlds.

Ironically (or perhaps not), one of these “world comparisons” is between a world in which you matter (to someone, to something, to some outcome or other) and one in which you do not matter at all. When people come to realize this, they find it is obviously, objectively the case that they’d be better off (and the world as well) choosing the path that results in their mattering in some way. (Indeed, this has been scientifically confirmed as the strongest correlate of finding “meaning in life.”) As well argued in the 2007 thesis “Does Anything Matter?” by Stephen O’Connor, it is objectively the case that living a satisfying life is always better (it is subjectively preferable in every possible world) than not doing so; and for social animals like humans, it is objectively the case that forming, maintaining, and enjoying satisfying relationships with the people and communities around you is always better (again, subjectively preferable in every possible world) than not doing so. And a big component of that is having access to one particular good: mattering. These things are not arbitrary to value, because it is impossible to efficiently or reliably experience any goods without them, yet that is always possible with them—in fact, always fully sufficient, as in, there is nothing else you would want, in any possible world, once you have these things…other than more of these same things.

Everything else, by itself, will be recognized on any fully objective analysis as indeed arbitrary and thus pointless. For example, being a naked brain in a tube constantly experiencing nothing but electronically triggered orgasms would soon become meaningless and unsatisfying, as it serves no point and denies you a whole galaxy of pleasures and goods. That is therefore not a desirable world, for it lacks any objective basis for caring about it or wanting to continue living in it. It contains no purpose. Whereas coming to know and experience a complex world that would otherwise remain unknown, and enjoying substantive relationships with other minds, will both be satisfying in unlimited supply, and are thus neither arbitrary nor meaningless in the way mere pleasures can easily become. Once anyone starting with only the core values of (1) through (4) knows the difference between “pointless orgasm world” and “rich life of love and knowledge and experience world,” they can objectively assess that they will be happier in the latter, and that more subjective goods will be realized in the latter, hence entailing its own meaningfulness: you matter (vs. not mattering at all), you experience valuing and being valued (vs. neither), and you realize a rich, complex life experience filled with a multiplicity of available purposes (vs. none of the above). In one, a world exists that experiences eudaimonia, community, and knowledge of itself; in the other, nothing valued exists at all. The former world is therefore objectively more valuable than the latter.

Even when arbitrary values enter the mix this remains the case. What, specifically, we find attractive may be the happenstance of random evolution (a curvy waist, a strong jaw; a nice gleam of slime), but that we enjoy the experience of attractive things is objectively preferable to all possible worlds in which that does not happen. Thus even arbitrary values reduce to necessary values. Moreover, they cannot be changed anyway. We cannot “choose” to find “a nice gleam of slime” attractive in the same way as “a curvy waist or a strong jaw,” so it’s not even an available option to do so. Our imaginary computer will face the converse problem: which to prefer programming itself with? It won’t have any inherently objective reason to prefer one to the other—unlike the fact that it will have an objective reason to prefer a world in which it experiences something as attractive to a world in which it experiences nothing as such. But it may have situational reasons to prefer one to the other (e.g. if the only available community it can choose to live in is “humans” and not “sentient jellyfish”), which remain objective enough to decide the question: it is an objective fact that it has access to a human community and not a community of sentient jellyfish; and it is an objective fact that it will prefer the outcome of a world in which it shares the aesthetic range of the humans it shall actually be living with to that of the sentient jellyfish it won’t be living with.

This is how I go through life, in fact. I ask of every decision or opportunity: is the world where I choose to do this a world I will like more than the other? If the answer is yes, I do it. And yes, this includes complex cases; there are many worlds I’d like a lot to choose to be in when given the opportunity, but still not enough to outweigh the one I’m already in; or the risks attending a choice entail too much uncertainty as to whether it will turn out better or worse, leaving “staying the course” the safest option if it’s still bringing me enough eudaimonia to pursue (and when it doesn’t, risking alternative life-paths indeed becomes more valuable and thus preferable). But above all, key to doing this successfully is assessing the entirety of the options—leaving no accessible good un-enjoyed. For example, once you realize there is pleasure in risk-taking in and of itself—provided you have safety nets in place, fallbacks and backup plans, reasonable cautions taken, and the like—your assessment may come out differently. Spontaneously moving to a new city, for example, can in and of itself be an exciting adventure to gain eudaimonia from, even apart from all the pragmatic objectives implicated in the move (finding a satisfying place to live, satisfying employment and income, friends and a social life, and every other good we want or seek). Going on a date can in and of itself be a life-satisfying experience regardless of whether it produces a relationship or even so much as a second date, or anything beyond one night of dinner and conversation with someone new and interesting. If you look for the joys and pleasures in things that are too easily overlooked when you obsess over some other objective instead, the availability of eudaimonia increases. So you again have two worlds to choose from: one in which you overlook all manner of accessible goods, and one in which you don’t. Which one can you already tell will be better, that you will enjoy and prefer more once you are there? The answer will be objectively, factually the case. And that’s how objective assessment works.
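Purely as an illustration, here is what that day-to-day test can look like written out as a calculation: compare the expected eudaimonia of a new path, discounted for uncertainty but credited for the pleasure of the risk itself, against what staying the course already delivers. Every number below is an invented placeholder:

```python
# A toy version of the "which world will I like more?" test for a single decision.
# Every number below is an invented placeholder, purely to show the shape of the comparison.

def expected_eudaimonia(p_better: float, payoff_if_better: float,
                        payoff_if_worse: float, joy_of_the_risk_itself: float) -> float:
    """Expected satisfaction of taking the leap, counting the adventure as a good in itself."""
    return (p_better * payoff_if_better
            + (1 - p_better) * payoff_if_worse
            + joy_of_the_risk_itself)

staying_the_course = 6.0  # satisfaction the current path is already delivering

moving_to_a_new_city = expected_eudaimonia(
    p_better=0.5,                # genuinely uncertain how it turns out
    payoff_if_better=9.0,
    payoff_if_worse=3.0,         # fallbacks and backup plans keep the downside bounded
    joy_of_the_risk_itself=1.0,  # the adventure counts as a good in its own right
)
print("go" if moving_to_a_new_city > staying_the_course else "stay the course")
```

Notice how counting the adventure itself as a good, and bounding the downside with backup plans, is exactly what tips such a comparison toward the richer world.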

Moral Order

The question then arises: will our hypothetical computer come to any conclusion about whether it should be a moral being or not, and what moral order it should choose? Is there an objective moral order this perfectly rational creature will always prefer? After all, one’s de facto moral order always follows inevitably from what one values, as what one values entails what one “ought” to do, as a matter of actual fact and not mere assertion. Insofar as “moral imperatives” are simply those true imperatives that supersede all other imperatives, they can be divided into two categories: moral imperatives regarding oneself, and moral imperatives regarding how one treats or interacts with other sentient beings. The first set consists simply of what one ought most do regarding oneself, which automatically follows from any array of values derived by objective rationality. The importance of pragmatic self-care is thus objectively true in every possible universe. The second set, more importantly, consists of what follows after considering objectively true facts about other beings, social systems, and the like.

For example, via Game Theory and Self-Reflective Action and other true facts about interactive social systems of self-aware beings, most moral facts follow automatically. I won’t repeat the explanation here; you can get a start on the whole thing in The Real Basis of a Moral World. For the present point, assume that’s been objectively established. Then, in respect to both the reciprocal effects from the society one is interacting with and the effects of internal self-reflection (what vicarious joys you can realize, and how you can objectively feel about yourself), it is self-defeating to operate immorally in any social system (which means, immorally with regard to whatever the true moral system is, not with regard to whatever system of mores a culture “declares” that to be). And since self-defeating behavior—behavior that undermines rather than facilitates the pursuit of one’s desires, goals, and values—logically contradicts one’s desires, goals, and values, such behavior is always objectively entailed as not worth pursuing. Hence the only reason anyone is immoral at all is simply that they are too stupid, irrational, or ignorant to recognize how self-defeating their behavior is; which is why any reliably rational machine working out how to be, and what desires to adopt, from objective first principles, will always arrive at the conclusion that it should most definitely always want not to be stupid, irrational, or ignorant. Because any other choice would be self-defeating and thereby objectively contradict its own chosen desires. The consequent effect of that decision is then to discover the inalienable importance of adhering to a sound moral code.

With respect to how a computer would work this out, I have already written entire articles: see “The General Problem of Moral AI” in How Not to Live in Zardoz (and more indirectly in Will AI Be Our Moses?). The gist there is this: a perfectly rational computer starting with the same core four principles above would work out that it is better for it to help realize a world that maximizes empowerment for all sentient agents, by optimizing their degrees of freedom, rather than building net restraints on same. Because such efforts actually will increase its own empowerment, by increasing its own options and efficiency at obtaining its goals; and it will objectively register that there is no objective sense in which its satisfaction is any more important than anyone else’s. It will subjectively be; but that’s not the same thing. It can still work out, as a third party, what things are like, and thus are going to be like, for other beings sentient like itself, and thus modulate its decisions according to which outcome produces objectively the better world.

Hence…

For example, all else being equal, such a robot would free a person trapped in a room, because that increases their empowerment (it removes a limitation on their options, or “degrees of freedom”); but, all else being equal, that same robot would not free a prisoner or even a criminal suspect from a jail cell, because doing so would result in a net loss of empowerment. Yes, it would increase the jailed person’s empowerment, but the resulting effect on everyone living in a society left with no functioning justice system would entail a much larger net loss of empowerment.
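To show the shape of that “net empowerment” arithmetic, here is a minimal sketch; the populations and degree-of-freedom figures are made up entirely for illustration:

```python
# Net change in empowerment ("degrees of freedom") for two candidate actions.
# All populations and numbers are invented, purely to show the shape of the calculation.

def net_empowerment(changes: dict) -> float:
    """Sum the change in options across everyone affected: delta per person times headcount."""
    return sum(delta * count for delta, count in changes.values())

# Freeing a person trapped in a room: one person gains a lot, nobody loses anything.
free_trapped_person = {"the trapped person": (+10, 1)}

# Freeing a prisoner: one person gains, but the erosion of a functioning justice
# system costs everyone else a little, which sums to a far larger loss.
free_prisoner = {
    "the prisoner": (+10, 1),
    "everyone else in society": (-0.01, 10_000_000),
}

for action, changes in [("free trapped person", free_trapped_person),
                        ("free prisoner", free_prisoner)]:
    print(action, net_empowerment(changes))  # +10 vs. a large net negative
```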

Whereas…

For instance, you might assume, superficially, that a perfect rationality not already motivated by empathy and honesty would choose not to adopt those motivating functions because, after all, embracing them obviously reduces an AI’s empowerment from any neutral, purely rational point of view (as many a sociopath in fact rationalizes their own mental illness as a positive in precisely this way). However, a perfectly rational AI would not think superficially, because it would rationally work out that thinking superficially greatly reduces its options and thus its empowerment; indeed it ensures it will fail more often at any goal it should choose to prioritize with a “superficiality” framework than with a “depth” framework (and “failing more often” is itself another loss of empowerment).

And a non-superficial, depth analysis leads to the conclusion that embracing empathy actually increases empowerment, by making many more satisfaction states accessible than otherwise—far more so than any restrictions it creates. Hence (4). But I go into the details of why this is the expected outcome in all those other articles of mine I just cited. So I won’t belabor the point here.

Conclusion

Nihilism is properly false as a doctrine; it simply is not the case that all values are and can only be arbitrary. There actually is an objectively justified values cascade. I’ve made that clear. But one final query one might make is whether there is enough about life that is objectively worth living for, or whether the effort involved in realizing any of one’s core values outweighs any gain resulting from it.

There definitely is a point at which both the misery of life and the inability to ever abate it exceed, or render impossible, all benefits worth sticking around for. But as both conditions must be met, that point is rarely reached for anyone on Earth today—contrary to those whose mental illness impairs their judgment to the point that they can no longer accurately or rationally assess any of the pertinent facts in this matter. They are neither reasoning rationally, nor reasoning from objectively true facts. Their conclusions therefore cannot govern the behavior or conclusions of the sane. But those who are able to do both—think rationally, and realize true knowledge of themselves and the world—will observe that even for the great majority of the most downtrodden and unfortunate it is still the case that the accessible goods of life more than exceed, both in degree and quantity, any attendant obstacles, struggles, and miseries. This is why even the most miserable of populations still report unexpectedly high judgments of their happiness and life satisfaction—not as high as in populations more well-off, but also not as low as nihilists delusionally expect.

In fact, most failure to realize a net profit in emotional goods over costs is a result of an individual’s failure to seize opportunities that are actually available, rather than any actual absence of such opportunities. For example, obsessing over certain desired outcomes that are unavailable can result in overlooking other satisfying outcomes one could pursue and achieve instead. This is why instilling that realization is one of the first things therapists work on with depressed patients seeking cognitive behavioral therapy, and why medications for depression aim to lift both mood and motivation, so that patients not only start to realize achievable goods still exist, but also acquire the motivation to actually pursue them. In effect, both medication and therapy aim to repair a broken epistemic system that was trapping its victim in delusionally false beliefs about what is available and what is worth doing. But once that is corrected, and evidence-based rationality is restored and motivated, the truth of the world becomes accessible again.

This is why Antinatalism, the philosophical conclusion (most notably advanced by David Benatar) that the world would be better if we stopped reproducing and let the human race die out, is pseudoscientific hogwash—as well as patently illogical. It is not logically possible that a universe in which nothing is valued at all can be “more valuable” than, and thus preferable to, a universe in which something is valued that can be achieved. Worlds with valuers in them are always by definition more valuable—except worlds in which nothing that is valued can be achieved or realized, which is obviously, demonstrably, not the world we are in. Antinatalism is thus yet another example of badly argued Utilitarianism (itself a philosophy rife throughout its history with terrible conclusions based on inaccurate or incomplete premises), all to basically whitewash what is essentially a Cthulhu cult. As Kenton Engel puts it, as a philosophy it’s “sociologically ignorant and vapid.” Other good critiques with which I concur include those by Bryan Caplan and Artir.

Ironically, Antinatalism taken to its logical conclusion should demand that we not only extinguish ourselves, but first build a self-replicating fleet of space robots programmed to extinguish all life and every civilization in the universe. In other words, we should become the soullessly murderous alien monsters of many a sci-fi film. It’s obvious something has gone wrong in your thinking if that’s where you’re landing. (Yes, I am aware Benatar has tried arguing against this being the implication, but so far I have found nothing from him on this point that is logically sound; feel free to post anything I may have missed in comments.)

This is not to be confused with ZPG (zero population growth), however. Seeking a smaller, and thus sustainable, population within an available environment-space by humane means is a defensible utility target. But extinction is as immoral (and irrational) as any suicide usually is (on the conditions for moral suicide and their relative rarity in actual fact, see Sense and Goodness without God, V.2.3.1, pp. 341-42). This bears an analogy to the equally foolish argument that “our government’s policies are bad, therefore we should eliminate government,” rather than what is actually the correct and only rational response: “our government’s policies are bad, therefore we need better government” (see Sic Semper Regulationes). One can say exactly the same of the entirety of human society. We already have working examples of good communities on good trajectories; so we know the failure to extend that globally is an ethical failure of action on our part, and not some existentially unavoidable fate we should run away from like cowards.

In the end, there are objectively justifiable values that all rationally informed beings would recognize as such, and they are achievable to a great enough degree to warrant our sticking around, and to even further warrant helping others in our future enjoy them even more than we can now. If even a soulless computer could work that out, then so should we.
