In 2020, Christian philosophers Kenneth Boyce and Philip Swenson presented a thesis at a conference, which has yet to appear in peer review (though a version is under review), arguing that “fine tuning” is actually evidence against a multiverse. This is strange, because it is most definitely the other way around. Consequently, a patron funded my research attention on this, as they wanted to know how these guys got to the wrong conclusion. Boyce and Swenson’s presentation paper was titled The Fine-Tuning Argument Against the Multiverse; and Boyce blogged a popular summary of it for the American Philosophical Association under the same title. He was also interviewed on it at Capturing Christianity. I reviewed all these materials to ensure I correctly understood what they are trying to argue.

Background

My readers will probably be well familiar with the concept of “fine tuning” as a reference to apparent “coincidences” in the fundamental “settings” of the universe (the so-called “physical constants”) that allow life to arise as a random outcome of it. They might also know how multiverse theory would render this inevitable: with countless randomly selected universes actually existing, the probability that at least one would exhibit apparent fine tuning and thus produce life is as near to 100% as makes all odds. Thus, as an explanatory hypothesis, it does away with any need for intelligent design. In fact these are arguably the two most popular hypotheses for explaining “fine tuning”: the (often vague and hand-wavy) idea of intelligent design, and multiverse theory. There are other explanations (see A Hidden Fallacy in the Fine Tuning Argument). But the question today is the relative probability of these two more popular ones. So, to keep things simple, I will pretend here as if these are the only contenders.

In actual science, conducted by real scientists in the field of cosmology, intelligent design is a fringe position at best. It has no foundation (unlike multiverse theory: there is zero science establishing that the required supernatural or even preternatural powers or entities even exist). And it has never produced a successful peer-reviewed cosmological model (it only gets bandied about in philosophy; it never succeeds as an actual explanation of cosmological observations). By contrast, multiverse theory is the leading position among actual scientists for exactly the opposite reasons: most of the leading theories of cosmology, which actually explain a large number of bizarre observations and rely on well-established findings of science (like quantum mechanics and nonlinear dynamics), happen to entail or fail to prevent multiverse solutions (see Six Arguments That a Multiverse Is More Probable Than a God). That is too unlikely an epistemic coincidence. It suggests that’s the real explanation for fine tuning.

But even from a mere philosophical perspective, we can treat Intelligent Design (ID) and Multiverse Theory (MT) as competing hypotheses, and ask (as we should) what they each predict differently than the other if they were true, and then go and look and see which observations bear out. In other words, we should ask, and answer, the question of what caused fine tuning with the scientific method, rather than the armchair “wishful thinking” methodologies of theology. Because epistemically it simply comes down to: which hypothesis makes our actual observations more likely. One might concede MT wins this battle over ID, and try to fight a battle over their relative prior probabilities instead, but theism performs poorly there once you start treating it correctly (see, again, A Hidden Fallacy in the Fine Tuning Argument). But the Boyce-Swenson thesis is that something about fine tuning as evidence makes MT less likely than ID, and that’s not an argument over priors, but likelihoods: they are claiming the evidence is less likely on MT than on ID. And their premier evidence here is “fine tuning” (or FT) all by itself.

Which gets us to what’s strange about this. Or maybe it’s not strange, given that Boyce and Swenson’s approach commits a typical error of almost all Christian apologetics: they leave evidence out that, when you put it back in, completely reverses their conclusion.

The Boyce-Swenson Argument and Its Critics

But we’ll get to that. First, let’s summarize their argument. Let’s set aside their ancillary claim that “If the [ID] hypothesis is false and there is only one universe, then it seems extraordinarily improbable that the fundamental constants of nature would just so happen to fall within the life-permitting windows.” That’s true but moot, as using that as an argument commits the usual Fallacy of Fine-Tuning Arguments: ignoring the deeply problematic question of the vanishingly small prior probability of a God (or any requisitely convenient Being), which no evidence makes any more likely than a single instance of random “lucky” fine tuning. Getting a God requires even more luck than that. So while in the SU model the Likelihood Ratio will favor ID, the Prior Odds would not, and these would cancel out, leaving us none the wiser whether ID or SU (a chance-incident single-universe model) is the more epistemically probable. These probabilities are both simply inscrutably small. Boyce and Swenson aren’t resting their argument on that, though, and I’ve already addressed this problem elsewhere. Moreover, MT eliminates even the likelihood disparity here, and thus is an explanatorily superior hypothesis to SU, particularly given that it rests now on solid scientific foundations and is not just some possibility being posited out of hand. So let’s stick with that here.
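To see that cancellation in miniature, here is a minimal sketch in the odds form of Bayes’ theorem, using purely hypothetical placeholder values (since, as just said, the real quantities are inscrutable):

```python
# A minimal sketch of the cancellation described above, in the odds form of
# Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# Both values below are purely hypothetical placeholders. The point is
# structural: however much the likelihood ratio favors ID over SU, an
# equally minuscule prior for ID cancels it out.

prior_odds_ID_vs_SU = 1e-40        # hypothetical: a God is an even "luckier" draw
likelihood_ratio_ID_vs_SU = 1e40   # hypothetical: how much FT favors ID over SU

posterior_odds = prior_odds_ID_vs_SU * likelihood_ratio_ID_vs_SU
print(posterior_odds)  # ~1.0, i.e. even odds: we are none the wiser
```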

Boyce and Swenson lean on the prior arguments of philosophers Ian Hacking and Roger White, who decades ago maintained that this reasoning was an “inverse gambler’s fallacy” (a concept invented by Hacking). As Boyce and Swenson admit, this argument only gets to a “you can’t tell either way” conclusion, and not a conclusion actually supporting ID. They want to push it further. But that’s a problem if the Hacking-White argument isn’t even applicable here—then there isn’t even a stool left for the rest of the Boyce-Swenson argument to stand on. And lo, Hacking’s application of this notion to MT has already been refuted: in an online paper written up by Darren Bradley, “A Defence of the Fine-Tuning Argument for the Multiverse” (May 2005), and in a peer-reviewed paper by John Leslie, “No Inverse Gambler’s Fallacy in Cosmology,” Mind 97.386 (April 1988), pp. 269–272. Both present formal proofs of the inapplicability of Hacking’s argument. But the gist is simply that Hacking (like White) is ignoring the selection effect present in the cosmology case that isn’t present in their analogies—such as Hacking’s example of a gambler watching their first-ever roll of dice. As Bradley puts it, “a condition must be satisfied for an inverse gambler’s fallacy to be made that is not satisfied in the cosmology case.”

Boyce-Swenson’s Fatal Mistake

Their error is analogous to the mistake people make in the Monty Hall Problem when they forget to account for the selection effect (Monty Hall had limited choices as to which door he could open, which rearranges the probabilities). That selection adds information to the scenario, and it’s that information that changes your conclusion. Leslie provided the most salient analogy:

You catch a fish of 12.2539 inches. Does this specially need to be explained? Seemingly not. Every fish must be of some length! But you next discover that your fishing apparatus could catch only fish of this length, to within one part in ten thousand. It can now become attractive to theorize that there were many fish in the lake, fish swimming past your apparatus until along came just the right one.

“No Inverse,” p. 270

By contrast, Hacking-White are talking about a gambler rolling two standard dice, seeing them come up boxcars, and then inferring there must have been a lot of prior rolls of those dice—an obvious fallacy. Ironically, this being the case is more a problem for the theist, though that was not noticed by Leslie or Bradley: suppose the universe will go on for infinite years; that would mean any event of any nonzero probability will inevitably occur; but if that’s the case, the most improbable event could have occurred as likely at the beginning as anywhere else; consequently a single universe makes fine tuning by chance perfectly likely again, and you can only get away from that realization by resorting to Hacking’s “inverse gambler’s fallacy,” forgetting that a chance twelve on the dice is as likely to be the first roll as any other in a running series of rolls. So, really, it’s the theists who are committing the inverse gambler’s fallacy. They are using it to conclude someone cheated; but for exactly the reasons Hacking and White explain, we actually can’t tell from the first roll being lucky that anyone cheated (I’ve pointed this out before, in The End of Christianity, p. 293). Because that roll was just as likely to be lucky as any other.

But that isn’t the problem Leslie and Bradley point out. They are focused on the misapplication of the fallacy to MT. The difference is between a gambler rolling dice (or watching them rolled) and then explaining the result, and a gambler staring at a hundred rooms and being told they will only be brought into one of those rooms to see what was rolled if the dice roll twelve. The latter gambler has information that the former gambler does not: the selection that guarantees they will only get to see twelves. This actually allows the inference that other dice have been rolled, owing to a simple rule of probability: you are always more likely to be typical than exceptional. Which is actually a tautology—it just restates the fact that more probable things are more probable (more frequent) than less probable things. But that tautology has consequences here.

This is why a gambler rolling dice can’t say it is “more typical” for twelves to be rolled first, or last or anywhere in particular. But a gambler can say it is “more typical” for someone to win a lottery if that gambler wins a lottery. For example, if the gambler wins a lottery they know to be millions to one against, and has no other information than that, they can rightly conclude it is more likely that there are lots of lottery players and that people win all the time, than that that lottery has only been played once and only by them, and yet they won. That would make them exceptional (rare), not typical (commonplace). And they are more likely to be commonplace than exceptional. And of course evidence bears this out: pop the hatch and look around, and lo, there are thousands of people who have won lotteries; “millions to one” lotteries are won every month—it’s routine, not exceptional. While the external evidence could have reversed that conclusion (we could pop the hatch and see we were the only player all along), before we get to check that evidence it is (from our epistemic perspective) unlikely that’s what the evidence will turn out to be; because it is, literally, the least likely (the most uncommon) situation we could be in (of all the possible situations we could be in).
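One can check this with a quick simulation of the closed-rooms scenario just described (the roll counts and trial counts here are illustrative choices of mine, not anything from Boyce and Swenson):

```python
import random

# A minimal simulation of the "closed rooms" scenario. Hypothesis A: the dice
# were rolled once. Hypothesis B: they were rolled 100 times. In both cases
# the gambler is only shown a roll if it came up twelve (the selection
# effect). Given that the gambler HAS been shown a twelve, which hypothesis
# is more likely?

def shown_a_twelve(rolls: int) -> bool:
    """Roll two dice `rolls` times; the gambler gets shown something only
    if at least one roll came up twelve."""
    return any(random.randint(1, 6) + random.randint(1, 6) == 12
               for _ in range(rolls))

trials = 100_000
p_shown_if_one  = sum(shown_a_twelve(1)   for _ in range(trials)) / trials
p_shown_if_many = sum(shown_a_twelve(100) for _ in range(trials)) / trials

print(f"P(shown a twelve | 1 roll)    ~ {p_shown_if_one:.3f}")   # ~1/36 ~ 0.028
print(f"P(shown a twelve | 100 rolls) ~ {p_shown_if_many:.3f}")  # ~1-(35/36)**100 ~ 0.94
# With equal priors, "I was shown a twelve" favors "many rolls" over
# "one roll" by a Bayes factor of roughly 0.94 / 0.028, about 34 to 1.
```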

So the Hacking-White fallacy does not apply to fine tuning, because fine tuning adds information: a selection effect guaranteeing we will only ever see a win—that, in effect, we will only ever reel in fish precisely 12.2539 inches long. In that case, it simply is far more likely that we are the victim of a selection effect than that we were “just lucky.” (I already pointed this problem out in my chapter on design arguments over a decade ago in The End of Christianity, pp. 196–98, with the corresponding notes on pp. 411–12. I find that theists, even mathematicians among them, have a very hard time understanding this point. Which may go to explain why they remain theists.)

In the open scenario, where a gambler encounters dice for the first time and then rolls them, the gambler has no information by which to decide whether that was the first ever roll of dice or a late roll of the dice—unless they have outside evidence regarding this, which is another problem with the Hacking-White argument: it ignores the fact that we have such evidence. We are not just waking up in The Matrix and tasked with explaining how we got there. We have a lot of external evidence bearing on the question, just as in the real world a gambler knows dice have been rolled a gazillion times already and are being rolled a gazillion times around the world as they themselves roll theirs. In the real world (not the fictional fantasy world Hacking-White invented), gamblers know things—they have evidence regarding how often dice have been and are being rolled, and even regarding the physics of dice. Just as we now do regarding fine tuning—we have a lot of external evidence bearing on the question of what caused it, by which we can comparatively test explanations like ID or MT. And test them against that evidence we must. But the point here is that in addition to all that other evidence (which I’ll get to), fine tuning itself already gives us a crucial piece of evidence: that our observation has already been selected by the thing we are trying to explain. We are the gambler in the closed lottery scenario (with the many closed rooms), or Leslie’s fishing scenario. We are not the gambler in Hacking-White’s scenario. We have more information than they do—already. And we get even more when we pop the hatch and look around. Which we have.

So already the Boyce-Swenson argument is screwed. Their reliance on the invalid Hacking-White thesis collapses their entire case before it even gets started. As Bradley puts it (my emphasis):

The Inverse Gambler’s Fallacy is only committed if the specific evidence refers to a trial that has the same probability of existing in any relevant possible world. [But o]ur universe is more likely to exist given that there is a Multiverse rather than a Universe. So this objection to the Fine-Tuning argument for the Multiverse does not work.

And this inference follows from the information given to us by the inherent selection effect of “fine tuning.” Since, on MT, we will only ever observe a finely tuned world (because no life arises in other worlds), this fact gives us information about how we got here, such that, if we have independent reasons to doubt ID (as we do: see, for example, The Argument from Specified Complexity against Supernaturalism and Naturalism Is Not an Axiom of the Sciences but a Conclusion of Them), then fine tuning is evidence for MT. Others have noted it’s even worse than that: FT is in fact evidence against ID (a surprising but unavoidable fact, which I’m getting to). But the point here is that FT by itself makes MT more likely than SU, and therefore is evidence for MT. This is not an inverse gambler’s fallacy.

At most one can say here, “But ID also makes FT likely; so FT alone can’t help us distinguish between ID and MT,” which is true (if we grant the premise, the conclusion follows), just as one can say, “But ‘someone cheated’ also makes my first roll being twelve likely; so my first roll being twelve can’t help me distinguish between ‘someone cheated’ and my just being lucky.” But that is not the situation we are in. We have information bearing on whether someone cheated, and that information renders it unlikely—but even with no information, we’d still need a lot more evidence than “my first roll was a twelve” to get us to “someone cheated.” Analogously, we have information bearing on whether ID caused FT, and that information renders it unlikely—we’d need a lot more evidence than FT to get us to “ID caused it.” And as it happens, when we go looking, all the evidence goes the other way. ID simply doesn’t pan out as likely. This is another common error in apologetics: Misunderstanding the Burden of Proof. And that’s even granting the premise—when, actually, the premise that “ID makes FT likely” happens to be false. God has no need of FT in exactly the same sense as God has no need of a starship.

The Boyce-Swenson argument ignores this and tries to build on the Hacking-White thesis such that if we grant that ID is a more probable cause of FT than MT, then this being the case makes MT less likely than SU. But this is a circular argument. You have to start by concluding you don’t need MT to explain FT in order to end up concluding you don’t need MT to explain FT. This is unsound logic. Whether ID is a more probable cause of FT than MT is precisely the thing we are supposed to be asking, not presuming. And at any rate, it is tautologically obvious that if you prove God made our world, it’s then unlikely he used a Multiverse to do it. They grant the possibility he did; their argument is simply that it’s not the most likely conclusion in that case, which is fair enough. One might challenge that, but I have no interest in doing so. Because this is a very uninteresting conclusion. And the fact is, they never establish their key premise (that ID renders FT likely) anyway; it is in fact false. And they never address any of the scholarship that already proved it false.

So, in the end, Boyce and Swenson’s argument proceeds as follows: they presume ID renders FT likely whereas, owing to Hacking-White’s inverse gambler’s fallacy, the uncertainty of MT supposedly renders FT a 50/50 prospect at best, an “unknown” rather than a dead certainty (in effect saying “you can’t tell from the dice coming up twelve the first time that there were any other rolls of the dice,” which means it’s 50/50 at best that there were); and this then allows their conclusion that FT more likely correlates with ID than MT, which entails MT is therefore not likely. But this is wrong seven ways from Sunday. There is no evidence FT is likely on ID. Whereas there is evidence that it is unlikely on ID but likely on MT. And since MT does not implicate Hacking-White’s inverse gambler’s fallacy, the conclusion that MT explaining FT is at best “50/50” is simply false.

To the contrary, given MT, P(FT) is effectively 100%: there will then be, to a certainty near 1, at least one life-containing universe and therefore at least one FT observation; and indeed, per the Hacking-White fallacy, we are as likely to be it as any. The most they could try to get is, given ID, P(FT) is also effectively 100%, leaving us at a wash between ID and MT in explaining FT. They can’t even get to that, though, owing to the fact that the premise “ID entails FT” is false. But assume they nevertheless could. That still disallows the Boyce-Swenson thesis. Because you can’t get from “ID and MT are equally likely given FT” to “ID is more likely than MT,” and therefore you can’t get to “MT is unlikely.” It all collapses like a house of cards—once you realize the error they made at the very beginning.
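For the arithmetic behind that “near 1” claim, here is a minimal sketch (the per-universe probability p is a hypothetical placeholder; nobody knows its true value, only that it is tiny):

```python
# Why P(FT|MT) approaches 1. Let p be the (tiny, hypothetical) chance that
# any one randomly configured universe is life-permitting. With N independent
# universes:
#   P(at least one FT universe) = 1 - (1 - p)**N  ->  1 as N grows.

p = 1e-10  # hypothetical placeholder for the per-universe chance
for N in (1, 10**10, 10**12):
    print(N, 1 - (1 - p)**N)
# N = 1      -> ~1e-10 (hopeless)
# N = 10**10 -> ~0.63
# N = 10**12 -> ~1.0 (effectively certain)
```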

Formal Identification of the Error

This error formally appears when Boyce and Swenson claim P(L|¬T&S&K) = P(L|¬T&¬S&K), or P(Life|not-Theism, and Single Universe, and everything else we know) = P(Life|not-Theism, and not-Single Universe, and everything else we know), i.e. that the probability of life without ID is the same whether a single universe exists or a multiverse. In fact it is false that a single godless universe is as likely to generate life as a multiverse. That’s like saying a single draw of poker is as likely to generate a royal flush as a million draws of poker. Sorry, but, no. They have screwed up here, committing an equivocation fallacy: confusing “this universe” with “a universe.”

Boyce and Swenson are actually saying “the probability that this universe would be life permitting is the same” on SU and MT, which is a tautology (the isolated—as opposed to total—improbability of this is the same whether SU or MT; just as the probability of drawing a royal flush right off the top of a deck of cards is always the same), and then confusing that with, “the probability that there would be any life-permitting universe is the same.” Which is false—owing to their misapplication of the inverse gambler’s fallacy. Instead, as we all know, the probability of someone drawing a royal flush is increased by the number of draws, so add in the selection effect (you only get to look at what was drawn if it is a royal flush), and then “there were a million draws” is always more likely (by far) than that there was only one.

This destroys their argument, because it absolutely depends on this false premise, that the probability of life without ID is ‘the same’ whether a single universe exists or not (Boyce & Swenson, pp. 9–10). This mistake is the same as saying that because the probability of a royal flush on any single draw is always the same, the probability that a royal flush ever gets drawn must be the same whether there were a million draws or only one—which is false. The inverse gambler’s fallacy requires that there be no selection effect, such that the gambler cannot know whether they are at the first draw of a series or a later one. But with fine tuning, there is a selection effect. Our pole only catches fish exactly 12.2539 inches long. And thus, our catching a fish is simply far more likely if there are many fish of varying length. So, too, for a multiverse explanation of our existence.
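The royal flush version of this can be computed exactly, which may make the equivocation easier to see (the million-draw figure is just the illustrative number used above):

```python
from math import comb

# The equivocation in one picture: the chance of a royal flush on any single
# five-card draw is fixed, but the chance that SOMEONE draws one scales with
# the number of draws. (Standard 52-card deck; four possible royal flushes.)

p_single = 4 / comb(52, 5)                          # ~1 in 649,740
p_in_a_million = 1 - (1 - p_single)**1_000_000      # chance of >=1 in 10^6 draws

print(f"P(royal flush | 1 draw)         = {p_single:.8f}")       # ~0.0000015
print(f"P(>=1 royal flush | 10^6 draws) = {p_in_a_million:.3f}")  # ~0.785
# Add the selection effect (you only get to see a draw if it IS a royal
# flush) and "there were a million draws" beats "there was one draw" by
# roughly 0.785 / 0.0000015, about 500,000 to 1.
```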

The Genuine Bayes Factor

In any genuine (and not sham) Bayesian argument, the final (or “posterior”) probability of a hypothesis is only correctly describable as P(h|e&b), where h is the hypothesis, the symbol “|” means “given that,” and e is all the evidence one can adduce for or against h (“evidence”), and b is all other human-available knowledge (“background data”). The notable thing here is that e and b must be exhaustive of all available knowledge: you cannot leave anything out. If you know something, it goes in. Apologetics always operates by leaving things out (see Bayesian Counter-Apologetics).

If you derive a P(h|e&b) but e and b were not complete (or you ignored the effect of their contents on P, which amounts to the same thing), your conclusion does not describe reality. It describes a fictional alternate reality in which what you left out doesn’t exist. But it does exist. So the only way you can claim your P(h|e&b) applies to the real world we actually live in is if you complete e and b and properly derive P therefrom. This never goes well for the apologist. Which is why they always avoid doing it, and hope you don’t notice (and you probably won’t notice, unless you study how all empirical arguments are Bayesian—which is why understanding this is absolutely crucial to modern critical thinking now).
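A toy example may make vivid how leaving evidence out of e changes the answer; all the numbers here are hypothetical placeholders chosen only to show the structure:

```python
# A toy sketch of why e and b must be exhaustive. Suppose hypothesis h makes
# evidence item e1 likely but e2 very unlikely (all likelihoods below are
# hypothetical). Computing the posterior from e1 alone "supports" h; putting
# e2 back in reverses the conclusion.

prior_h = 0.5
P_e1_given_h, P_e1_given_not_h = 0.9, 0.3   # e1 favors h
P_e2_given_h, P_e2_given_not_h = 0.01, 0.5  # e2 strongly disfavors h

def posterior(*likelihood_pairs):
    """Posterior of h given independent evidence items, each a pair
    (P(e|h), P(e|~h)), starting from prior_h."""
    odds = prior_h / (1 - prior_h)
    for p_h, p_not_h in likelihood_pairs:
        odds *= p_h / p_not_h
    return odds / (1 + odds)

print(posterior((P_e1_given_h, P_e1_given_not_h)))   # ~0.75: h looks good
print(posterior((P_e1_given_h, P_e1_given_not_h),
                (P_e2_given_h, P_e2_given_not_h)))   # ~0.06: h is sunk
```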

When asking what predictions ID and MT differentially make—what observations they predict differently from each other—we get entirely different results than Boyce and Swenson. First, of course, ID predicts far more intelligent and values-based design and governance in the world than we observe. The Argument from Evil is actually a contra-design argument: it points out what ID predicts but which fails to show up; whereas the actual availability and distribution of evils in the world is exactly as expected on MT: indifferent to any intelligent arrangement whatsoever, apart from entirely human intervention. But we will set that aside today (though it is relevant evidence: it has to go into e in any argument to design). Let’s more impersonally ask what we’d expect. Because when we do that, we still get very crucial observational differences.

For example, if MT caused FT, then we would expect to observe some peculiar things to a very high probability—indeed an arbitrarily high probability, as only Boltzmann effects could get a different result, at probabilities so small as to be beyond absurd, and thus not at all possible to expect. Because given MT (and “not ID”) life can only arise by a highly improbable chance accident, and therefore can only be expected if there has been a vast scale of random trials. And this entails we should expect to see a universe of vast size and vast age that is almost entirely lethal to life. Which is what we observe: the universe is dozens of billions of lightyears in size (at least), over a dozen billion years old (again, at least; as this is only the age of the locally observed portion of the cosmos), and almost entirely a lethal radiation-filled vacuum. Even “places to exist” here are almost entirely life-killing stars and black holes, while even places “not that” are almost entirely lifeless perches—frozen, irradiated, or violent—incapable of habitation. The scant few places life even could arise are self-evidently the product of random mixing of variables, like a planet or moon’s distance from a star, its size, chemical composition, and the happenstances of its local astrophysical history—variables we indeed see scattered in random variation across the universe.

This randomness, and those indicators of randomness as a cause, are what we almost certainly will observe if MT is what caused FT. Whereas none of this is predicted or expected if ID is true. In fact, ID sooner predicts the opposite (a young, small, well-arranged, and highly hospitable universe). But even if you are too gullible to accept that, you cannot rationally deny the fact that this is not what ID predicts (you have to gerrymander ID to get this result, with a bunch of ad hoc emendations that lack any inherent probability)—but it is exactly, and peculiarly, what MT predicts (no ad hoc emendations needed). Hence Why the Fine Tuning Argument Proves God Does Not Exist.

Ultimately, you can’t escape the mathematical consequences of this observation with gerrymandering. You might want to fabricate a convoluted “just so” story to explain why your hypothesized designer wanted to engineer the universe to look, weirdly, exactly like a universe would be expected to look if no such designer existed, but that just moves the improbability of your hypothesis around inside the equation; it can’t get rid of it (see The Cost of Making Excuses). The posterior probability ends up the same: disfavoring ID as an explanation. God can just make universes work. Because Gods are magic. They aren’t constrained by local physics. So they don’t need FT. Whereas, without God, FT is the only kind of universe that can contain life. So, if ¬ID, then the only way life can ever observe itself existing is if FT, which means P(FT|Life&¬God) = 1. But because Gods don’t need FT to make universes life-hospitable, necessarily, P(FT|Life&God) < 1, and therefore P(FT|Life&God) < P(FT|Life&¬God). FT is therefore evidence against ID, and thus for MT; not the other way around.
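Expressed as a minimal numeric sketch (the only premise being the one just argued, that P(FT|Life&God) must be less than 1; the specific value below is a hypothetical placeholder):

```python
# The core inequality from the text. Without a God, observers only ever
# exist in fine-tuned worlds, so P(FT|Life&~God) = 1 exactly. A God does
# not need FT to host life, so P(FT|Life&God) < 1; the 0.1 below is a
# hypothetical placeholder—any value < 1 yields the same direction.

P_FT_given_life_no_god = 1.0
P_FT_given_life_god    = 0.1  # hypothetical; only "< 1" matters

bayes_factor = P_FT_given_life_no_god / P_FT_given_life_god
print(bayes_factor)  # always > 1: observing FT shifts the odds toward ~ID
```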

Boyce and Swenson can only get the opposite result, first, by screwing up the probability of L on MT (with their erroneous nonsense about inverse gambler’s fallacies), and then by leaving out all the evidence actually pertinent to telling apart MT and ID as causes of FT. They are thus not only screwing up the math, they are also “rigging the evidence,” hiding Oz behind a curtain (“Do not look behind the curtain!”). Which is not legitimate. The result is that their conclusion simply does not apply to reality. It only applies to a fictional, non-existent world they have invented in their heads, one that doesn’t have all this differential evidence for MT and against ID as a cause of FT.

Indeed this is true even without MT: even on SU, FT is always evidence against ID. The most one can get is more evidence for ID, or a higher prior for ID, to counteract the unfavorable evidence of FT; but theists never have that. Theists are thus stuck needing to establish ID despite FT being a manifestly weird choice by a Creator, making the universe look exactly like a universe would have to look if there was no Creator. Which is an embarrassing position for them to be in. And this was all proved years ago: see Elliott Sober, “The Design Argument” (2004), the latest version of which is now in The Design Argument (Cambridge University Press 2018); and Michael Ikeda and Bill Jefferys, “The Anthropic Principle Does Not Support Supernaturalism” (2006), an earlier version of which appeared in The Improbability of God (Prometheus 2006); all of which is summarized in my article on Why the Fine Tuning Argument Proves God Does Not Exist.

Because of all this, MT is a much better explanation of the observed facts than both ID and SU.

Conclusion

There is in fact a lot of evidence supporting MT that does not support ID, even apart from what I mentioned here, as again I relate in Six Arguments That a Multiverse Is More Probable Than a God. It’s also possible to derive MT from an initial state of absolutely nothing (see The Problem with Nothing). Not appreciating the power of randomness to inevitably generate order is a common failing among theists (see Three Common Confusions of Creationists and The Myth That Science Needs Christianity). Theists also tend to get everything wrong about the actual facts and what they really entail (see Justin Brierley on the Science of Existence).

This happens even in Boyce’s blog, when he says “the degree of fine-tuning required for the cosmological constant to have a life-permitting value has been estimated, for example, to be 1 part in 10^120!” That isn’t true. Its being untrue is of little importance, since Boyce and Swenson don’t rely on this claim in their conference paper, and one can replace this mistake with real examples (like the value of the alpha constant). No one doubts apparent FT. But the cosmological constant isn’t an example of it. The figure Boyce cites is, in fact, “The discrepancy between theorized vacuum energy from quantum field theory and observed vacuum energy from cosmology” (emphasis mine). In other words, that number only exists because our theory of quantum mechanics fails, predicting a wildly different result. This means our theory is wrong. It does not mean anyone “tuned” the universe that far away from our theory. In practical fact, the vacuum pressure this constant measures can vary a great deal (by a factor of even a hundred or more) and still generate life (even more so when you allow other constants to vary). And even that constraint already depends on an untenable assumption: that you can change the average vacuum energy of a universe without changing anything else. That seems unlikely. But this gets into all the problems with actually nailing down what even has been tuned or could be, what is even possible or likely in the first place (see my discussions here and here).

The formal presentation of Boyce and Swenson avoids this problem by simply punting to other publications for the fact of FT. So getting FT wrong isn’t what’s wrong with their argument. What’s wrong with their argument is ineptly thinking that concluding the probability of FT on MT is high is an inverse gambler’s fallacy. It’s not. It’s a straightforward likelihood: given MT, there will be FT (to an arbitrarily high probability, and hence effectively a probability of 1). The question of whether FT is then evidence of MT depends on the probability of FT on ~MT, which we all agree is low on ~ID and which Boyce and Swenson insist is high on ID. But they are wrong about that. Since ID has no need of FT, and in fact using FT to get life is a really weird and unnecessary thing for ID, indeed wholly unexpected (as it makes the universe look exactly like a universe with no ID in it, an even stranger choice for ID), P(FT|MT) is logically necessarily always higher than P(FT|ID). FT is therefore always evidence for MT over ID.

This follows even if you build out a model of ID that also entails its own MT (as Boyce and Swenson consider), because FT remains unnecessary and peculiar even in that scenario. A God who made a bunch of universes would make them all look like wonderlands governed by his magical will; he would not bother with the callous, clunky, and absurdly bizarre tinkering of trivialities like the mass of the top quark. God has no need of quarks, much less maintaining hyper-specific masses for them. Only godless universes need these sorts of things, as without God, only these sorts of things can generate life. The existence of quarks alone is thus evidence against ID, in just the same way the Argument from Evil is. Hence what makes MT entail FT is the absence of ID.

You can test this with a hypothetical array: imagine all logically possible universes in the set of all universes lacking ID; how many observers in that set will observe themselves in an FT universe? All of them. Because in ~FT worlds, observers never arise (whereas they could, indeed even likely would, if ID is true). Thus, the probability of that observation (FT), on that condition (~ID), is 100%. That only leaves the question of the prior probability of FT on the assumption of ~ID, which is again high on MT, arguably low on ~MT (i.e. on SU). Sure, ID also has an excruciatingly low prior (that’s A Hidden Fallacy in the Fine Tuning Argument). But that’s a separate tack. Here the point is: when you are using FT as evidence, and not trying to argue over its relative priors, then FT is always evidence against ID. Because it’s just always more likely to be observed on ~ID. This cannot be gotten around, not least by falsely pretending this is an inverse gambler’s fallacy.
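You can even run that test as a toy simulation (the 1-in-1000 fine-tuning rate below is an arbitrary illustrative choice; any nonzero rate gives the same result):

```python
import random

# A minimal simulation of the "hypothetical array" described above. Generate
# randomly configured godless universes; observers only arise in fine-tuned
# ones; then ask what fraction of OBSERVERS find themselves in an FT world.
# The 0.001 FT rate is an arbitrary illustrative choice (the real rate is
# unknown and far smaller).

random.seed(1)
universes = [random.random() < 0.001 for _ in range(1_000_000)]  # True = FT
observer_worlds = [ft for ft in universes if ft]  # no observers in ~FT worlds

print(f"Fraction of universes that are fine-tuned: {sum(universes)/len(universes):.4f}")
print(f"Fraction of observers who see fine tuning: {sum(observer_worlds)/len(observer_worlds):.0%}")
# ~0.1% of universes are FT, yet 100% of observers find themselves in one.
```

Trivial as that is, its triviality is the point: the tautology is doing all the work, and no inverse gambler’s fallacy is anywhere involved.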
