Christopher Hitchens rightly said the argument from fine tuning is the best argument theists have, but only because it requires thought to figure out why it’s bullshit (whereas most Christian apologetics is obvious bullshit from the first moment you hear it). Because it is actually a really bad argument. Here I will explain this and then analyze a recent amateur attempt to get around it. The result is my most complete and definitive destruction of this argument to date.

That’s right. We’re nuking the site from orbit.

That the Fine-Tuning Argument Is Bad

A common problem with apologetics—or even just philosophy as a whole—is a reliance on bad arguments (see Formalized Gullibility as a Modern Christian Methodology and Which Is ‘Rational’: Atheism or Theism? on the one hand, and On Hosing Thought Experiments and Why Syllogisms Usually Suck on the other hand—and, as well, You Know They’re a [Good|Lousy] Philosopher If…). The Fine Tuning Argument (or FTA) is a textbook example of this.

All of which is why hardly any theoretical physicist today finds the fine-tuning argument in any way convincing. It’s simply not a scientific theory. It shows up in no peer-reviewed scientific study because it is, quite simply, pseudoscience. It is rather like climate science denial now: the rare expert still wearing this tinfoil hat is up against literally thousands of experts rolling their eyes at them (you can find a whole playlist about this on Phil Halper’s channel).

But the situation is even worse than that. If you compare the predictions of the two “it’s just luck” theories on offer, the “natural luck” model makes predictions that bear out, while the “supernatural luck” model makes predictions that are falsified. That tells us which luck we got. If it was luck at all. Because multiverse theories require very simple starting conditions and thus vastly less luck; so if the evidence indicates natural origins over supernatural, as it all does, they are the most probable fact of the matter. I already thoroughly cover all that, with respect to the false accusation that this commits an inverse gambler’s fallacy, in Boyce and Swenson’s Theological Argument against Multiverse Theory. There are actually Six Arguments That a Multiverse Is More Probable Than a God. And that’s just one of them. There is, by contrast, not even one good argument for God being the explanation of anything, much less this. In fact all the evidence we have is better explained by there not being a God.

In reality Design is never the simplest nor inherently the best explanation of any observed order or coincidence, which I’ll just lump together as “order” (see Three Common Confusions of Creationists). Most examples of that are caused by inevitability (e.g. star and planet formation) or chance-plus-large-numbers (e.g. any individual being struck by lightning is an extraordinarily rare coincidence, yet it happens to someone every single year, because there are so many people). Thus any argument of the form “order, therefore design” is automatically a bad argument. You need evidence for that theory over against the far more common causes of such an observation. And that means evidence other than the thing you are explaining (the order or coincidence at hand), because otherwise you are running a circular argument.

The fact is that prior probability always favors ¬design over design in as-yet-unconfirmed cases (which I’ll call ‘ayu-order’), owing to centuries of scientific discovery finding no cause of order other than natural ones (see Naturalism Is Not an Axiom of the Sciences but a Conclusion of Them). But even had that not been the case, all else being equal, it would still be the case that P(¬design|ayu-order) >>> P(design|ayu-order). For example, even a self-evidently supernatural universe (like, say, a Taoist universe or Theurgical universe or Idealist universe) could behave in a similar way, whereby almost all order turns out to be inevitable or chance-plus-large-numbers outcomes, merely as a result of an inherent (not created or designed) supernatural physics (theists tend to forget they have to rule that out: see The God Impossible and Defining Naturalism).

Generally, there are many multiply-observed causes of observed uniformities to consider, and none of them are gods—who have never been observed to be the cause of anything, making them the least likely explanation of anything. And making it all worse, actual God theories are ridiculously convoluted and full of holes (see Theism, Naturalism, and Explanatory Power and Christianity Is a Conspiracy Theory, for example), making the odds against them worse than evidence and precedent already make them (see Misunderstanding the Burden of Proof).

In ensuing sections I’ll present more evidence and arguments establishing everything I just said above. But first and above all is the fact that the evidence matches natural-luck, not divine-luck…

That the FTA Has Already Been Refuted

The FTA is fundamentally a hypothesis-comparison: either some natural cause produced what we call fine-tuning, or some divine cause did. If we can’t prove either theory is a priori more likely, then we have to look for observable clues, things that would be different on either explanation, as signs of which it actually was. And here everything has come out exactly as we’d expect if it was a natural cause and not divine. And that refutes the FTA. Because if two theories can explain “fine tuning” and one theory predicts A, B, C, and ¬D, ¬E, ¬F, while the other theory predicts D, E, F and ¬A, ¬B, ¬C, and lo, we observe A, B, C, and ¬D, ¬E, ¬F, then theory one has been confirmed and theory two falsified. And that’s that.

I summarized this in Why the Fine Tuning Argument Proves God Does Not Exist. But here is just a sample, from something I wrote long ago:

This universe is 99.99999 percent composed of lethal radiation-filled vacuum, and 99.99999 percent of all the material in the universe comprises stars and black holes on which nothing can ever live, and 99.99999 percent of all other material in the universe (all planets, moons, clouds, asteroids) is barren of life or even outright inhospitable to life. In other words, the universe we observe is extraordinarily inhospitable to life. Even what tiny inconsequential bits of it are at all hospitable are extremely inefficient at producing life—at all, but far more so intelligent life ….

And yet:

Without a God, life can only exist by chemical accident, [and] such a chemical accident will be exceedingly rare, and exceedingly rare things only commonly happen in vast universes where countless tries are made over vast spans of time. Likewise, a universe not designed for us will not look well suited to us but be almost entirely unsuited to us and we will survive only in a few tiny chance pockets of survivable space in it. Atheism thus predicts, with near 100% certainty, several bizarre features of the universe (its vast size and age and lethality to life [as well as our evolution from microorganisms, mind-brain physicalism, the moral indifference of nature, etc., none of which a God’s world would need, but every godless world would]), whereas we cannot deduce any of those features from [“God did it”].

Instead:

If God existed, we’d have pretty much something like what Aristotle or the Bible imagined: a fully inhabited cosmos, top to bottom, no larger or older than it needed to be, everything in existence and working together right out of the gate, Day One. Genesis would have been confirmed to be literally true by now. Or something the like. Space would be a breathable, inhabitable area, void of murderous radiation and meteoric missiles [or deadly stars]. People would already live there, as they will have done, like us down here, since the first instant of creation. The world would work exactly as needed, without hitch, simply by God’s will. There’d be no need of gravity. Things would just fall where he wanted. There would be no nuclear physics. Substances would just have the properties he wanted. There would be no electromagnetism. Light would just shine where he wanted, matter would just cohere as he wanted, and if he still wanted magnets, he’d just will them into existence and to work as he pleased. [So] heaven, or something comparably nice and fantastical, is simply where we’d live. God would have no need of making any other world.

Sure, you can make this go away by fabricating a bunch of epicycles to get God to weirdly want or have to make every aspect of the world look exactly like the world would have to look if no God existed, but as soon as you do that, you’ve made your God ridiculously improbable. You are simply refusing to accept what the evidence is telling you at that point and are trying to bullshit your way out of it. I shouldn’t have to explain why you’ve lost the argument the moment you do that.

Enter Bentham’s Bulldog

Bentham’s Bulldog (Matthew Adelstein) made a recent attempt to “fix” all this. And it doesn’t work. I already refuted the gist of Bentham’s case as represented in Ross Douthat’s Worst Argument for God, but Douthat (unlike Adelstein) is an idiot, so we shouldn’t judge Adelstein by Douthat’s representation of him. So I’ll analyze Adelstein’s omnibus article on this: The Fine-Tuning Argument Simply Works.

I’ve dealt with Adelstein as Bentham before (you can find a lengthy thread here). He is very smart and writes well. And he is more honest than most apologists. For example, he often admits when arguments are bad, and doesn’t resort to lying to get out of a pickle (preferring tinfoil-hat solutions instead). He’s also not a conservative nor even religious (he believes Christianity is the religion most likely to be true but still not likely to be true). But he’s not very good at this. He makes a lot of mistakes in math, logic, and science. Yet he never corrects himself when caught. He is prone to generating ever-increasing rambling-and-vacuous word-walls instead. And when cornered he gets childish, acting like a chest-pumping corner boy (this may be due to his possibly being a teenager or close to it, and having no professional training or credentials).

In his omnibus article he explains the argument, then attempts to recover it from several common objections (which I’ll explain as we go).

As we go through these, I will refer back a lot to the points I just made and linked in the first two sections:

  • (1) Begging the Question (erroneously assuming there being a god entails less luck than alternatives);
  • (2) Unevidenced Premises (making claims about the facts that we actually don’t know are true);
  • (3) Impossible Math (making claims about a probability that no math exists to establish);
  • (4) Evidential Failure (the observed evidence matches what natural cosmologies predict and contradicts what theistic cosmologies predict);
  • (5) Circular Argument (presuming order entails design in order to argue some observed order is evidence for design, i.e. using the thing to be explained as evidence of that explanation, instead of having any independent evidence it is that rather than something else);
  • And (6) Violating Priors (ignoring the fact that background evidence establishes that order almost always has a natural explanation, in either direct or statistical inevitability, so absent evidence for anything else, that is probably the explanation of any given instance).

In Bayesian terms, the Fine Tuning Argument (or FTA) fails on priors (Point 6) and likelihoods (Point 4). And it cannot be rescued from these failures: Points 1 and 3 block any effort to reverse the prior probability distribution; and Points 2 and 5 block any effort to reverse the likelihood distribution. And that’s that. “On All Evidence” the observation of Fine Tuning argues against God, because (1) all background knowledge establishes the explanation of anything is usually not going to turn out to be a god and (2) all remaining evidence is what is expected if fine tuning was not caused by something intelligent, and not what is expected if it was. I already demonstrated this in the previous sections of this article, as supported by the articles linked therein. So in what follows all we have to do is point out where Adelstein runs afoul of all that. Because all of that already refutes everything he says.
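
To make that Bayesian structure concrete, here is a minimal sketch in Python. The numbers in it are purely illustrative placeholders I am inventing for demonstration, not measurements of anything; the point is only how the two failure points combine:

```python
# Minimal sketch of the Bayesian logic at issue. All numbers are
# illustrative placeholders, not measurements of anything.

def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Odds form of Bayes' theorem:
    P(N|E) / P(G|E) = [P(N)/P(G)] * [P(E|N)/P(E|G)]."""
    return prior_odds * likelihood_ratio

# Point 6 (priors): background knowledge favors natural explanations.
prior_odds_N_over_G = 10.0         # e.g. "N is ten times likelier a priori"

# Point 4 (likelihoods): the observed evidence (vast, old, lethal
# universe, etc.) is expected on N and not on G.
likelihood_ratio_N_over_G = 100.0  # e.g. "evidence is 100x likelier on N"

print(posterior_odds(prior_odds_N_over_G, likelihood_ratio_N_over_G))
# -> 1000.0. Both factors push the same direction, so rescuing the FTA
# requires reversing BOTH, which is what Points 1-3 and 5 block.
```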

There Is No Fine Tuning?

Here Adelstein generally doesn’t know what he is talking about. And this failure kind of unravels the entire rest of his article.

For example, when describing the FTA, he still thinks “the cosmological constant” was “tuned” to a specificity of 1 in 10^120. This is so wrong it’s hard to decide where to begin. He has mistaken the cosmological constant for a completely different thing: a measured incongruity between quantum theory and observation that demonstrates there is something wrong with the theory (see my previous discussion here). It does not have anything to do with which values are life-producing; rather, it is solely a measure of theory error. The actual cosmological constant (Λ) is near zero and can vary by quite a lot and still produce viable universes (it is in fact not at an optimal setting for life; it’s just “within the range” permitting life). Moreover, outside values for Λ could lead to resets that tame the value, through big crunches or Penrose states (or inevitable Carroll-Chen states or He-Gao-Cai states or chaotic inflation or the like) producing new randomized Big Bangs (either over vast spans of time, or because the dice get rolled again every time a black hole forms). So even inhospitable values will plausibly produce universes with hospitable values eventually anyway. Indeed, in black hole cosmology, an inhospitable value is impossible (because all universes then have tolerable expansion rates). And in coupling theory and void theory and other models, there is no cosmological constant to tune.

So it’s hard to define what a “life preventing” value for Λ even is. But one thing we do know for sure is that it does not have to be tuned to 1 part in 10^120. Anyone who says that does not know what they are talking about. In actual fact Λ can vary by over 10 orders of magnitude—and ours is even ten times smaller than the optimal value. Which gets to the point of this section of Adelstein’s article: he wants to argue that “there is no fine tuning” is not a viable objection to the FTA, yet he does not understand anything I just discussed here—and you cannot understand that objection without understanding such details as those. Is there even a cosmological constant, or is it really zero and the expansion caused by something else? And if it does exist (which we do not really know), can the cosmological constant even be tuned? What values for it are even possible? And what probability do those different values have? Are larger values increasingly unlikely because they entail an ever-increasing budget of energy? And what eventually happens to that constant in collapsed or runaway universes anyway? These are not questions science has answered. So you don’t get to use it as a premise.

Similar problems arise with every “constant” (see my discussion in Three Common Confusions of Creationists). Even the rest masses of the various subatomic particles: we actually don’t know why they have specifically those masses. But odds are, something is causing that to be the case—likely some property of the geometry of spacetime. And that might turn out to be untunable. Or those masses may be caused by other constants (and thus cannot vary independently of them). Or they may only be tunable within certain probability distributions that make life-bearing worlds inevitable, either altogether or eventually (as an initial unstable universe can keep cycling randomly until a stable universe arises). We really don’t know. And things we do not know cannot be premises in arguments—unless you admit the conclusion of that argument is as unknown as the premises. Which of course eliminates the FTA.

This is why the FTA collapses under analysis. Absent determining knowledge that would establish its premises, we are left with the epistemic Bayesian analysis of priors and likelihoods—and all the evidence establishes that those do not favor God as the conclusion.

There are other signs that Adelstein doesn’t know what he’s talking about here, like his thinking “the Pauli exclusion principle” also “is indicative of fine-tuning.” But the Pauli principle is binary: it is either on or off. On a random selection the odds would be 50/50. That is not fine tuning. That’s coarse tuning. And again, there is no reason to believe this has been distributed at random. It is likely a causal product of more fundamental things, which may well be inevitable (either immediately or eventually) or have probability distributions favoring habitable outcomes. We simply don’t know. This touches on the “Deeper Laws” objection he gets to later, but here the point is more fundamental: to know whether fine tuning even exists, we first need to know what actually can be tuned, how it is tuned, and what the probabilities are of different values; and we don’t know these things. As I’ve pointed out before, several constants (like the gravitational constant, Planck’s constant, or the speed of light) appear to be incapable of being other than they are. What if all constants are like that? Can we claim to know they aren’t? Well, no.

Likewise, Adelstein naively trusts the claim that “Penrose” somehow proved the universe was tuned to “1 part in 10^10^123 of the available values.” But evidently he did not check what that calculation actually was: Penrose simply ran a “back of the napkin” estimate of the probability that, if we ran the universe backwards from its present state, it would end up perfectly resolved at his imagined “single state” Big Bang. This figure has exactly nothing to do with how many of his calculated 10^10^123 states would, if selected as the starting point, produce life (Penrose isn’t even asking, much less answering, that question). In fact the entropy at the Big Bang was not anywhere near as low as Penrose’s calculation assumes, nor do many of his other assumptions hold anymore (this napkin math is thirty years obsolete).

But such a result is completely useless for fine tuning arguments anyway, since none of this has anything to do with the probability of a selected value producing life. The starting entropy of the Big Bang could be literally anything, and it would still produce life. Because any process leading to life follows from the entropy increase upon subsequent expansion. And entropy can always increase. So you could select any of the “states” in Penrose’s 10^10^123. Each one would describe some point in our universe’s past up to now. And we can empirically see every single one is life-conducive. Penrose knew that. That’s why he never claimed this had anything to do with an improbability of life. Christian apologists (perhaps of dubious honor) are the ones who fabricated that myth by taking what he said out of context. Like creationists tend to do. And the FTA is just another Creationist bromide. It’s not a scientific theory.

Generally entropy arguments don’t work at all for the FTA, because the reason the initial entropy of the Big Bang was even as small as it was (and it was not very small) is that all the available energy was confined to a small space—not because it was all highly ordered in that state. It was not. That’s why the resulting universe has evolved so randomly and is (at scale) isotropic in all directions: the distribution of energy in the Big Bang was random, not ordered. The “increase” in entropy since has played out in expansion and cooling, the one causing the other, and all an inevitable outcome of the initial pressure created by the concentration of energy in so small a space to begin with—which concentration has countless plausible causes in real cosmological theories that require no special “fine tuning” (again, Carroll-Chen and He-Gao-Cai are merely two examples; the eternal inflation models of Guth and Vilenkin are two more; likewise Penrose Cosmology, Black Hole Cosmology, and so on). So entropy required no tuning either.

Generally Adelstein logically conflates the fact that there have been many proposed or apparent fine-tuning examples with our “knowing” those are examples of fine-tuning. This is an equivocation fallacy. No mainstream cosmological physicist thinks we know for sure that “anything” was finely tuned. That’s why no such thing as the FTA exists in the scientific literature. They can list possible candidates for FT for you, but if you query them, and they’re honest, they’ll tell you: we don’t really know why those things have the values they do, what possible values they can have, what probability distributions exist for those values, or whether those values evolved from prior universes with different values. And this is what people mean by the “No Fine Tuning” objection.

Adelstein has naively confused the analytically correct point that “We Do Not Actually Know Fine Tuning Exists” with the claim that “No Fine Tuning Exists,” and then argues we don’t know that, and thus fails to get that the original point was exactly the same as his: we don’t know that. This is a common amateur mistake: to confuse an argument against a premise being known to be true with an argument that the premise is false. But knowing the premise is false and not knowing it is true are two different things. Yet both destroy an argument. Because the epistemic condition of the premises commutes to the conclusion. So if you do not know a premise is true, then you do not know the conclusion is true. This is the actual argument Adelstein is supposed to be responding to here. But he didn’t get it. So he failed to answer it. And really, he can’t. Because we don’t know the things he needs us to know.

What If You Change a Bunch of Constants?

Here Adelstein has made the mistake of trusting Luke Barnes, who is a crank (he is a published and credentialed physicist, but a tinfoil-hat philosopher: see my exposé in Barnes Still Not Listening). Still, here I’ll just assess what Adelstein says, not worrying whether Barnes would say he got him wrong.

Adelstein doesn’t say much that can be described as intelligible here. And I think that’s because he doesn’t understand the argument he is supposed to be responding to. He thinks he can dispatch this objection by declaring “we simply have been able to vary multiple different constants and see that no life would arise,” but the problem is: that’s not true. Adelstein cites a position paper by Barnes that has an appendix on the late Victor Stenger’s old Monkey God program, which only identifies problems rendering Stenger’s model incomplete. Barnes does not actually build or run a correct program. This is the same error as above. Conflating “Barnes proved Stenger’s model was flawed” with “Barnes proved the results of Stenger’s model false” is the same thing as conflating “we don’t know Stenger’s results are true” with “we know Stenger’s results are false.” And that’s a fallacy.

But Adelstein never even understands what Stenger’s model was doing. So he doesn’t seem to understand the objection it is representing. Adelstein instead makes illogical assertions like “no matter how many very different ways there are for life to arise, those don’t affect the probability that life like ours would arise.” That is literally false (it straightforwardly contradicts the Law of Cumulative Probability). It’s like saying “no matter how many different hands could win me this round of poker, those don’t affect the probability that I will win this hand.” Obviously the probability of winning is directly proportional to (and thus modulated by) how many ways there are to win.
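To illustrate with a toy version of the poker point (the probabilities below are invented round numbers, not real hand odds): for mutually exclusive outcomes, the probability of winning is the sum over all the ways to win, so adding or removing ways necessarily changes it:

```python
# Toy illustration: more (mutually exclusive) ways to win means a
# higher probability of winning. Numbers are invented for simplicity.

ways_to_win = {
    "flush": 0.002,
    "straight": 0.004,
    "full house": 0.001,
}

# Total probability for mutually exclusive outcomes is the sum of parts:
print(sum(ways_to_win.values()))  # ~0.007

# Remove a way to win and the probability of winning necessarily drops:
del ways_to_win["flush"]
print(sum(ways_to_win.values()))  # ~0.005
```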

So I have no idea what Adelstein even thinks he is saying here. More inexplicably, Adelstein claims to derive that illogical statement from an impertinent analogy about the prior probability of two different thieves’ guilt (one who needed to guess a gate code and one who could get in without it). But that has no relevance to his statement. His analogy did not measure “how many ways to get in.” If it had, the thief with multiple ways to get in would be the analog of the higher habitability that results from multiple ways to get life, not the other way around (and thus his own analogy would vindicate rather than challenge the Many Constants objection). The “how many ways to get in” analogy also seems to confuse the Many Constants and Kinds of Life objections (see the next section).

So I’ll have to explain the actual objection he is supposed to be responding to here. The FTA is almost always argued by holding all constants fixed, changing only one, and observing how many settings are habitable. But that is a fallacy. The “multiple constants” objection simply points out that that is not how constant selection would ever have operated. Whether constant selection is even possible, or how it would work, is the first objection we already dealt with above; but for present purposes let’s just assume we’ve granted that several constants can vary, and that the distribution of their values is completely independent and limitless and perfectly random (though remember, we do not know any one of those things is true). But if there are a “bunch” of such constants, then the number of “habitable outcomes” is not “one” (the single range of values of a single constant when all the other constants are fixed) but the sum of all habitable conjunctions.

So, for instance, if the strength of gravity (G) can vary (even though, again, evidence suggests it cannot) and the strength of the electromagnetic force (𝛼) can vary (even though, again, maybe it can’t), but we hold G at its current value and ask how many values of 𝛼 will produce a habitable world, there might only be one small habitability zone (something close to ours, the rest making either atoms or planets impossible; though maybe that zone is not even small). But if we allow both to vary, then in fact there are infinitely many combinations of G and 𝛼 that will be habitable. If 𝛼 gets doubled, then you get a habitable world when G is doubled, and so on. For every 𝛼 there is a G to pair with it that creates a habitable universe. And since they do not have to match exactly (G and 𝛼 can each vary a little bit and still work), this is not an infinite array of exact value pairs, but an ever-increasing volume of compatible values. And this problem multiplies. Because it isn’t the case that there “logically necessarily” is only G and 𝛼. Not only are there many other constants (like the mass of the electron, or the strengths of the weak and strong forces, both of which can actually vary by a lot and still produce habitable spaces), there are infinitely many other possible constants that, in our world, are set to zero, but that, if they had a nonzero value, might offset or replace the effects of changing either G or 𝛼 (or anything else). And the infinite array of infinitely possible values for all possible constants will have an incalculable frequency of habitable combinations.
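
Here is a toy Monte Carlo sketch of that point. The “physics” in it is wholly invented for illustration (I simply stipulate that habitability requires the ratio 𝛼/G to stay within 10% of ours); it only shows how letting constants covary opens up the habitable region that a one-constant-at-a-time test hides:

```python
# Toy Monte Carlo: vary one constant alone vs. two together. The
# habitability criterion below is stipulated for illustration only
# (it is NOT real physics): life requires alpha/G within 10% of ours.
import random

random.seed(1)
G0, A0 = 1.0, 1.0  # "our" values, in arbitrary units

def habitable(G, a):
    return 0.9 < (a / G) / (A0 / G0) < 1.1  # stipulated criterion

N = 100_000
lo, hi = 0.01, 10.0  # sweep each constant over three orders of magnitude

# Case 1: hold G fixed at ours, vary alpha alone.
hits1 = sum(habitable(G0, random.uniform(lo, hi)) for _ in range(N))

# Case 2: let BOTH vary over the same range.
hits2 = sum(habitable(random.uniform(lo, hi), random.uniform(lo, hi))
            for _ in range(N))

print(hits1 / N)  # ~0.02: looks "finely tuned" when the test is rigged
print(hits2 / N)  # ~0.10: compensating pairs multiply the habitable volume
```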

This gets to a later objection (the math here simply does not exist to carry any premise in the FTA). But set that aside for now. The point of the Multiple Variables objection is that the FTA requires knowing how many of all possible combinations of all possible constants produce or lead to habitable worlds. Stenger showed that within a certain confined space of options, quite a lot of the possibility-space becomes habitable, and so the FTA loses the “fine” aspect of its “tuning” premise (this is a pervasive problem: remember how Λ can vary by 10 orders of magnitude and still work out; every constant may have similar leeway—especially when the other constants are also allowed to vary, and also have such leeway). Barnes showed that Stenger’s possibility space was arbitrary, so his argument does not carry. But the question it raised has not in fact been resolved. Particularly when you consider reset outcomes. For example, high G (or low 𝛼) results in collapse to a singularity (or whatever equivalent) in which the values of G and 𝛼 get randomly reset (since they have returned to the same state that randomized them in the first place), producing a new universe (in the resulting mix-and-bounce)—and this keeps going until you get a compatible pairing that doesn’t collapse (one way this can happen I discuss in How the New Wong-Hazen Proposal Refutes Theism). And all of this only gets worse when we remember that we also have to account for completely alien physics, and not just variations of ours (a point I’ll get to later).

So the question is: what if you allow all possible constants to vary randomly at the same time? The answer is the same as before: we literally do not know. We do not know how many constants are possible. We do not know what sets them. We do not know how all their possible settings are distributed (e.g. can they vary freely as the FTA assumes, or only around a bell curve favoring certain values over others?) or even what values are possible (can any constant “infinitely” vary or are there diminishing limits to how large a constant can get?). We don’t even know which constants are independent of each other (if we changed G, would that also change 𝛼? If we changed 𝛼, would that change the masses of the fundamental particles? And so on) or, if they are not independent, what their dependency relations are (if changing G changes 𝛼, what does 𝛼 become for each value of G?). In other words, we literally do not know what we need to know in order to claim we know the central premise of the FTA is true: that the frequency of habitable worlds produced by all combinations of all possible constants is low (much less extremely low).

So the FTA stands on an unknown. Which means its conclusion is unknown. And that’s the end of the FTA. Adelstein has no response to this.

Couldn’t Different Kinds of Life Arise?

This objection amounts to the other side of the equation: if there are more ways to “win” then the frequency of winning might not be low—again eliminating the “fine” aspect of the “tuning” premise of the FTA. To my mind this objection only kills naive versions of the FTA, which conflate “exactly our world” with “just any world that could accumulate outcomes all the way to self-modeling nervous systems” no matter what kind of matter they were made of. So, for instance, we have to count worlds that don’t even have protons and electrons or even gravity or electromagnetism, but entirely alien substances and forces. Or that have gravity and EM, but such alien matter that how they interact is different than in our world. And so on.

I personally prefer to steelman, and a steelmanned FTA does not get tripped up on this. A good FTA only asks what combinations of constants are needed to get any kind of sustained structure—no matter how weird or different from ours. This usually means looking for combinations that produce stars capable of generating heavier elements (or whatever equivalent—since we also have to count up worlds with alien particles, constants, and forces) and thus some version of planets and polymers, which then stick around long enough to give life a chance to arise. If that is what you are calculating, and you are doing it informedly, you might run afoul of one or more of the other objections here, but you won’t run afoul of this one.

Adelstein sort of almost gets this when he points out that “if the cosmological constant weren’t extremely finely-tuned, for instance, stuff just wouldn’t hang together.” He’s wrong about the cosmological constant (as just noted, it requires no “extreme” tuning at all, large values might be impossible or less likely, and any outlier values may lead to reset states anyway and thus do not forestall eventual habitable values) but he’s right about the vague point that, at the very least, “you need stuff to hang together.” So there are ways of getting the FTA past this hurdle. Adelstein just doesn’t seem to know what they are.

Instead Adelstein seems to dismiss this problem with illogical or impertinent declarations. Like his undefended intuition that “it would still be miraculous that we have laws that produce anything interesting,” which is a non sequitur. In fact randomization by definition always generates interesting things. Indeed many interesting things are inevitable and thus will exist in all possible universes. Think of all the really interesting laws and structure that logically necessarily exist in geometry and number theory, from mathematical coincidences to the strong law of small numbers, the twin prime effect, Euler’s constant, the Pythagorean theorem, and so much else, including a vast universe of “finely tuned” mathematical constants that can’t have any other value than they do and thus were never tuned at all (which all matters because our physics might actually reduce to these kinds of facts). But even for the rest it’s just a question of quantity and time. Think of a poker deck shuffled 10^80 times. Every interesting combination (in fact every hand in every possible poker-deck-based card game) will appear. Without remainder. If a system produces all possible outcomes (and all random systems eventually do, given time), then it will produce all interesting outcomes. That is not miraculous. And yet even given a finite run, a long enough time will get you a lot of interesting outcomes. Adelstein even contradicts himself by admitting this when he gets to rambling about “Boltzmann brains” later on, a perfect example of my point, but I’ll get to that in the final section.
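
We can even put a rough number on the deck example with a back-of-envelope coupon-collector estimate (a simplification, since I am treating each shuffle as dealing just one uniformly random five-card hand off the top):

```python
# Back-of-envelope: how many shuffles until every possible five-card
# hand has appeared at least once? (Coupon-collector estimate, treating
# each shuffle as dealing one uniformly random hand off the top.)
import math

n_hands = math.comb(52, 5)              # 2,598,960 distinct hands
expected = n_hands * math.log(n_hands)  # ~ n * ln(n)
print(f"{expected:.2e}")                # ~3.8e+07 shuffles suffice on average

# Compare with the 10^80 shuffles in the thought experiment:
print(f"{10**80 / expected:.1e}")       # ~2.6e+72 times more than needed
```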

People tend to forget how powerful disorder actually is (see my discussion of the Argument from Uniformities for some examples). When Adelstein complains that “if the universe hadn’t been in a low entropy state, everything would be random disordered chaos,” he seems not to understand what entropy is (as I noted earlier, it is not a measure of “order” but of available states, and it didn’t need to start low); but more to the present point, he doesn’t realize that if you let a random disordered chaos sit around long enough, the probability of a completely random quantum tunneling event producing a Big Bang approaches 100% even on presently known physics. So if time goes on forever, there will be endlessly many Big Bangs. This is why Adelstein’s mockery of multiverse theory is naive. He’s not thinking through the math here.
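
The relevant math here is elementary. If such tunneling events occur at any fixed nonzero rate λ (modeling them, purely for illustration, as a Poisson process), then:

```latex
P(\text{at least one Big Bang by time } T) \;=\; 1 - e^{-\lambda T} \;\longrightarrow\; 1
\quad \text{as } T \to \infty ,
```

and that holds no matter how absurdly small λ is, so long as it is nonzero; given unlimited time the event is guaranteed, and given merely vast time it is still highly probable.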

Likewise when he complains that “everything would have quickly collapsed into black holes,” evidently not remembering that that is a reset: a new Big Bang may result, with the constants rejumbled—or may do so eventually, because black holes are time-inverted Big Bangs, and so enjoy all the conditions ripe for starting new inflation events; it’s just a matter of time before random mixing and quantum tunneling hits the jackpot required to do that. And if all these things go on forever, all possibilities will get explored eventually. Black holes take extremely long times to decay, but then their decay products will eventually form new black holes, and on and on. So the mixing, and thus the chances, go on forever. Like an eternally reshuffled poker deck. There may be limits (configurations that become increasingly unlikely or impossible over time). But we don’t actually know what they are. So we can’t claim to know what they are.

Similarly, when Adelstein complains “if life could arise under tons of different conditions, it’s unlikely that it would arise under these conditions,” he seems confused about the math again, as that sentence makes no sense in context—it sounds like a Configuration of the Stars fallacy. The very objection he is supposed to be responding to is a refutation of exactly this sentence. So why he thinks it responds to that objection escapes me. I can only assume he does not understand the objection. The whole point is that it is a fallacy to assume we are asking how improbable it is that precisely we exist, when in fact we are asking how improbable it is that any people exist.

Because the relevant probability is that “some” people exist, not that these particular people do. If the probability of some people existing is effectively 100%, then it is effectively certain some people will exist. Therefore the “specific” probability that it would be us rather than, say, ectoplasmic interdimensional squid people living inside a star, or the civilization of an intelligent subterranean moon fungus, or microscopic force-wielding midichlorians, is not relevant to the FTA. Adelstein’s mistake here is like arguing no one can have won a lottery because the probability of winning is so low—when in fact that is to confuse the probability of a specific person winning with the probability of someone winning, which (in lotteries today) is near 100% (the same mistake I clocked him making in the last section).
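
The distinction is easy to formalize (with made-up illustrative odds and ticket counts):

```python
# The lottery confusion: the probability that a SPECIFIC person wins
# is tiny, while the probability that SOMEONE wins is high. Numbers
# below are invented for illustration.

p_ticket = 1 / 300_000_000       # odds any one ticket wins
tickets = 500_000_000            # tickets in play

p_you_win = p_ticket                               # ~3.3e-09
p_someone_wins = 1 - (1 - p_ticket) ** tickets     # ~0.81

print(p_you_win, p_someone_wins)
# The FTA computes the first kind of probability when only the second
# kind matters: that SOME observers arise, not that WE in particular do.
```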

The bottom line here is that the “many ways to win” objection can be avoided by a disciplined FTA; a disciplined FTA just can’t avoid any of the other objections. But Adelstein does not seem to understand how to make that point, or even that it is the point he needs to make. It’s sort of almost there, but it’s never defended, and is then buried in a bunch of illogical non sequiturs instead. Which only demonstrates that he is not competent to assess the success of the FTA in the first place, and thus should not be allowing himself to be convinced by it.

Can’t Do the Math?

This is a bigger problem. But again Adelstein doesn’t understand what it is.

Adelstein thinks this is about claiming “there isn’t some machine that determines the laws of physics by rolling dice.” But that’s not the mathematical objection. That’s just the first objection: that we do not know anything even is tuned or how it can be tuned or what probability distribution the variables have. Because it isn’t obvious there’s some machine that determines the laws of physics by rolling dice. The constants are almost certainly caused by something, and thus fixed by those causes. I show how we already know this to be the case for almost all constants and even many fundamental constants in Three Common Confusions of Creationists. But even if there is “some randomizing machine,” it’s most likely Big Bang events themselves (which means resets are always mulligans), and those have a tendency to always result from any chaos (by collapse-and-bounce or sit-around-rolling-dice-for-eons) and thus we get endless randomized universes. But that’s the multiverse objection, so I’ll set that aside until later.

Here Adelstein is supposed to be addressing the mathematical objection. But he never does. He goes on confusedly about things like “objective chances and subjective chances” and “hypothetical dead alien builders” that have no relevance to the objection. The mathematical objection is not the “we don’t know the premise is true” objection (we already covered that). The mathematical objection is that we literally have no mathematical tool to even get a subjective (epistemic) probability. This was demonstrated by McGrew, McGrew & Vestrup (and physicists have made a different but comparable point with respect to the measure problem in cosmology, which you can think of as the three-body problem on steroids). When we are faced with infinitely many possible constants, each with infinitely many possible values, and with infinitely many possible probability distributions, there is simply no way to calculate any frequency within that permutation space. It just can’t be done.
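
The normalizability half of the McGrew-McGrew-Vestrup result can be stated in one line. If a constant can take any value in an unbounded range, then “all values equally likely” assigns a flat density c over [0, ∞), and:

```latex
\int_0^\infty c \, dx \;=\;
\begin{cases}
\infty & \text{if } c > 0 \\
0 & \text{if } c = 0
\end{cases}
\;\neq\; 1 ,
```

so no uniform probability distribution over an unbounded parameter range even exists; and since there are infinitely many non-uniform distributions one could stipulate instead, with no principled way to choose among them, the FTA has no probability measure on which to run its “improbability” claim.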

This relates to the folly of holding all constants the same and only changing one to see what happens: that is useless, because there are infinitely many constants (most currently dialed to zero) that can all change together, so the single-constant test does not determine anything. You need to find out how many of all those infinitely many combinations will work. And that is currently impossible mathematically. To illustrate the problem, tell me the epistemic probability of a universe forming that is made of “borlions,” which are like electrons but 32.196% heavier, swap charges, and behave just like atoms when they collide at certain energies, where the chemical properties of the resulting atoms are based on the number of borlions in orbit rather than in the nucleus, resulting in a whole alien periodic table. And then tell me the probability of every other logically possible universe. And then add up all the ones that produce enough stable structure to allow life (including ones that explore multiple universe assemblies sequentially or in parallel)—and then tell me how many there are. Good luck with that.

So this isn’t about not knowing the objective probabilities. The math won’t work even for subjective probabilities. We simply cannot simulate “infinite” combinations of constants with “infinite” different possible probability distributions producing an “infinite” number of different physics.

Which returns us to the central problem: none of the premises of the FTA can be known to be true. Therefore neither can its conclusion. Here the problem is mathematical. In the “no tuning” objection the problem is empirical. In the “bunch of constants” and “many ways to win” objections the problem is both. Adelstein has done nothing to remove any of these problems. Humanity still does not know any of the facts he needs us to know. And he still does not have any of the mathematical tools he needs to use. So no premise in the FTA can be established as known.

What about the Anthropic Principle?

This is the most important argument against the FTA, so it is sad to see Adelstein not understand it at all. The naive anthropic objection is that we would always observe ourselves in a finely tuned universe, so our observing that can’t be assumed improbable on any particular theory, because the probability of observing that is the same on all possible theories. Fine-tuning therefore has zero Bayesian value as evidence. The likelihood ratio is always 1 and thus fails to prefer any theory over another. That’s the usual anthropic objection. It’s slightly incorrect—but in exactly the opposite sense Adelstein needs:

It is only true that “if we are here from a Natural cause, then the probability we would observe fine tuning is 100%, because there are no other conditions on which we would exist to make the observation.” Literally P(¬FT|N + observation) = 0, therefore P(FT|N + observation) = 1, by the law of converse probability. But it is not true that “if we are here from God, then the probability we would observe fine tuning is 100%, because there are no other conditions on which we would exist to make the observation.” Because God, unlike Naturalism, does not need (and indeed would have no reason to prefer) fine tuning to make a habitable world. As I already explained above. I have covered this extensively before (from both the Naturalist and Godist perspective), and it is a multiply peer-reviewed conclusion now. But in short: P(¬FT|G + observation) > 0, therefore P(FT|G + observation) < 1, by the law of converse probability. And if P(FT|N + observation) = 1 and P(FT|G + observation) < 1, then it is logically necessarily the case that P(FT|N + observation) > P(FT|G + observation). Observing fine tuning is therefore always evidence against God.

There is no way around this. The likelihoods cannot come out any other way. That is literally a logical impossibility. So what one might then try is to return the argument to the priors, and say that P(N) < 1 and P(G) > 0 and therefore P(G) > P(N). But you might notice that is mathematically invalid. It does not follow that if P(N) < 1 and P(G) > 0 that P(G) > P(N). The problem here is that there is no evidence “God” is any more probable than “Fine Tuning.” It’s luck all around. And the amount of luck is indeterminable in both cases. So there is no way to carry that argument. It’s actually worse than that, since FT only gets to low probabilities at all with premises not known to be true, whereas we can get G to low probabilities with premises known to be probably or even necessarily true. So we have an empirical and analytical Bayesian case that P(N) > P(G), not P(G) > P(N). And if P(N) > P(G) and P(FT|N) > P(FT|G), then logically necessarily, P(N|FT) > P(G|FT). FT is evidence against God. And the FTA is solidly refuted. This is the informed anthropic objection.
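
Putting the whole argument of this section in one place (writing O for “observers exist to look,” FT for “fine tuning is observed,” N for Naturalism, and G for God):

```latex
P(FT \mid N \wedge O) = 1 \;>\; P(FT \mid G \wedge O),
\qquad P(N) > P(G)
\;\;\Longrightarrow\;\;
\frac{P(N \mid FT \wedge O)}{P(G \mid FT \wedge O)}
= \frac{P(N)}{P(G)} \cdot
\frac{P(FT \mid N \wedge O)}{P(FT \mid G \wedge O)} \;>\; 1 .
```

Both factors on the right-hand side exceed 1, so the posterior odds must favor N, whatever their exact values.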

There are also many more observations that match the predictions of N but not G, even further stretching the likelihood ratio away from G and toward N. As I already described in section two. But it’s already skewed toward N on fine tuning alone. Because N requires it. G does not. And that’s that.

Adelstein neither understands this is the argument he is supposed to be answering here, nor answers it. He makes false declarations instead:

  • “I think it’s falsified by the examples I gave in the last section” is false. None of the examples he gave answer any part of the anthropic argument as I just explained it.
  • “It’s just a nonsequitur—the fact we wouldn’t have been around to observe evidence if things had turned out differently doesn’t mean that things turning out one way can’t be evidence” is itself the non sequitur. It is indeed evidence—against God, as just mathematically proved. But even in the naive argument, where FT is equally likely on N and G given observers, still “things turning out one way can’t be evidence,” because evidence requires different likelihoods, and on naive anthropism, the likelihood of FT|observation is identical on G and N (while on informed anthropism, it’s smaller on G).
  • And as for his claim that “it’s obviously absurd in lots of cases. It implies that your existence isn’t evidence that your parents had sex or didn’t use effective contraception. It implies that if a man is fired at by 500 people and survives, that’s not evidence of a conspiracy—for if he hadn’t survived, he wouldn’t be around to wonder about it,” these are all false analogies. Because they all depend on knowing things about the world that we do not know in regard to FT. That’s precisely the problem.

That last argument I dispatch in The End of Christianity (pp. 297, 412n33) by swapping in a mechanized machinegun, to eliminate Adelstein’s question-begging use of sentient aimers of guns. But there the problem really comes down to priors: how improbable is a God vs. a natural-cause cosmology?

And the fact is: we have no evidence that God is in any way more likely than even single-universe low-probability fine-tuning (see The Hidden Fallacy in the Fine Tuning Argument), much less all the higher-probability natural-cause alternatives (like sequential and parallel multiverse theories that inevitably follow from simple initial quantum physics). Otherwise, when two causal theories start equally likely (like fine tuning a universe vs. fine tuning a God), “improbable survival” cases do not evince one over the other, since even at best both will produce the effect 100% of the time, and at worst natural causes will do so more often.

For example, on G (God), there is a probability of waking up in heaven (or hell) after the firing squad shoots, an observation with near-zero probability on N (Natural causes). Which means even in the firing squad case (though that’s better constructed with a mindless loose machinegun spraying about at random), P(¬alive|G + FS + observation) > 0 (since there is a nonzero probability of observing yourself in the afterlife), therefore P(alive|G + FS + observation) < 1, by law of converse probability, but P(alive|N + FS + observation) remains essentially 1 by the same reasoning. So P(alive|N + FS + observation) > P(alive|G + FS + observation). So the fact that they all missed more likely had a natural cause if you aren’t observing yourself in the afterlife—whereas if you are, then P(G|FS + observation) > P(N|FS + observation) and you’ve finally gotten some evidence God exists (though not decisively, because there are non-god afterlife hypotheses to rule out first).
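
A sketch of that reasoning with invented numbers (the afterlife probability on G is an arbitrary placeholder; the argument only requires that it be nonzero):

```python
# Firing-squad case with an "afterlife observation" channel. The
# probabilities are placeholders; the argument needs only that the
# afterlife probability on G is nonzero while on N it is ~zero.

p_afterlife_given_G = 0.5     # arbitrary nonzero value on theism
p_afterlife_given_N = 0.0     # effectively zero on natural causes

# Conditional on your making ANY observation after the shooting:
p_alive_given_G = 1 - p_afterlife_given_G   # < 1
p_alive_given_N = 1 - p_afterlife_given_N   # = 1

print(p_alive_given_N > p_alive_given_G)    # True: survival favors N;
# waking up in an afterlife instead would be the evidence for G.
```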

So, in the end, there is no way around this. The FTA fails here. And Adelstein does nothing to recover it. He is instead leaning on all the fallacies I enumerated at the start. We don’t know having a God is more likely than any form of natural fine tuning, and we don’t know natural fine tuning exists or is even improbable. The evidence we have suggests, rather, it’s the other way around: background knowledge ups the prior for natural, not divine, causes; and all observations match natural, not divine, causes.

What About Deeper Laws?

This is a subset of the No Tuning objection. And again Adelstein fails to understand it. He objects with a “maybe deeper laws rig poker games” analogy, but that is another false analogy, because it presumes precisely the knowledge we don’t have in the FTA. We have lots of data establishing that physics rigging card decks would be weird, and thus we have cause to doubt those kinds of effects. But physics fixing the fundamental constants is not weird. It is actually precisely what we have seen with every other physical constant to date. So the background knowledge is reversed here. And we don’t have any evidence that would call that into question, unlike with insisting there are magical poker decks.

But the real problem here is the Hidden Fallacy problem: what is actually going on here is that one side is proposing a wildly ridiculous theory (God) while the other is proposing a much simpler and well-precedented one (that some underlying natural cause is responsible for the way things are). These are actually just straightforwardly competing theories: “I say it is x, you say it is y.” So is there any evidence for or against either x or y? There’s lots of evidence against God and none for. Whereas there’s lots of evidence for physics explaining things (including constants like the boiling point of water or refraction indexes or inverse square laws, and all the laws of thermodynamics and hydrostatics and pretty much everything else). And that’s that. God is simply a bad theory compared to “deeper laws” theories. The latter are abundantly precedented and have low Kolmogorov complexities. So why would we prefer the former? It’s like preferring “gremlins” explanations for a mysterious plane crash over standard natural causes. Why would you do that?

This gets to the heart of what’s wrong with the FTA. They want you to presume the observation is improbable unless intelligence is involved; but we well know lots of seemingly improbable observations are not improbable at all but in fact inevitable once the actual underlying physics and facts are understood (examples, examples, examples, examples, examples). So inevitability is a viable theory. There is no good reason to prefer theism over it. “But you don’t know” is an argument from ignorance. “But that’s unlikely” is begging the question. We can say the same things of the much more convoluted and unprecedented “God” theory. You don’t know that’s true. You don’t even know it’s likely. You really don’t even know it’s plausible. But we do know physical-laws explanations are plausible, and by centuries of uniform precedent, always more probable.

We even have such theories—so we aren’t even banking on an unknown. Our theories might be incorrect, but they do exist. Unlike God theories, which don’t. God is just handwaving. There is no model by which you can go from “God” to predicting any observed parameters of the cosmos, much less an accelerating expanse billions of lightyears in size and billions of years in age that is almost entirely a radiation-filled vacuum—even less to pre-cellular biogenesis followed by an eons-long evolution of species, or indeed almost any other general fact, all of which instead follow from existing natural-cause theories on the table. Theism is hopelessly underperforming as a fundamental theory here.

So this all comes down to why we should prefer Your Creator to Our Creator. Yours seems completely unprecedented, implausibly convoluted, entirely unevidenced, and incapable of making any useful theoretical predictions. While ours is fully precedented, extremely simple, reasonably evidenced, and capable of making useful theoretical predictions—many of which have already come true. The question is why you think “God” is more probable than “Laws of Physics.” You can’t circularly appeal to Fine Tuning, because that is precisely the thing we are each proposing a Whatsit to explain. So we cannot appeal to the thing to be explained as evidence for one theory over the other. If my theory predicts the observation, and your theory predicts the observation, the observation is no longer evidence for either theory over the other. It then comes down to priors, and there’s just no way to get “God” to be more a priori likely than “a minimal quantum vacuum” or any other ultra-simple cosmology, scientific or philosophical.

It’s just all the worse that when we look at all the other evidence, it matches the predictions of my theory, not yours. So we win on priors and likelihoods.

Is This Just a Stalking Horse?

This objection is the very point I just made. And here Adelstein finally describes something correctly: “if the theist gets to add to their hypothesis that there’s a mind disposed to finely-tune the universe, then the naturalist can add some auxiliary hypothesis to explain the data,” too. We both get to play the same game. So there is no way to win at that game. We should thus stop playing. When we do that, we end up back at priors and likelihoods, and those both favor Natural causes over Gods, as already explained.

It’s worse than that, of course, because not only are we playing the same game (so even at best the FTA is a stalemate), but the theist has to cheat at it, bringing in more (and more improbable) suppositions than we are allowed to do in turn (indicating the FTA is a rotten barrel). And that’s the problem. They have to invent way more epicycles and ad hoc suppositions than we do to get the same result (the Problem with Making Excuses). This is why Adelstein’s examples here are all false analogies: he invents scenarios in which the atheist would have to cheat (just as I have elsewhere done); but the problem is that that is not the scenario we are in. We are in the scenario where the theist is the one who has to cheat (remember my point about inventing epicycles to evade the evidence?). We should believe what that entails. Whoever has to cheat is by definition the one who is wrong.

Adelstein then collapses into traditional bullshit apologetics that I won’t waste much time on.

  • No, God is not theoretically simple, that’s an apologetic fabrication.
  • Inventing a bunch of motives for God is precisely the kind of epicycle-building atheists don’t need to do. Remember the stalking horse: theists have to cheat in this game. Atheists, by contrast, don’t. They don’t even have to arbitrarily assume, like the believer does, that the Creator wants to make life at all, much less specifically physical life, much less all the other weird things they have to assume their God wants or doesn’t. Theism is rife with ad hoc assumptions—about the physics of God, the existence of God, the powers and capabilities of God, and the desires and plans of God. That’s far more assumption-making than any plausible natural-cause cosmological theories. Even in science. But especially in philosophy.
  • Any kind of supernaturalism is a contrafactual presumption, while naturalism is an evidence-based conclusion. There is no way to make that not be the case on present evidence.
  • And so on. You don’t need me to bore you with more Religion Makes Shit Up 101.

I’ve already linked above to all the articles establishing that all the evidence renders God hopelessly convoluted, bizarre, and unprecedented, but quantum cosmologies relatively simple, straightforward, and based on centuries of accumulated precedent. Hell, we can get our universe with theories so simple they contain almost zero information at their initial condition. There is no way to claim those are logically impossible. So we get to make them up same as you. And yet they most definitely have far fewer theoretical components than God, rest on nothing bizarre or convoluted, and (unlike any God cosmologies) have actually been formulated under scientific peer review.

So Adelstein has no valid or sound escape from the Stalking Horse objection. It quite kills the FTA. Because our Whatsamagig is far simpler and more precedent-based than theirs, yet makes every observation 100% expected, while God does not.

What About a Multiverse?

Last of all is the real competition: multiverse theory.

This is the most well-supported and second-most widely-concurred Whatsamagig in the actual science of cosmology. In a recent poll of physicists, 34% believe apparent fine-tuning is explained by either Darwinian or anthropic multiverse models. The most widely-concurred Whatsamagig is (surprise!) no fine tuning: it’s all just necessity or brute fact requiring no further explanation (that gets 42% of physicists and 54% of philosophers). Meanwhile, 22% of physicists think it’s something else or are undecided, and only 3% think God did it (analogous to the 3% of climate scientists who deny global warming). So among actual expert scientists, over ten times as many conclude it’s a multiverse as conclude it’s a God (while fourteen times as many believe there is no FT even to explain, and over thirty times as many think it could be one or the other—and this is in hard science, not an opinion-laden field like philosophy or history, so these numbers carry some real weight).

Multiverse solutions are the most intuitive competitor to God in explaining fine tuning. Hence any disciplined FTA has to be, really, an attempt to compare two hypotheses: multiverse cosmologies and “God did it.” Science vs. Mary Sue. Which exposes the flaw in the FTA: it circularly employs the thing to be explained (that the constants allow life) as evidence for the thing explaining it (God), when multiverse cosmologists can run exactly the same move for their models. So the observation cannot evince one explanation over the other. And only one of those two sets of people is producing actual scientific cosmology models—at all, much less ones that work. The FTA thus ignores the actual state of the science (making it a pseudoscientific argument) and presumes “fine tuning exists” and “only God can explain it,” when in fact we do not know fine tuning exists and it is not true that only God can explain it.

There are in fact Six Arguments That a Multiverse Is More Probable Than a God. God also loses against multiverse theories on every other metric—priors and likelihoods. Even Adelstein admits it’s “the best objection to fine-tuning” and that it’s not an “inverse gambler’s fallacy” as some have tried to claim of late. So what does he have to say against it?

  • “The process required for generating multiple universes is complex and requires fine-tuning of its own” is false.

There are plenty of cosmological multiverse models that do not require “fine” tuning. Even eternal inflation is extremely simple in its basic set of assumptions. It can be fully defined, entailing all relevant predictions, in just a few equations of state that are vastly simpler than even a worm’s brain, much less a God’s; they leave nothing really to “tune” in Adelstein’s sense, and, unlike a God, they are plausibly reducible to known physics. Scientific ex nihilo models do not require tuning to generate multiverses either (Carroll-Chen cosmology can get there, He-Gao-Cai cosmology can get there, Lincoln-Wasser cosmology can get there, Vilenkin cosmology can get there, and so on).
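
For a sense of how little machinery “just a few equations” means, here is the standard slow-roll core of a single-field inflationary cosmology (an illustration of the genre, not a derivation of any one of the cited models):

```latex
% Friedmann equation plus the scalar-field equation of motion:
% two lines suffice to drive inflation in the simplest models.
H^2 = \frac{1}{3 M_{\mathrm{Pl}}^2}\left[\tfrac{1}{2}\dot{\phi}^2 + V(\phi)\right],
\qquad
\ddot{\phi} + 3H\dot{\phi} + V'(\phi) = 0 .
```

Compare the information content of those two lines with what it would take to specify even the desires of a God.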

This is the actual appeal of many modern multiverse theories: starting from a single, simple, physical state, an endless multiverse emerges as a logically inevitable output. And these theories actually match and thus explain many bizarre observations today. That’s a good place for a theory to be in. God is not even competing here. Indeed the most profound advantage of multiverse models now (both in science and philosophy) is that they replace selection with randomness, and yet necessarily get an output of so many variably tuned universes as to make ours inevitable. They also predict more observations than God does, such as the vast age and size and lethality of our universe, and its utter dependence on specific kinds of fundamental physics no God would have any need of, as well as the necessity of brains for thought and of microbiogenesis and eonic evolution for life, the absence of moral design or governance in the world, and so on.

  • “A multiverse might lead to the proliferation of Boltzmann brains” is a moot point.

Adelstein’s math here is all wrong. I suspect he is just aping the same tinfoil-hat argument that apologetic nutters have been pushing like a confidence game lately, and he’s just not astute or critical enough to realize he’s been had. There is in fact no viable Boltzmann brain argument against multiverse theory. That’s all bollocks. Boltzmann universes are vastly more frequent than Boltzmann brains, and indeed that’s a feature of many modern multiverse cosmologies.

On the Carroll-Chen model, for example, the comparative probability is 1 in 10^10^10^56 (for a Boltzmann universe) vs. (at least) 1 in 4^2,000,000,000^20,000,000,000,000 (for a Boltzmann ‘human brain’ that has enough blood in its vessels to be capable of a second’s thought, which is on the same order of magnitude as a Boltzmann rabbit, or about 200 billion functioning cells, complete with all the necessary cytoplasm and mitochondria and ATP and everything else). So spontaneous Big Bangs will occur at least 10^1,000,000,000,000 times more often than even a single Boltzmann brain.

And those brains will instantly die in the irradiated vacuum of space. Which is how we know we aren’t one. Meanwhile, each habitable Boltzmann universe will produce some 10^20 or more actual brains. Place that against the vastly rarer lone Boltzmann brains, which amount to only one brain apiece over that same span of time, and you know which brain any randomly selected observer is far more likely to be—even before you realize you aren’t immediately suffocating in the dead of space (although, because you wouldn’t have eyes or any other sensory organs, you wouldn’t even know that—you’d think fewer thoughts than a bowl of petunias). So, really, a patch of spacetime is ten to the trillionth power more likely to spontaneously erupt in a Boltzmann Big Bang than to assemble a Boltzmann brain. So there just is no plausible chance for those brains to proliferate faster than ordinary brain-making worlds.
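
Numbers like these can only be compared in log space, since they overflow any direct representation. A minimal sketch, assuming the brain figure is read as the tower printed above (4 raised to the power 2,000,000,000^20,000,000,000,000), showing its improbability clears the claimed ratio floor of 10^1,000,000,000,000 with enormous room to spare:

```python
import math

# Work with double logarithms: the raw numbers overflow any float.
# Assumption: the Boltzmann-brain improbability is 1 in 4**(2e9**2e13),
# reading the tower above as printed.

log10_X = 2e13 * math.log10(2e9)        # log10 of X = 2e9**2e13 -> ~1.86e14

# log10(1/P(brain)) = X * log10(4); take log10 again to keep it finite:
loglog_brain = log10_X + math.log10(math.log10(4))   # still ~1.86e14

# The claimed ratio floor, 10**1e12 ("ten to the trillionth power"):
loglog_floor = math.log10(1e12)         # 12.0

# The brain's double-log improbability (~1.9e14) vastly exceeds the
# floor's (12), so the "at least" claim holds on this reading, granting
# that Boltzmann Big Bangs are comparatively cheap, as the model posits.
print(loglog_brain, loglog_floor)
```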

  • And the supposed need for “fine-tuning for scientific discovery” is tinfoil hat.

If you do want to explore why that’s bollocks, see The Myth That Science Needs Christianity. Even Adelstein’s list of lucky things we get to see completely ignores all the things we have the bad luck not to be able to see, and all the extremely complicated shit we had to do to even see the things we can, evincing that this is just random, not designed—a classic selection-bias fallacy very typical of Christian apologists, who failed to learn Everything You Need to Know about Coincidences. Meanwhile, it is very difficult to find things we can’t “even in principle” detect, because for something to be forever undetectable it would have to not affect us or our world in any observable way—so how would we know it existed for us to list it among things beyond our observation? The irony here is that the physics underlying the fundamental constants of the FTA is precisely the kind of shit that’s most egregiously hidden from us, suggesting that if God is responsible for it, he doesn’t want us to know it. Theology always ends up tied in contradictory knots like this. That’s why it’s ridiculous.

At this point Adelstein makes this convoluted argument:

Fifth, if atheism’s only way out of the problem is to invoke a multiverse, then that still favors theism. Imagine if there was somehow a naturalistic explanation of fine-tuning that invoked the fact that all the atoms said “made by God”—somehow them saying that would fix the other laws and constants. Well even if that solved fine-tuning, it would still favor theism because the probability of the atoms saying that is much higher on theism than on atheism.

It’s hard to discern his point. I assume it is not an inept way of saying human intelligence begs explanation, because that’s not relevant to the fine tuning argument, which is about fundamental constants making life possible, not “Gosh, how are there people?” Evolution by natural selection already curbstomped that argument a hundred years ago. I think, rather, that Adelstein is saying “if” atheists “must” choose some weird explanation for FT that “itself” contains evidence that is improbable without a God “then” they aren’t escaping the FTA, which is analytically true, but does not describe any existing physical theory of FT.

Adelstein then goes on to claim multiverses are more likely on theism, but that’s neither true nor relevant. There is no non-question-begging way to predict a multiverse from “God exists” (I think Adelstein is referencing an inept argument he makes elsewhere, in which I caught egregious mathematical errors last time; but he doesn’t articulate it here, so who knows). But that doesn’t matter. Because even if God liked multiverses, they’d all be games and paradises with much better management. So that model is still falsified by observations same as ever. And even if God wanted them all to look like godless universes (for some as-yet-unexplained and totally gerrymandered reason), we still don’t get any evidence that God caused them—rather than the godless causes they are then all designed to look like they came from.

The reason multiverse theory is a problem for the FTA is that it can be derived as an inevitable outcome of simple physics without gods, which is the only way anything could exist without gods. So even if God liked multiverses, that wouldn’t help the FTA. Even at best, FT is equally likely on both natural and divine multiverse models and so cannot discriminate between them; while at worst, because God has more options than nature and the evidence matches nature, not God, observations in fact confirm the natural multiverse model, not the divine one.
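
The point can be put as a toy Bayes factor. The specific numbers below are hypothetical placeholders, not measurements; the argument only needs the likelihood on God to fall below 1, because God has live options that blind physics lacks:

```python
# Toy likelihood comparison (hypothetical numbers, for illustration only).

# On a natural multiverse, observers can only ever find themselves in a
# life-permitting, blind-physics universe: the evidence is guaranteed.
p_evidence_given_natural = 1.0

# A God could also have made paradises, disembodied minds, morally governed
# worlds, small young universes, etc., so a blind-physics-looking universe
# gets only some fraction of the probability. The 0.1 is a stand-in.
p_evidence_given_god = 0.1

bayes_factor = p_evidence_given_natural / p_evidence_given_god
print(bayes_factor)  # 10.0: the evidence favors the natural multiverse
```

Whatever the exact fraction, so long as God has options nature lacks, that factor exceeds 1 and the evidence favors the natural model.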

So Adelstein seems to have forgotten how logic works here.

But he does that a lot. Especially in his final two “arguments”:

  • “fine-tuning is much likelier conditional on theism than on naturalism” is false. I already explained why above: God has other options for making observers; nature doesn’t. And God is no more a priori likely than any natural-cause model.
  • And “for the multiverse to eliminate the force of fine-tuning, the odds of a multiverse have to be high on atheism” is also false. As I’ve already explained, that probability only has to be higher than the extreme improbability of there even being a God, which is not a high bar to clear.

And that’s it. That’s all he’s got (at least as of today—I can’t vouch for any future changes he might make).

Conclusion

Adelstein is bad at this. Really bad. And being bad at it is why anyone still believes in God. But he’s at least not a fraud like too many apologists are. He is sincere. He’s just naive, uninformed, uncritical, and arrogant (in the sense that he cannot take criticism or admit an error so as to correct it). He never checks his own facts beyond the superficial. He never burn-tests his own arguments (The Scary Truth about Critical Thinking). He writes from the armchair without doing much research into what more professional philosophers and scientists on the other side of his every point say and why. And he has a poor command of logic (his reliance on fallacies, non sequiturs, and erroneous math is prodigious). Which is all probably why he believes in God. If he corrected all his mistakes, he’d be where I am now. And I know. Because that’s how I got here. My vague childhood Deism, and then devout Taoism, did not survive competent review, because I made a point of finding out how I would know if I was wrong about anything, and then applied what I learned to find out.

Which is also how I discovered that the Fine Tuning Argument is a terrible argument. Its premises are not known to be true, and so neither can its conclusion be. It does not charitably engage competing theories for the same observations, or the arguments and evidence for those theories. It is pseudoscientific (relying on bogus or misrepresented science, and ignoring or disdaining the opinions of the vast majority of actual scientists) and hubristic (imagining math has been done that never was and never even can be). It’s dependent on circular reasoning and on omitting defeating information (like the extremely low prior probability of gods and the much simpler theoretical models now available in physics).

But above all, it fails to properly derive priors or likelihoods. Natural causes are highly empirically and analytically favored over God in our background knowledge, and therefore must necessarily enjoy a much higher epistemic prior probability. And all the weird observations that a Natural cause would uniquely predict are observed, while none of what God would uniquely predict is observed; therefore Natural-cause models must necessarily enjoy a higher likelihood. And that which has the higher prior and the higher likelihood is always necessarily more epistemically probable, and is thus what should be believed rather than the alternative.
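
In odds form this is a theorem, not a judgment call. With N for natural causes, G for God, and E for the total evidence:

```latex
% Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
\frac{P(N \mid E)}{P(G \mid E)} = \frac{P(N)}{P(G)} \cdot \frac{P(E \mid N)}{P(E \mid G)}
% If both factors on the right exceed 1 (higher prior and higher likelihood
% for natural causes), the posterior odds necessarily exceed 1 as well.
```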

Adelstein has not changed any of these facts. So he is, quite simply, wrong. The Fine-Tuning Argument simply does not work.

