Clearing the dusty shelves of old unanswered things. One such is the Lowder-Barnes critique of my application of Bayesian reasoning to reverse the fine tuning argument into a case against God, rather than an argument for God. Actually this is not my argument. It is the argument of three prominent mathematicians in two independent studies. My popularization of it (in conjunction with more data from other physical scientists I cited) appeared in my chapter “Neither Life Nor the Universe Appear Intelligently Designed” in The End of Christianity (ed. John Loftus 2011).

The original versions of the argument appeared as cited therein: Michael Ikeda and Bill Jefferys, “The Anthropic Principle Does Not Support Supernaturalism” (an earlier version of which appeared in Michael Martin and Ricki Monnier, eds., The Improbability of God in 2006) and Elliott Sober, “The Design Argument” (an earlier version of which appeared in W. Mann, ed., The Blackwell Guide to the Philosophy of Religion in 2004; which corrects my footnote in TEC).

Cosmologist Luke Barnes critiqued this in a series of posts, and Jeff Lowder concurred somewhat in The Carrier-Barnes Exchange on Fine-Tuning (which also rounds up all the links in the debate, including my contributions). My principal point then was that Barnes wasn't even responding to my actual argument (and thus neither to any of the mathematicians, one of whom is also an astrophysicist, who originated it). He still hasn't. Barnes had also tried the same tactics against Victor Stenger on much the same point. In my comments debate with Barnes it became increasingly clear he was a kook who simply never understood or addressed what I actually said in my chapter, and continued to refuse to after repeated requests that he do so. A debate with such a person is impossible. One would make more progress arguing with a wall. So I have nothing further to say to him. My chapter as actually written already refutes him; he has never responded to its actual content.

But Jeff Lowder is not a kook. He is a responsible philosopher who listens, exercises considerable caution, and will strive to get an opponent's arguments correct. So I am writing this entry today in response to his take on our debate (a take which wisely avoided even discussing most of Barnes's weird and irrelevant arguments).

Falling for the Kook’s Framing

Lowder agrees with Barnes on a few things, but only by trusting that Barnes actually correctly described my argument. He didn’t. Lowder would do well to revisit my actual chapter, notes and all, without the distorting lens of Barnes’s approach. Lowder’s overall conclusion was (my numbering for convenience):

In particular, I agree with the following points by Dr. Barnes.

  • [1] “Bayes’ theorem, as the name suggests, is a theorem, not an argument, and certainly not a definition.”
  • [2] “Also, Carrier seems to be saying that P(h|b), P(~h|b), P(e|h.b), and P(e|~h.b) are the premises from which one formally proves Bayes’ theorem. This fails to understand the difference between the derivation of a theorem and the terms in an equation.”
  • [3] “Crucial to this approach is the idea of a reference class – exactly what things should we group together as A-like? This is the Achilles heel of finite frequentism.”
  • [4] “It gets even worse if our reference class is too narrow.”
  • [5] “This is related to the ‘problem of the single case’. The restriction to known, actual events creates an obvious problem for the study of unique events.”
  • [6] “Carrier completely abandons finite frequentism when he comes to discuss the multiverse.”
  • [7] “Whatever interpretation of probability that Carrier is applying to the multiverse, it isn’t the same one that he applies to fine-tuning.”
  • [8] “If we are using Bayes’ theorem, the likelihood of each hypothesis is extremely relevant.”

None of these are valid points (except possibly the last, depending on whether he is using the term "likelihood" correctly).

As to [1]: ever since the Principia Mathematica it has been an established fact that nearly all mathematics reduces to formal logic (the exceptions, captured by Gödel's theorem, are obscure and not relevant to the present case, since the relevant probability theory can be deduced from Willard arithmetic, which is immune to Gödel's theorem). Thus, within the class of theorems we are discussing, every mathematical theorem is tautologically identical to a syllogism. Which is an argument. I outline how one can reproduce Bayes' Theorem as a syllogism in Chapter 4 of Proving History, pp. 106-14.

I didn't carry out the reduction, but anyone familiar with both Bayes' Theorem (hereafter BT) and conditional logic (i.e. syllogisms constructed of if/then propositions) can see from what I show there that BT is indeed reducible to a syllogism in conditional logic, where the statement of each probability-variable within the formula is a premise in formal logic, and the conclusion of the equation becomes the conclusion of the syllogism. In the simplest terms, "if P(h|b) is w and P(e|h.b) is x and P(e|~h.b) is y, then P(h|e.b) is z," which is a logically necessary truth, becomes the major premise, and "P(h|b) is w and P(e|h.b) is x and P(e|~h.b) is y" are the minor premises. And one can prove the major premise true by building syllogisms all the way down to the formal proof of BT, again by symbolic logic (which one can again replace with old-fashioned propositional logic if one is so inclined).
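
To see the point concretely, here is a minimal sketch of my own (not code from the chapter, and the example numbers are arbitrary) of BT treated as exactly that kind of argument: assert the three premises as values, and the conclusion follows necessarily from the form.

```python
def bayes_posterior(p_h_given_b, p_e_given_hb, p_e_given_not_hb):
    """Bayes' Theorem treated as an argument form.

    Minor premises (the asserted inputs):
      P(h|b)    = p_h_given_b        # prior probability of the hypothesis
      P(e|h.b)  = p_e_given_hb       # probability of the evidence if h is true
      P(e|~h.b) = p_e_given_not_hb   # probability of the evidence if h is false
    Major premise (the theorem): if those three values hold, then P(h|e.b)
    is fixed as computed below, as a logically necessary consequence.
    """
    p_not_h_given_b = 1 - p_h_given_b  # P(~h|b) follows from P(h|b)
    numerator = p_h_given_b * p_e_given_hb
    denominator = numerator + p_not_h_given_b * p_e_given_not_hb
    return numerator / denominator     # the conclusion: P(h|e.b)

# Example premises: prior 0.5, evidence four times likelier on h than on ~h.
print(bayes_posterior(0.5, 0.8, 0.2))  # -> 0.8
```

Deny the conclusion and you must deny one of the premises or deny the validity of the form; that is all "BT is an argument" means here.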

So, yes, Bayes' Theorem is an argument. More specifically it is a form of argument, that is, a logical formula that describes a particular kind of argument. The form of this argument is logically valid. That is, its conclusion is necessarily true when its premises are true. Which means, if the three premises of BT are true (each a proposition assigning a probability to one of its variables), the epistemic probability that results is then a logically necessary truth—not in the sense that it can't be false, but in the sense that you cannot deny the conclusion without either rejecting logic or denying that one of the premises (the assigned probabilities) is true. Now, one can indeed challenge a valid argument's conclusion by challenging its premises. But it is absurd to claim that it is therefore not an argument. This is the kind of kookery Barnes is promulgating and I'm a bit perplexed to see Lowder fall for it.

As to [2]: The quotation is literally nonsensical. I cannot understand why Lowder even thinks the statements here are intelligible. The derivation of the theorem is one thing; the variables in it are another. The variables are not "the derivation"; they are propositional statements, in symbolic notation. For example, "P(h|b)" is symbolic notation for the proposition "the probability that a designated hypothesis is true given all available background knowledge but not the evidence to be examined is x," where x is an assigned probability in the argument.

Barnes could have said that Bayes' Theorem is not itself an argument but the form of an argument—as I just now said myself—but that would expose the kookery of his point, since I am not using "the form" of the argument stripped of input, I am stating inputs. He's trying to challenge the inputs, not the logical formula by which I derive a conclusion from those inputs. (At least I hope! If Barnes actually thinks BT is logically invalid, he has gone beyond the status of kook into the camp of the outright insane.) But that's not what he argued. What he argued literally makes no sense. So I cannot fathom what Lowder thinks he is agreeing with.

As to [3]: The reference class is indeed a question at issue. One I addressed in my chapter. What I said about it, Barnes never interacts with. So again I have no idea what Lowder is agreeing with. On the general problem of deriving frequencies from reference classes, Bayesians have written extensively (philosophers and mathematicians, many of considerable note). Barnes can go argue with them if he wants. On the particular problem relating to this case, I will get to that later. But note that the argument that Barnes is attacking does not even use a prior probability. Our argument is that the evidence of fine tuning reduces the probability of God to nothing more than the prior probability of God. What one then says that prior probability is, is a wholly different question.

Barnes never addresses anything I said about what that prior likely was or how I derived my estimate of it (which estimate was, incidentally, the grossly generous value of 25%). Instead Barnes attacked what I addressed in the chapter as the “threshold” probability discussed in note 31 (p. 411). Yet despite my repeatedly asking him to address what I actually said about that, he never did, and consistently ignored that note and its content (and then incorrectly described what I said in the following note 33 instead!). Why Lowder thinks that is good form in a debate escapes me. If an opponent refuses to address what you said, they simply aren’t debating with you anymore.

As to [4] and [5]: These are the same argument, that the universe is a single case, thus not amenable to frequency analysis by reference class. This is false. In fact, it should be so obviously false to a cosmologist that I cannot fathom how one could make this argument. In general, unique cases are amenable to frequency analysis by reference class. Because every event in the universe is unique. What it shares with other events is what constructs a reference class (see my example of the "murder of William II" in Proving History, pp. 273ff.). Thus uniqueness is a red herring. There are always abstract features anything shares with many other things, from which an epistemic probability arises. Barnes, however, repeatedly confused epistemic with "true" probabilities, demonstrating he doesn't know how Bayesian epistemology works, certainly not well enough to be a competent critic of it (on the distinction, see PH, Index, "epistemic probability"). He doesn't even know about the role of hypothetical reference classes in epistemology (see PH, pp. 257ff.). And notably, he never once interacted with my actual argument for the prior, and thus never rebutted it (in TEC, pp. 280-84, notably many pages before I got to the fine tuning case). Nor did he interact with any of my actual arguments for the unknown coincidence threshold (that note 31 I just mentioned he kept ignoring despite my continually asking him to respond to it).

This last is the more bizarre gaffe of his, because calculating the range of possible universes is a routine practice in cosmological science (as we'll see Barnes himself later insists!). Any cosmologist will tell you that, so far as we know (remember, we are talking about epistemic probabilities!), our universe is not the only logically possible one to have arisen. In fact it is not sitting in a reference class of one, but in a reference class of an infinite number of configurations of laws and constants. One can indeed say that the frequency of life-bearing universes in that possibility space is unknown. That is indeed one of the things I say myself in the chapter Barnes pretends to be answering (remember, Barnes never interacts with my actual arguments). But it is absurd to say that there is only one universe in that possibility space. The more so as that's not the reference class we need to use anyway. There are indeed non-hypothetical elements to count up as a reference class instead (all the things that happen without need of gods vs. all the things we have never found god causing: TEC, pp. 282-84; in other words, the prior probability of naturalism).

Barnes would notice that if he didn't also repeatedly confuse my estimate of the prior (at 25% that "God created the universe") with the threshold probability of coincidences (a distinction I illustrated with the "miraculous machinegun" argument I discuss, a discussion Barnes never actually interacted with, in TEC, pp. 296-98). I tried explaining the difference to him. But that all fell on deaf ears. It's not clear to me that Lowder even understands the difference. But the difference is crucial, because the fact that the threshold for coincidence is unknown in this case renders all arguments from that threshold invalid. Leaving only the prior probability we started with (that 25% I grossly over-estimated at the start of the chapter).

Here is why (this is a complete reproduction of the note 31 that Barnes and Lowder both ignore and never respond to, despite my repeatedly asking Barnes to do so):

At this point one might try to argue that the prior probability (for the universe case) should be based this time on a narrower reference class of “super improbable” events, such as the set of all things William Dembski quantifies with his probability threshold of 1 in 10^150 (see notes 6 and 13 above), based on the assumption that the ratio of designed-to-chance causes within that set should strongly favor design. But even if this could get us to any actual ratio of NID to ~NID (see discussion of prior probability earlier on why, for lack of data, it probably can’t), it is still inapplicable to the universe’s origin because that threshold was based on the size and age of the universe itself.

We are talking about an event beyond that limiting sphere, and thus must calculate a threshold relative to a larger total set of opportunities, which is precisely what we don’t know anything about. For instance, if the universe in some form will continue to exist for 10^1,000,000 years, then it could easily contain an event as improbable, and that event would as likely be its origin as anything else. In fact, since quantum mechanics entails that a big bang of any size and initial entropy always has some (albeit absurdly small) probability of spontaneously occurring at any time, and since on any long enough timeline any nonzero probability approaches 100 percent no matter how singularly improbable, it could easily be that this has been going on for untold ages, our big bang merely being just one late in the chain. We could be at year 10^1,000,000 right now, and as this conclusion follows from established facts and there is no known fact to contradict it, it’s no more unlikely than the existence of a god (and arguably a great deal more likely).

Since we therefore don’t know what the applicable probability threshold is, we can’t use one (other than by circular logic). To infer design we simply need the result to have features more expected on design than chance, and features that are necessary for observers even to exist will never be such (because those features will appear in both outcomes 100 percent of the time). Dembski’s threshold may pertain to events now in the universe, however, precisely because those outcomes are not necessary. For example, if the total probability of terrestrial biogenesis were 1 in 10^1,000,000 every fourteen billion years, then we would expect to find ourselves much later in the history of the universe—it would not necessarily be the case that we would observe ourselves only fourteen billion years after the big bang; whereas it would necessarily be the case that the universe came to exist with the right properties for us to be observing it at all. Hence the two problems are not commensurate.
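
(As an aside on one step in that note: the claim that on a long enough timeline any nonzero probability approaches 100 percent is just the standard result for repeated independent chances; this is my restatement, not the chapter's notation.)

```latex
P(\text{at least one occurrence in } n \text{ trials}) \;=\; 1 - (1 - p)^n \;\longrightarrow\; 1
\quad \text{as } n \to \infty, \ \text{for any fixed } p > 0 .
```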

This argument Barnes never rebutted. In short, since the only universes that can ever be observed (if there is no God) are universes capable of producing life, if only fine tuned universes are capable of producing life, then if God does not exist, only fine tuned universes can ever be observed. This counter-intuitively entails that fine-tuning is 100% expected on atheism. But a lot of things in probability theory are counter-intuitive—as the Monty Hall Problem illustrates (a problem numerous mathematics professors made fools of themselves by denying the facts of). So its hurting our brains is not an argument against it. It remains true all the same.
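
If you doubt how badly intuition fares at this sort of thing, the Monty Hall result itself can be checked in a few lines of simulation (a quick sketch of my own, not anything from the chapter):

```python
import random

def monty_hall_trial(switch):
    """One round of the Monty Hall game; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print(sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials)   # ~0.667
print(sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials)  # ~0.333
```

Switching wins about two thirds of the time, exactly as the counter-intuitive analysis says. Intuition loses; the math wins.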

Now, there are two points Barnes tried to attack about this. One was the result itself, that P(fine tuning|atheism) = 1, which he attacked by trying to reimagine a prior probability of observers; I'll get to that later. For now, the other was something to do with this threshold probability, i.e. the (faulty) intuition that such an amazing coincidence has to be designed. What I showed is that the threshold probability is a non-starter. You can't avoid the conclusion by insisting that "at some degree of improbability" it "has" to be design. For some things you can. But not for the universe itself. That is what I demonstrated in the argument quoted above. The argument Barnes repeatedly ignored.

We are simply always left with the prior probability that gods exist and design universes and things. That’s all we have to go on. It literally does not matter how improbable our universe is (as I explain in detail in TEC, pp. 292-98, also ignored by Barnes). That is indeed counter-intuitive. But it is simply the fact of the matter.

As to [6] and [7]: Not only did I never argue my chapter's conclusion from a multiverse, I explicitly said I was rejecting the existence of a multiverse for the sake of a fortiori argument. That Barnes ignored me, even though I kept telling him this, and instead kept trying to attack some argument from multiverses I not only never used but explicitly said I wasn't using, is just more evidence of his kookery. Why Lowder thinks it's a valid point astonishes me to no end. I did outline an argument from multiverses in note 20 (TEC, pp. 408-09, which I only expanded on elsewhere—and note that objections based on transfinite sets are addressed extensively in the comments there). But I immediately ended that note with "But we don't need this hypothesis, so I will proceed without it." Without it. Case closed. Which is why this is so weird. Because this is where Barnes flips his lid about "finite" frequentism (in case you were wondering what that was in reference to). Note that I at no point rely on transfinite frequentism in my chapter: even where I touch on it in note 20, I explicitly set aside the result and did not employ it anywhere in the chapter's argument. I told this to Barnes repeatedly. He never listened to me. And yet everything he said about multiverses and finite frequencies is completely irrelevant to my chapter's argument.

As to [8]: This statement simply repeats what I myself argue in my chapter in TEC. Illustrating how much Barnes is simply not even interacting with that chapter's actual argument. Notice that even my note 31 repeats this statement: "To infer design we simply need the result to have features more expected on design than chance." But fine tuning can never be such a feature, because "features that are necessary for observers even to exist will never be such": they will never appear more frequently in God-made universes than in non-God-made universes (given observers, they appear in both 100 percent of the time). In fact it's the other way around: only God-made universes can contain life without being fine tuned (via application of his miraculous powers). Meanwhile, all universes not made by God that contain observing life will be fine tuned. If there is no God, you will never observe yourself to be in a non-finely-tuned universe. That is literally logically impossible. Unless, of course, fine tuning isn't necessary for life. In which case, it can't be evidence of design either.
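
To restate point [8] in symbols (my compression of the chapter's reasoning, not a quotation from it): the relevant likelihoods, conditioned on the fact that observers exist, are

```latex
P(\text{fine tuning} \mid \text{observers} \wedge \lnot\text{God}) = 1,
\qquad
P(\text{fine tuning} \mid \text{observers} \wedge \text{God}) \le 1 ,
```

so the likelihood ratio for God over not-God is at most 1. Observing fine tuning therefore can never raise the probability of God; at best it leaves it unchanged, and it lowers it if there is any chance God would sustain observers without fine tuning.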

On every point so far, it appears Lowder bought Barnes's bizarre kook-worthy framing of the debate, hook, line, and sinker, and didn't notice that in not one case is Barnes making a correct statement about the argument of my chapter.

A Brief Red Herring

After stating those mysterious agreements with Barnes, Lowder implies that a frequency interpretation is not necessary to this case, based on the common belief that there are non-frequency interpretations of probability, when in fact I have demonstrated there aren’t. Even “degrees of belief” are a frequency measure (they are simply statements about the frequency of beliefs based on comparable scales of evidence being true), which end up reducing to propositions about ordinary frequencies (as estimations of the true frequency, which conclusion we can prove by observing that they are always adjusted toward the true frequency as information about the latter is acquired). I demonstrate this in Proving History, pp. 265ff. One can debate that (though I’m quite confident you will lose; and in any case, you need to read those pages first, and it’s a separate debate from what we are talking about now). But that’s moot for the present, since Lowder doesn’t produce any argument from this statement of his against my chapter’s argument.

Part Deux

What Lowder then argues is that Barnes goes wrong in misrepresenting my argument. Lowder is correct. Actually, Barnes gets my argument wrong at pretty much every single point. But Lowder points to one example of his own interest, where Barnes mistakenly thinks that when I say it “follows that if we exist and the universe is entirely a product of random chance…then the probability that we would observe the kind of universe we do is 100 percent expected,” I was referring to its exact structure, when I make very clear in the chapter that that is not what I am talking about. I explicitly outline that I am only referring to the generic features necessary for life (e.g. that there be fine tuning, not that it be this specific selection of it). So instead of this being, as Barnes foolishly said, a fallacy of “affirming the consequent,” I was in fact stating a literal tautology. If fine tuning is necessary for life, and there is no God, then necessarily life will only ever exist in correlation with fine-tuning. This is because all universes without fine tuning will thus by definition not contain life. Therefore life will never observe itself being in any other kind of universe than one that’s fine tuned. Even if God did not cause it! Whereas, a God-made universe alone could contain life without fine tuning—because a god can work miracles. This is the actual argument of my chapter (on the matter of cosmology). Barnes to this day has never responded to it.

But Lowder then wonders if I am correct to have said, “Would any of those conscious observers,” in a randomly generated universe, “be right in concluding that their universe was intelligently designed to produce them? No. Not even one of them would be.” Lowder queries:

It would be most helpful if Carrier would explicitly defend this statement: “No. Not even one of them would be.” Unless I’ve misunderstood his argument, I think this is false. If we include in our background knowledge the fact that Carrier’s hypothetical conscious observers exist in a universe we know is the result of a random simulation, then we already know their universe is the result of a random simulation. Facts about the relative frequency aren’t even needed: we know the universe is the result of a random simulation. If, however, we exclude that from our background knowledge, so that we are in the same epistemic situation as the hypothetical observers, then things are not so easy. Again, it would be helpful if Carrier could spell out his reasoning here.

This statement tells me Lowder did not read my chapter, or at least not with sufficient care. He is still buying into Barnes's bonkers framing of the debate, instead of going to the source and reading what I actually argued. The argument they are excerpting starts by explaining that we are (as outsiders) judging the judgment of people inside a simulation in which all the universes are randomly generated (by us). That's the context. Here then is the sentence Barnes and Lowder quoted from my chapter, together with the very next sentence after it:

Would any of those conscious observers be right in concluding that their universe was intelligently designed to produce them? No. Not even one of them would be. If every single one of them would be wrong to conclude that, then it necessarily follows that we would be wrong to conclude that, too (because we’re looking at exactly the same evidence they would be, yet we could be in a randomly generated universe just like them).

Note that what is being said here is that they would not be right to conclude that. That is, if anyone said "we see fine tuning, therefore intelligent design," they would be flat wrong. We, the outsiders observing them, would be the ones who realize they are wrong, and why. And what does this teach us? That we might be them. Consequently we also cannot claim "we see fine tuning, therefore intelligent design." Because the example proves to us that fine tuning never entails that. To the contrary, every randomly generated universe that has life in it will be finely tuned. That is what the example illustrates.
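
The simulation example can even be run as a toy calculation. Here is a minimal sketch of my own (the rarity of the life-permitting window is a made-up number, and nothing here is from the chapter): generate godless universes with a random constant, grant observers only to the rare fine-tuned ones, then look at what those observers would see.

```python
import random

LIFE_PERMITTING = (0.4999, 0.5001)  # a deliberately tiny "fine-tuned" window (made up)

def random_universe():
    """One godless universe: a constant drawn at random; observers exist only if it is fine-tuned."""
    constant = random.random()
    fine_tuned = LIFE_PERMITTING[0] <= constant <= LIFE_PERMITTING[1]
    has_observers = fine_tuned  # no God, so no miracles: life requires fine tuning
    return fine_tuned, has_observers

universes = [random_universe() for _ in range(1_000_000)]
observed = [fine_tuned for fine_tuned, has_observers in universes if has_observers]

print(len(observed) / len(universes))                        # tiny: fine tuning is very rare overall
print(sum(observed) / len(observed) if observed else None)   # 1.0: every *observed* universe is fine-tuned
```

No matter how narrow you make the window, the second number is always 1: conditioned on observers existing, fine tuning is certain, which is exactly what the simulation example in the chapter is meant to show.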

Therefore, in cosmology, there is no meaningful correlation between fine tuning and intelligent design. It's equally likely either way. Or worse: because, unlike godless physics, God can make life-bearing universes without fine-tuning, it is actually slightly less likely we'd be in a God-designed universe if we observe fine tuning. Because there is a nonzero probability God would make an un-tuned universe to contain us, whereas there is a zero probability godless physics ever will (note that "God" here does include "techno-gods," i.e. non-supernatural designers, when we include simulated universes in the range of possibilities, but that doesn't change the overall argument, since fine tuning also does not provide evidence of techno-gods, for the exact same reason: see my treatment of the techno-god scenario earlier in the chapter, TEC, p. 281). By contrast, the correlation between observers in godless universes and fine tuning is fully 100%. Every time there is the one, there will always be the other.

Yes, this does mean that we could still be in a God-tuned universe (and so could the people in the simulation example conclude for themselves), but that simply reduces to the prior probability. The fine tuning makes no difference to the probability. So if it starts 50/50, it remains 50/50. And so we'd (and they'd) still be wrong to conclude they were in an intelligently designed universe, merely from observing fine tuning. This should, of course, be the obvious point of my chapter. Since I say it over and over again with multiple examples (not just this one).

Part Trois

Barnes claims to have hundreds of science papers that refute what I say about the possibility space of universe construction, and Lowder thinks this is devastating, but Barnes does not cite a single paper that answers my point, or that answers the scientists I cited (like Stenger and Krauss): that none of these attempts to calculate the possibility space for universes actually determines the frequency of possible universes that would contain life. And since I wrote this article, numerous leading cosmological physicists have gone on record siding with me on this, so Barnes is pretty well cooked here. I'm voicing the expert consensus. He's ignoring it. He is thus simply wrong. Because we don't know how many variables there are. We don't know all the outcomes of varying them against each other. And, ironically for Barnes, we don't have the transfinite mathematics to solve the problem. I am not aware of any paper in cosmology that addresses these issues and arrives at a non-speculative number for how many universes will contain observers. The consensus is: we don't know. We have neither the data nor the tools to know.

This is another example of where Lowder sadly is misled by Barnes’s misrepresentation of my argument. In this case, it’s not even an argument in my chapter in TEC, and thus actually has nothing to do with my use of Bayes’ Theorem. It’s unclear if Lowder even realizes this…Barnes has skipped to quoting and arguing against a completely unrelated blog post of mine. And then he fakes what I said in it by separating one line from its very next sentence. My argument in the article was, “We actually do not know that there is only a narrow life-permitting range of possible configurations of the universe.” Barnes can cite no paper refuting that statement. I give two reasons why. Barnes pretends I only gave one. And then when he gets to the second, he forgets the relevance of my second argument to the first.

Only one of my two arguments for that general thesis (that we don't know) is that some studies get a wide range, not a narrow one (these are cited by various experts including Stenger and Krauss; it is impossible that Barnes is unaware of the papers that argue this, if he has indeed surveyed them all; I know they exist, because I've read more than one; e.g. Fred Adams, "Stars in Other Universes: Stellar Structure with Different Fundamental Constants," Journal of Cosmology and Astroparticle Physics 8 [August 2008]…note this is not the "monkey god" thing Barnes spitefully loathes; which suggests to me he is not being honest in what he claims to know about the literature). So we have inconsistent results. That is one reason to conclude we don't really know. [Though Barnes has subsequently convinced me that there could be good rebuttals to all of these, so I won't depend on them further.] Then I go on to give the second reason, which is that even those papers are useless.

Notice Barnes does not tell his readers this. Notice that Lowder didn't even notice that I said that. Lowder appears to have been duped by Barnes into thinking I said it was a fact now that "the number of configurations that are life permitting actually ends up respectably high (between 1 in 8 and 1 in 4…)." Nope. Because my very next sentence, the sentence Barnes hides until later, and pretends isn't a continuation of the same argument, says: "And even those models are artificially limiting the constants that vary to the constants in our universe, when in fact there can be any number of other constants and variables, which renders it completely impossible for any mortal to calculate the probability of a life-bearing universe from any randomly produced universe. As any honest cosmologist will tell you." Barnes is not an honest cosmologist. Again, not one of the papers he compiles a list of addresses this problem—or the mathematical problem, on which I even explicitly cited the latest paper, and which Barnes notably erases from his quotations of me, evidently preferring to pretend it didn't exist rather than attempt to answer it.

How Lowder thinks this is even honest debate, much less "devastating," is again bewildering to me. These are creationist tactics: misrepresent what someone says by clever quote mining, make false claims about the literature, hide the contrary literature (even when your opponent cites it), and never address what your opponent actually said, while blathering with bombast about how stupid he is for making an argument that in fact he never made. Lowder should not be falling for this routine.

It's just worse that Lowder's red-flag detector didn't go off when Barnes argues that there "can't" be other constants (i.e. other forces, dimensions, and particles than the ones in our universe) because, "For a given possible universe, we specify the physics. So we know that there are no other constants and variables. A universe with other constants would be a different universe." This is an absolute howler of an argument. It should have puzzled Lowder, not evoked an "I think I agree with this." Walk through the thinking here. We know there cannot (!) be or ever have been or ever will be a different universe with different forces, dimensions, and particles than our universe has, because "we specify the physics" (Uh, no, sorry, nature specifies the physics; we just try to guess at what nature does and/or can do) and because "A universe with other constants would be a different universe." WTF? Um, that's what we are talking about…different universes!

I literally cannot make any sense of Barnes’s argument here. I cannot even imagine what Lowder thinks that argument was. Obviously, among all the possible universes that could result from random chance, infinitely many will indeed be different from ours, and will indeed have different forces, dimensions, and particles than ours. It is appalling that any self-respecting cosmologist would attempt to deny this, or try to fool people into thinking they were denying it. If Barnes has some fabulous logical proof that universes with different forces, dimensions, and particles than ours are logically impossible, I definitely want to see that proof, because it would be a great asset against creationism! I won’t hold my breath.

It's unclear why Barnes's subsequent points are even relevant to anything I said, or what relevance Lowder sees in them warranting a mention. But as Lowder says nothing substantive at this point, there is nothing more to respond to. Otherwise, this digression only relates to how "fine" the fine tuning needs to be. It could be "not very." But I would still count that as fine tuning for my argument in TEC, since even at best the probability is still well below even odds, and improbability is what is supposed to carry an argument for design.

Part Quatre: The Real Heart of the Matter

Finally, Barnes switches back to my chapter in TEC. Why he took that interlude on an unrelated blog article, I don't know (he says someone pointed him to it, so I guess it was a squirrel that distracted him). But he goes back now because of our debate in comments on my blog, which annoyed him to no end, so now he tries to rebuild his argument against my Bayesian argument about fine tuning in the TEC chapter. But all he does is again completely ignore everything in my chapter.

Once again Barnes tries to argue a point without addressing my existing responses to it in TEC. This is the wall I'm arguing with. Why Lowder thinks a wall is a good debate opponent is another mystery. Others then finally pointed Lowder to another endnote in my TEC chapter that reveals what Barnes is ignoring (actually, just one of a dozen things in that chapter that he is ignoring), and Lowder seems to agree with it. At least he says he does. Which would mean Lowder disagrees with Barnes, and in fact concurs that Barnes has failed to address my actual argument, despite abundant handwaving. And indeed, Barnes has never correctly described my argument. And thus in fact has never rebutted it.

Lowder, being the most charitable fellow on the planet, decides (in practice, not in word) to give up on Barnes and instead try to make Barnes’s argument for him, since Barnes evidently can’t. So now we have something solely from Lowder, an argument Barnes could never intelligibly articulate. And this pertains to what I mentioned before: Barnes’s constantly blundered and failed attempts to argue I’m wrong to conclude that P(fine tuning|atheism) = 1. Note that this is actually not “my” conclusion. It is the conclusion of three mathematicians (including one astrophysicist) in two different studies converging on the same result independently of each other. I merely marshal a lot of analogies and arguments to explain and back it up. All of which Barnes ignores. (Barnes also ignores the original papers I’m summarizing.)

Barnes wants to get a different result by insisting the prior probability of observers is low—which means, because prior probabilities are always relative probabilities, that that probability is low without God, i.e. that it is on prior considerations far more likely that observers would exist if God exists than if He doesn't. It's unclear to me that Barnes actually realizes this…he does not appear as fluent with Bayes' Theorem as his love of equations suggests…but this is what he has to argue. Because the only way the prior probability of observers can be low is if the prior probability of observers is high on some alternative hypothesis. It can only be low with respect to an alternative. I never get any clear impression that Barnes understands this. So he is really, in fact, arguing for the existence of God. He is basically saying, "we are so amazingly unlikely, therefore God must exist." He might not realize that's what he's arguing. But it is. And this is the reasoning refuted by Sober, Ikeda, and Jefferys. Whose arguments Barnes never rebuts. [My wording in this and the next paragraph is atrocious. I was trying to refer to using, as the prior probability of the competing hypotheses, the posterior probability of a previous run of the equation on the sole evidence of there being observers, before adding the observation of fine tuning. I clarify here.]

This is of course moot, anyway, because, in line with Sober, Ikeda, and Jefferys, I show in my chapter that no such conclusion about the priors is at all possible. Barnes never interacts with my arguments on this point. The fact of the matter is we do not know that the prior probability of there being observers at all (within a universe) is higher on the God hypothesis than on the contrary. It does not logically follow from “God exists” that God would produce other observers at all, much less do so by making a finely tuned physical universe that produces said life by non-miraculous physics (much less such a life-hostile universe, a point with which Lowder agrees; see my articulation of this point in TEC, pp. 294-95). Whereas that we would observe ourselves in a finely tuned physical universe that produces life by non-miraculous physics if there happens to be no God is 100% certain. Because there is no other logically possible universe we could observe ourselves in (if of course we include techno-Gods in the category of “God,” as I already noted).

This remains the case even if the odds of a life-bearing universe forming without God are 10^1,000,000 to 1 against. Or any odds whatever. Because no matter what those odds are, the existence of observers is (so far as we know) just as likely on God as on not-God, and thus the probability of God simply remains the prior probability that such a God exists at all. Which, remember, I set at an over-generous 25% at the start of the chapter. Which entails a 75% chance that observers exist without God. We might then be the product of an amazing coincidence. But so would the existence of a God be an amazing coincidence. The balance is a wash. (The incalculable luckiness of God I've discussed elsewhere.) As I explained in note 31 (fully quoted above), we do not know anything whatever that can change this balance of odds. We simply do not know that our existence is any more remarkable on either hypothesis. And since all godless universes that will ever be observed will be fine tuned, fine tuning can never be evidence for God.
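
In numbers, using the chapter's own generous prior of 25% (the figure is the chapter's; the arithmetic below is just my worked example of the point): with the likelihood of the evidence equal on both hypotheses, the posterior simply is the prior.

```latex
P(\text{God} \mid \text{observers in a fine-tuned universe})
= \frac{0.25 \times 1}{0.25 \times 1 + 0.75 \times 1}
= 0.25 .
```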

This is explicitly the argument of my chapter in TEC, as I here quote from the main text on page 294:

This conclusion cannot rationally be denied: if only finely tuned universes can produce life, then if intelligent observers exist (and we can see they do), then the probability that their universe will be finely tuned will be 100 percent. Always. Regardless of whether a “finely tuned universe” is a product of chance, and regardless of how improbable a chance it is.[n. 23] Because “intelligent observers exist” entails we could never observe anything else. The only way the odds could ever be anything less than 100 percent is if you can have intelligent observers without a finely tuned universe (as then, and only then, it would at least be logically possible for there not to be a finely tuned universe if there are intelligent observers). But as it happens, you can only have that (a non-finely tuned universe with intelligent observers) in an intelligently designed universe. Ironic, yes. But true.

That's the fact of it. And Barnes has never produced a valid argument to the contrary. This is where note 23 in my chapter comes in (its placement in the text is shown in the quotation above):

This is undeniable: if only a finely tuned universe can produce life, then by definition P(FINELY TUNED UNIVERSE|INTELLIGENT OBSERVERS EXIST) = 1, because of (a) the logical fact that “if and only if A, then B” entails “if B, then A” (hence “if and only if a finely tuned universe, then intelligent observers” entails “if intelligent observers, then a finely tuned universe,” which is strict entailment, hence true regardless of how that fine-tuning came about; by analogy with “if and only if colors exist, then orange is a color” entails “if orange is a color, then colors exist”; note that this is not the fallacy of affirming the consequent because it properly derives from a biconditional)…

Notice how I explicitly refute the charge of “affirming the consequent,” even mentioning the fallacy by name, yet Barnes leveled that charge at me anyway, without ever once even mentioning that I already directly addressed it, and without ever interacting with my actual argument against that charge, or even describing my actual argument against that charge. That’s the kind of character we are dealing with here. Lowder needs to stop being charitable with this guy.

Note 23 directly continues:

…and because of (b) the fact in conditional probability that P(INTELLIGENT OBSERVERS EXIST) = 1 (the probability that we are mistaken about intelligent observers existing is zero, à la Descartes, therefore the probability that they exist is 100 percent) and P(A and B) = P(A|B) × P(B), and 1 × 1 = 1. [Christian apologist Robin] Collins concedes that if we include in b “everything we know about the world, including our existence,” then P(L|~God&A LIFE-BEARING UNIVERSE IS OBSERVED) = 100 percent (Collins, “The Teleological Argument,” 207).

[Collins] thus desperately needs to somehow “not count” such known facts. That’s irrational, and he ought to know it’s irrational. He tries anyway (e.g., 241–44), by putting “a life-bearing universe is observed” (his LPU) in e instead of b. But then b still contains “observers exist,” which still entails “a life-bearing universe exists,” and anything entailed by a 100 percent probability has itself a probability of 100 percent (as proven above). In other words, since the probability of observing ~LPU if ~LPU is zero (since if ~LPU, observers won’t exist), it can never be the case that P(LPU|~God.b) < 100 percent as Collins claims (on 207), because if the probability of ~LPU is zero the probability of LPU is 1 (being the converse), and b contains “observers exist,” which entails the probability of ~LPU is zero.

If (in even greater desperation) Collins tried putting “observers exist” in e, b would then contain the Cartesian fact “I think, therefore I am,” which then entails e. So we’re back at 100 percent again. If (in even greater desperation) Collins tried putting “I think, therefore I am” in e, his conclusion would only be true for people who aren’t observers (since b then contains no observers), and since the probability of there being people who aren’t observers is zero, his calculation would be irrelevant (it would be true only for people who don’t exist, i.e., any conclusion that is conditional on “there are no observers” is of no interest to observers).
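
To compress the note's point into symbols (my notation, not the chapter's): whenever the background knowledge b entails that observers exist, and observers entail a life-permitting universe (LPU), conditioning on b already fixes the probability at 1, no matter where Collins tries to file the facts:

```latex
b \models \text{observers exist}, \quad
\text{observers exist} \models \text{LPU}
\;\;\Longrightarrow\;\;
P(\text{LPU} \mid \lnot\text{God} \wedge b) = 1 ,
```

since P(A|X) = 1 whenever X entails A. Moving "LPU" or "observers exist" out of b and into e only relocates the entailment; it never removes it.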

This is pretty devastating. Barnes never mentions this argument and never responds to it. In fact he never rebuts any of my actual arguments. And this is no exception. Lowder has to concur. But he tries to help Barnes out as best he can:

I agree with his analysis, but — you knew there was a “but” coming — I think this misses the point, which seems to be a restatement of the anthropic principle dressed up in the formalism of probability notation. Yes, if we include “(embodied) intelligent observers exist” in our background knowledge (B), then it follows that a life-permitting universe (LPU) exists. But that isn’t very interesting. In one sense, this move simply pushes the problem back a step.

Note that Lowder is now ignoring my argument. Because I already showed what happens when you push it back a step. You end up with Cartesian existence in b. Which entails observers exist. We are back at 100%. And then I showed what happens when you push it back even another step, and remove even our knowledge of ourselves existing from b. You end up making statements about universes without observers in them. Which can never be observed.

There is no escaping this. Either you are making statements about universes that have a ZERO% chance of being observed (and therefore cannot be true of our universe), or you are making statements about universes that are 100% guaranteed to be observed. There is no third option. And this entails the conclusion. Because if a condition has a ZERO% chance of being observed, then it can never pertain to us. Because we observe we exist. So what "would be observed" if we didn't exist has no relevance to explaining our existence. Because nothing can ever be observed if we (observers) don't exist. And the complement of 0% is 100%. You always end up with the same 100%: we can never observe any other universe. Period.

Again, I mean not this exact precise universe, but a universe with the generic features alleged to indicate design, such as fine tuning. Because if no one will ever exist in a non-finely tuned universe so as to observe it—and they won’t—then we will only ever observe finely tuned universes. Therefore, fine tuning is always 100% expected for all observers. Period.

There simply is no escape. Except on the God hypothesis—but admitting that God (and indeed God alone) allows us to observe non-finely tuned universes crushes the hopes of creationists further, because that entails that fine tuning is less likely to be observed if God exists…which makes fine tuning evidence against God! So they can't go that route. And there is no other route to go.

The bottom line is, fine tuning can never be evidence for God. Never. Not ever. Not in any logically possible universe. Because all logically possible universes with observers in them but without gods in them will be fine tuned. All of them. Every last one.

This is what Barnes doesn’t get. And what Lowder is struggling to understand as well. It grates against his intuition. I know. But intuition sucks at things like this. Trust the logic. Your intuition was built for savanna apes. Not for existential probability calculus.

That Lowder doesn't understand the point yet is revealed by his closing statement. In response to my conclusion that there are statements that are "true only for people who don't exist" and therefore of no interest to us, Lowder says "Dr. Carrier doesn't speak for all observers. I'm an observer and find the question of interest." This is an odd thing to say. Because I was not saying it isn't interesting to think about. What I was saying is that it can never be relevant to us. Things that are true about universes that lack observers are of no relevance to us, because we don't live in one of those universes. We never can. And never would. So, yes, it might be amusing "to think about," but it still won't be relevant. Because we will never observe ourselves in one of those universes. So that they might exist is moot. The probability of observing them is still ZERO%. Therefore the probability of observing a finely tuned universe instead is still 100%. And that's why fine tuning can never be evidence for God.

Remember those people in the simulated universes that were randomly generated? They will never observe themselves in a non-finely-tuned universe. Because they logically never could. That is why fine tuning can never be evidence for intelligent design. It produces no likelihood ratio favoring it. And it never could. Not ever. That might be hard to comprehend. But don’t shoot the messenger. I’m just telling you how it is.
