In response to some recent queries, and being reminded of some things in an old slideshow of mine, which accompanied a talk I gave in Humboldt some years ago (it was supposed to be my opening for a debate with a creationist, but the creationist bailed as soon as they heard I’d be the one debating them), I realized I had content that needed to be published concerning three common mistakes creationists make. Creationism is practically dead these days. But it’s not all dead. It’s only mostly dead. Its bad ideas still have to be continually countered publicly so it can’t make headway into uninformed minds. It plays particularly on “common sense intuitions,” when one thing modern science has repeatedly proved is that “common sense intuition” is almost always wrong. This is a hard pill to swallow, and a hard lesson to learn, but it is crucial for any would-be rational mind, because most critical thinking skills involve discarding “common sense intuitions.”

Creationism vs. Science

My slideshow alone, though meant to be a discussion piece (not every point I would make from it is spelled out in the slides), is still a decent primer on why evolution by natural selection explains all life on Earth, and all the peculiar facts of it, far better than Creation Theory. Especially Young Earth Creationism, which is just silly; but also Old Earth Creationism, which is more what I’m interested in challenging, since YEC, like flat Earth theory, always ends up refuting itself. OEC at least admits to most of the established facts, and just tries to “theory its way” out of the consequences. YEC is just ignorant. Just like flat Earth theory. If your worldview is based on denying all the facts, you’ve simply chosen to divorce yourself from reality. If that’s your jam, you’ll never learn the truth about anything. But most people can’t live in fantasy-land their whole lives. Eventually they figure out YEC is based on lies and ignorance, and leave of their own accord. But OEC still remains to seduce them. Because its approach is more sophisticated and requires a higher level of understanding.

OEC takes a three-step approach: admit to almost all the facts (some sort of Big Bang started everything we see fourteen billion years ago, Earth formed over four billion years ago, life started simple and has been changing and building in complexity over billions of years, humans have been around for hundreds of thousands of years, and so on); propose a “common sense” interpretation of those facts in opposition to the actual scientific conclusions we’ve reached about them; then market this theory with Standard Apologetic Tactics, like building a straw man, leaving out evidence, and working backwards to validate your theory—starting with a conclusion and looking around for evidence to bolster it—rather than challenging your theory with falsification tests: that is, starting with a hypothesis (not a “conclusion”), honestly and competently trying to refute it, and only believing it after that attempt sufficiently fails. That is the actual scientific method, which we now know works better than any “apologetical” method, because we’ve seen centuries of its results by now. But while apologetics wants to look like science, it never operates on any of the proven methods of science; instead, it abandons them.

That’s because it has one goal: to get a predetermined result despite reality. On apologetical methods, and bad methods in general, I have written aplenty (see my articles on Critical Thinking). I have also written on why Naturalism far excels over Theism in explaining the facts of the world (see my articles on Atheism and my articles on Naturalism; but start with my centerpiece: Bayesian Counter-Apologetics: Ten Arguments for God Destroyed). And in numerous articles I’ve specifically covered Biogenesis and Fine-Tuning. I’ve also written specific articles on The Problem with Nothing and The Argument from Specified Complexity against Supernaturalism. Today I will supplement all these materials by filling three specific gaps in understanding I still often run into, starting with the notion that Design seems to meet the conditions of an Argument to the Best Explanation.

Isn’t Design Always the Best Explanation?

“What is wrong,” one might say, “with the hypothesis that, since things in nature that look designed often are designed, then we can also infer that we were designed?” Isn’t that a valid Argument to the Best Explanation? This is an example of “common sense intuition” being invoked against the results of more competent scientific methods. If watches are designed, why shouldn’t we conclude cells and bodies are designed? Aren’t they the same? Isn’t this a much simpler explanation than all the complex mechanisms and historical processes proposed by science?

Of course, there are problems with the simplicity argument. God is actually vastly complex, and his existence vastly improbable, and, in contrast to the mechanisms science has proposed, poorly evidenced, and thus not in fact a “simple” explanation of anything. Natural theories are simpler now. But the error I want to call out here today is the one of mistaking how the analogy works: thinking that “complex or intricate” is simply always more likely explained by “intelligent design.” This “intuition” is precisely what Darwin discovered to be false; and what has since been multiply confirmed to be false. Science didn’t just “make up” all the evidence it has collected in the more than a century since Darwin’s counter-intuitive discovery, confirming it. And you don’t get to ignore evidence.

The Argument to the Best Explanation (or ABE) is only valid if it is applied to all the pertinent evidence, not just a cherry-picked selection of it. In the most common articulation, the ABE says that a theory that meets more of five criteria is more likely, and the more strongly it meets those criteria than competing theories the more likely it is than they are (I discuss this, and its logical foundation in Bayesian theory, in Proving History, pp. 100-03). Those criteria are:

  1. Plausibility given existing background knowledge.
  2. Simplicity in requiring fewer ad hoc suppositions.
  3. Explanatory Power in making all existing evidence more expected.
  4. Explanatory Fitness in being contradicted by fewer observations.
  5. Explanatory Scope in explaining more evidence than competing theories do.

Theism loses on all five, and by a lot. It’s implausible (no existing evidence establishes such causal powers exist; whereas vast evidence establishes that all proposed natural causes commonly exist). It’s not simple (God has extraordinary specified complexity without any plausible explanation of why such an extraordinary entity would exist). It has dismal explanatory power and explanatory fitness (almost nothing we observe is really what is expected on that theory, but quite the contrary; so not only are tons of our observations poorly predicted by theism compared to naturalism, but a ton of them even explicitly contradict it—yet those same observations are entirely predicted by naturalism). And it lacks explanatory scope (you have to leave out a lot of evidence to get theism to fit anything; whereas naturalism explains all that other evidence, and thus has broader scope than theism). Attempts to get around this utter failure of theism as an explanatory framework always involve inventing excuses to explain away its failures, making it even more ad hoc, which actually reduces rather than increases its probability (an effect that also runs counter to our intuition, yet once again our intuition is wrong here, as can be mathematically proven).
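The Bayesian logic underlying these criteria can be sketched as a toy calculation (all numbers below are illustrative assumptions, not measured values): each piece of evidence contributes a likelihood ratio, and those ratios multiply into the final odds, so even modest per-item advantages compound quickly.

```python
# Toy Bayesian comparison of two hypotheses, H1 (naturalism) vs. H2 (theism).
# All numbers are illustrative placeholders, not measured values.

def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each evidence item's likelihood ratio
    P(E|H1)/P(E|H2) to get the posterior odds of H1 over H2."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Suppose each of five independent lines of evidence is merely
# twice as expected on naturalism as on theism:
odds = posterior_odds(prior_odds=1.0, likelihood_ratios=[2, 2, 2, 2, 2])
print(odds)  # 32.0: five modest 2-to-1 advantages compound to 32-to-1
```

This is only a sketch of how the five criteria cash out as odds; the real argument requires defending each ratio from the evidence.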

As with most apologetics, when you put the evidence back in that it left out, the conclusions flip the other way around. In actual fact, blind, unintelligent ordering forces are commonplace (from star formation to crystallization to the hydrological cycle; even most physical laws, like Inverse Square Laws and the Laws of Thermodynamics). And what Darwin discovered, and which has since been proved with a massive and diverse array of evidence that he didn’t even know about (but did effectively predict would be discovered, which is how we know he was generally right, even if some of his more particular rather than general ideas have turned out slightly differently), is that there is a blind, unintelligent ordering force behind the organization of life: natural selection. We have since discovered similar forces at the cosmological level: most of what were once thought to be unexplained brute facts, “fundamental constants,” have turned out to be inevitable consequences of blind ordering forces, like the boiling point of water, or the location of life on Earth rather than, say, Pluto.

By contrast, none of the evidence we have confirming that life today was designed by the blind ordering forces of natural selection exists for watches or computers or buildings. There is no inevitably replicating DNA inside them (or anything the like); they don’t sit at the end of processes of change running for millions or billions of years from simpler to more complex forms (there are no fossils of primordial primitive watches and cars predating humans and increasing in complexity over eons); and they are not arranged indifferently to human interests (we had to breed domesticated animals and plants; likewise watches and buildings have no causal explanation but to serve human needs—they are not built simply to reproduce and survive and thus maximize differential reproductive success).

Consider that life started as a single, simple cell, then developed from simpler formats into more complex over millions of years (from a PNA world, to an RNA world, to DNA, to cellular machinery). Only then did life slowly become cooperating colonies of these cells. And all life today arose only after billions of years of slow, meandering development. It was a billion years before multicellular cooperation; another billion years before that could produce differentiated tissues; and millions of years more before that could be scrambled into bodies; and millions of years more before those were shaken out into the most adaptable forms; and millions of years more of meandering experimentation with those before humans finally arose. And humans aren’t even the first sentient intelligences—several species predating us had already invented culture, technology, and religion. This is all exactly what we expect to observe if natural selection, and not intelligent design, produced all present life. Theism explains all this poorly (and only with accumulated ad hoc additions to the theory).

My slideshow goes through all these details. But my point is, an Argument to the Best Explanation means best explanation, not “easiest to imagine” or “simplest to write down on paper.” The theory has to explain all the evidence; it has to make all of it more likely; it has to contradict less of it; and it has to do all this with fewer ad hoc suppositions; and on a basis of more well-established background examples, by appealing to forces and facts we already know exist—like all the chemistry, physics, and mathematics of probability that natural selection is built on; and unlike “invisible monsters and magic,” for which there is no establishing evidence, even for their existence, much less for what they are like and thus what they would ever likely do.

Thus the Argument to the Best Explanation actually gets us to life being “designed” by natural selection, a blind ordering force, not by “an intelligence.” Indeed it gets us there in even more ways than I just described. For instance, from a design perspective, human brains make no sense on theism, but they are literally the only way we could exist without theism, and are entirely predicted in every particular by evolution. Yet evolution by natural selection requires vast numbers of reproductions, mutations, and years. Gods do not. So finding out that our life and existence required those things is evidence for evolution, not God. Evolution explains more facts. It makes them more likely. It requires no ad hoc suppositions, nor forces or entities not already proved to exist. And it is contradicted by no observations. All unlike “intelligent design.”

The same conclusion follows even for biogenesis. There, it is not natural selection, but just chemistry and the infinite monkey theorem, that blindly generated the first life. This theory requires a vast universe (trillions upon trillions of planets mixing chemicals and environments at random) and vast time (running random mixes over and over again for eons) and the underlying components (biochemistry and astrophysics), and predicts extremely infrequent successes (the universe will remain almost completely uninhabited, indeed outright lethal to life; and life will appear extremely rarely, in both time and space). So finding out that our life and existence required those things is evidence for natural biogenesis, not God. It explains more facts. It makes them more likely. It requires no ad hoc suppositions, or forces or entities not already proved to exist. And it is contradicted by no observations. Unlike “intelligent design.”

Objecting that we don’t know yet which process exactly got our life started (of the many we know could; and no doubt many we haven’t thought of yet) has no effect on this application of the Argument to the Best Explanation, because we know even less how or why any god would exist and do this—at all, much less in this one peculiar way that just happens to be the only way it could happen without a god. So theism still scores more poorly even on that metric. As ad hoc as origin of life theories might be, theism is even more ad hoc than that, not less.

In the end, what creationists will stumble on is the fact that, for example, modern intracellular communication and machinery is mind-bogglingly complex and sophisticated, so how could that have just happened by chance? But they are making two mistakes here. They are confusing the first life and its early descendants with a modern life-form, which has actually evolved for four billion years—in fact, because single-celled organisms replicate and mutate far faster than human or animal life, it has undergone over three billion times more evolution than humans have. Thus we should not be surprised modern cells are as complex and sophisticated as human bodies. Indeed, after subtracting the cells themselves from consideration (so we aren’t counting them twice), we should expect them to be vastly more complex and sophisticated than human bodies are by themselves. Hands. Eyes. Brains. A doddle in evolutionary time compared to what cells themselves have undergone.

Conflating “the first life” with modern “intracellular communication and machinery” is thus a common error of creationists. The latter is highly evolved. What got it all started will have been vastly simpler than that. Modern cells have evolved for billions of years. They are the most evolved machines on Earth. The first life (whether arising in a simple naturally-occurring cell or not) will not have been as impressive; and thus it will have been easier to hit upon by chance accident. And when you observe that these dice have been rolled countless trillions of times across an expanse of billions of years and lightyears, the fact that there will be the occasional extremely rare success is expected, not weird. To the contrary, it would be weird if it didn’t inevitably happen given those scales.

When we correct this error, and the one of not including all the evidence when applying the Argument to the Best Explanation, we don’t get intelligent design, but blind ordering forces as the most plausible explanation of life, and thus our existence, by far.

Isn’t Design at Least Still a Simpler Explanation of Fine Tuning?

This one comes up a lot because, as even Christopher Hitchens explained, the Fine Tuning Argument is, really, the only argument for God Christians even have. Everything else is so obviously illogical, or can be immediately disproved with established facts and science. Hitchens noted it’s still easily refuted, but you still have to think about it; it requires study. The coincidence of fine-tuning is a harder problem than explaining the complexity of life because, unlike the complexity of life, we can’t even establish that fine-tuning exists. We have only one example (just the one universe we’ve observed so far), and we don’t know why the constants are as they are—so we can’t determine what other values they even can have, much less which values are more likely, or what events have to transpire to make them what they are (and it is really the probability of those events we need to get at).

For instance, it was once thought the boiling point of water was a finely tuned fundamental constant; but now we know it is an inevitable outcome of more fundamental constants. Its probability turned out to be 100%; hence not even low, much less amazingly so. No intelligent design was thus required. Those more fundamental constants now include things like the mass of the up-quark. But it is not likely that that is just a magically assigned value. Like the boiling point of water, all past cases tell us it is most probable that the mass of the up-quark is determined by something else more fundamental. Indeed its probability is again probably 100%, given those other things—whatever they are; we just don’t know what they are yet. Since we don’t know, we can’t really say “it’s improbable.” That would be claiming to know something we don’t, a textbook fallacy of Argument from Ignorance.

As another example, consider the fact that we now know the Speed of Light and the Gravitational and Planck constants don’t actually exist. It turns out it is not possible to have any universe with different values for them. This is because all they turn out to be are unit converters, from arbitrary human units (feet, seconds) to absolute natural units (the smallest physically meaningful units of length and time). These constants (in reduced form) all equal 1 when using those natural units, and thus vanish from all equations; so any change to those natural units always ends up making 1 again. For example, c (the speed of light) is always “minimal length over minimal time,” because, by definition, you can’t move faster than that (functionally, smaller divisions of time or space literally don’t exist). So no matter what the size is of either unit, the fastest you can always go is one unit per one unit. Hence if you doubled the natural unit of space, we would never notice, because the minimal unit is always the minimal unit. It’s like if you doubled all lengths: well, your ruler would also double, so it would still measure an inch as an inch. You’d never even know anything changed. Likewise with time.
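The “unit converter” point can be written out explicitly with the standard textbook definitions of the Planck units:

```latex
% The Planck length and Planck time are built from c, G, and \hbar:
\ell_P = \sqrt{\frac{\hbar G}{c^3}}, \qquad t_P = \sqrt{\frac{\hbar G}{c^5}}
% Dividing one by the other, everything cancels except the speed of light:
\frac{\ell_P}{t_P} = \sqrt{\frac{\hbar G}{c^3} \cdot \frac{c^5}{\hbar G}} = \sqrt{c^2} = c
% So measured in Planck lengths per Planck time, c = 1 identically:
% there is no independent dial to turn.
```

Rescale the natural units however you like and the same cancellation happens, which is why these “constants” vanish from the equations.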

Are there any truly fundamental constants then? What could be different? I already mentioned, with the up-quark, the problem with determining this for the masses (and other properties) of what we so far think might be fundamental particles (all the underived properties of the Standard Model). But another candidate is the alpha constant, which ultimately determines the strength of the electromagnetic force, and is around 1/137. So every time you hear an apologist claim “the relative strength of the gravitational and electromagnetic forces is finely tuned to make life possible” (owing to the purported effects of any different ratio on star and chemical formation), what they really can only be referring to is changes in the alpha constant. Because as just noted, there is no real gravitational “constant” to change.

But how do we know alpha is not like gravity? Maybe it, too, derives in some more fundamental way from natural units, or some other geometry of spacetime, such that it, too, could never be different? Or what if changing it actually changes gravity, because we don’t really know what it is or what it affects or what determines its value? Since it is proportional to the smallest electron orbit (for example, as measured by that electron’s orbital velocity in ratio to the speed of light), it certainly would appear to be determined by some fact of geometry. It might indeed be the case that it couldn’t change, or changing it would change something else, producing no net difference in effect. Or even if changing it did have an observable effect, we still can’t actually predict what that effect would be, because we don’t know what else that would change, or require changing.

These problems multiply. Since any exact ratio of any two changeable constants produces a universe that looks identical to ours (since it is only a deviation in the ratio that would produce a different outcome), it follows that there are infinitely many conjunctions of these values that produce the same ratio and hence the same universe. If you multiply both by ten, same ratio; by a million, same ratio; and so on, forever. So there are infinitely many possible conjunctions of fundamental constants that produce universes that look exactly like ours. Yet alas, we have no sure way to calculate the odds of an infinitely common outcome amidst another infinitely common outcome. So we can’t even say such a conjunction is unlikely. If you reach into a hat with infinite possibilities, and infinitely many of those possibilities are life-bearing universes like ours, how unlikely is it that you’ll pull out one of those?
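The ratio point is elementary arithmetic, as a quick sketch shows (the values here are illustrative placeholders, not real measured constants):

```python
from fractions import Fraction

# Two hypothetical "constants"; only their ratio is physically meaningful.
# (Placeholder values chosen purely for illustration.)
a = Fraction(1, 137)      # stand-in electromagnetic strength
b = Fraction(1, 10**38)   # stand-in gravitational strength
ratio = a / b

# Scale both by any common factor k: the ratio, and hence the universe
# it would define, never changes.
for k in (10, 10**6, 10**100):
    assert (a * k) / (b * k) == ratio

# So infinitely many pairs (k*a, k*b) yield one and the same ratio.
print(ratio)  # the single shared ratio, whatever k we pick
```

Exact rational arithmetic (`Fraction`) is used so the equality checks are not muddied by floating-point rounding.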

We literally do not know how to answer this question. In well-defined cases we could do it with sampling (randomly sample a finite space of possibilities and calculate the expected finite frequency in that space), but this is not a well-defined case. How many forces or particles could there be? What about universes with eight interacting forces; or ninety? Or only three? Or even more kinds of particle than we know? Or fewer? Or completely different ones, maybe vastly lighter or vastly more massive? What about all the infinite different combinations of these? What about different distributions of them? Maybe some values or conjunctions are more frequent than others. The possibilities, conceptually, are not just infinite, but beyond our comprehension. The probabilities thus become incalculable.

And then there is the problem that nearly all successful, scientific, peer-reviewed cosmological theories to date entail a multiverse. Vilenkin’s eternal inflation theory, Linde’s chaotic inflation theory, Penrose’s conformal cyclic theory, Hawking’s no-boundary theory, most black hole cosmologies, pretty much any quantum fluctuation theory (where, for example, a new randomizing Big Bang is always statistically inevitable on any unending timeline, repeating without end), and so on. They all entail multiverses. Which solves any fine-tuning problem automatically. As noted, all the evidence we have indicates the selection was indeed random. So these theories are actually rendered more likely by that evidence (see Six Arguments That a Multiverse Is More Probable Than a God and The Problem with Nothing). It is actually a remarkable coincidence that almost all the cosmological theories that fit the complex observations we’ve made to date also end up predicting there will be boundless numbers of universes. What are the odds that that would be the result, when this result just happens to explain fine tuning as well?

Finally, all the fundamental constants could be entailed by one single parameter: the number of open and closed dimensions of spacetime. For example, Superstring theory explains (as in, makes 100% inevitable) all known fundamental constants to date, by positing that everything that exists is composed of vibrational states (“strings”) within a spacetime comprised of our familiar three open dimensions of space and one of time, and a number of compacted dimensions (which curve back around on themselves at a scale too small for us to notice, called a Calabi-Yau manifold). Once you set that parameter to a certain value, all known physical constants become exactly what we observe them to be. So, what if that’s really all there is to it?

This isn’t a claim that this is what there is; rather, this is just a hypothesis every bit as speculative as “God did it.” Except, it’s less so, because we know the entities in question exist (curved dimensions, ripples in spacetime) and indeed fundamentally exist (see The Argument to the Ontological Whatsit and Superstring Theory as Metaphysical Atheism). This would mean there is only one fundamental constant: the Calabi-Yau ratio, the number of macrodimensions to microdimensions that our local universe is composed of. Which may simply be a random product of chaotic inflation, or any other plausible cosmology, and thus simply a randomized number across an infinite multiverse; and then we just observe ourselves in this one because it’s the only kind of Calabi-Yau space that would produce observers in the first place. Sure, we have not proved this to be the case. It’s total speculation. But neither have we proved it not to be the case. We cannot claim it is or is not the case. And yet by positing nothing not already known to exist (just dimensions and vibrations thereof), it has a higher plausibility than any God theory can claim (for which we have established none of the required ontologies).

So, we don’t really have the knowledge we need to answer questions like, “How probable are the observed constants?” I’ve covered all this before (see Barnes Still Not Listening on the Bayesian Analysis of Fine Tuning Arguments). Once again, “common sense intuition” is actually overturned by logical reality here, because an intelligently designed universe wouldn’t need to be finely tuned; whereas among undesigned worlds, only finely-tuned ones will generate observers and thus ever be observed. Which means fine-tuning is actually evidence against the existence of God (see Why the Fine Tuning Argument Proves God Does Not Exist).

In other words, fine tuning could have arisen by design or by chance. Yet all the evidence we observe is evidence we expect if it was chance; none that we expect if it was design (again, see Bayesian Counter-Apologetics). So, just like evolution, what we see actually supports the chance hypothesis, regardless of how improbable you think chance to be. And God already starts out more improbable, so the relative improbability of chance tuning does not favor God either (see A Hidden Fallacy in the Fine Tuning Argument). After all, what’s more likely? A finely tuned invisible monster that exists for no reason; or a finely tuned accident (of either one or infinite randomly manifest worlds) that exists for no reason? The evidence all points to the latter.

Once again, once we correct the creationist’s errors of intuition, we realize that all the evidence we have is what we expect if fine tuning arose by chance accident rather than design, and that they are actually leaning on a just as impossibly improbable lucky accident (the unexplained existence of a remarkably convenient God), in fact one even more impossibly improbable (because unlike multiverses or Calabi-Yau manifolds, gods require positing things we can’t even establish to be possible, much less actual). When we then apply the Argument to the Best Explanation to all the evidence, we don’t get intelligent design, but blind ordering forces as the most plausible explanation of even the “fine tuning” of the universe itself. But all this requires knowing the physics and the math. And apologists often play on the ignorance of their marks; they are counting on you not knowing the physics and math. But when you ask actual scientists, few are impressed by the fine-tuning argument, for all the reasons I just related (see SkyDivePhil’s video Physicists & Philosophers critique The Fine Tuning Argument and its follow-up Fine Tuned Universe: The Critics Strike Back).

Can They Really Be Wrong about the Math?

Creationists have their own mathematicians, of course, ranging from Donald E. Johnson to William Dembski, who try to dazzle an uninformed public with probability calculations. But the public rarely understands what they are doing or on what shaky premises it rests. Math looks great, but if it’s predicated on bogus premises, it will be wrong no matter how “correct the math” is. I’ve noted this before (see Crank Bayesians: Swinburne & Unwin and Crank Bayesianism: William Lane Craig Edition). Many of course also get the math wrong. I’ve published on both points under peer-review before (“The Argument from Biogenesis,” Biology and Philosophy 2004). But when competent mathematicians are involved, it’s at the stage of bogus factual premises where the mistake usually happens.

Dembski, for example, isn’t wrong on the math; he’s just wrong on the scientific facts. He calculates that structures with a coincidental probability as low as 1 in 10^150 can arise by chance on the scale of observed cosmic scope and time. He’s right. Alas, we know of self-replicating molecules whose odds against chance assembly are far better than 1 in 10^150, on the order of only 1 in 10^50. So he is out of luck arguing biogenesis isn’t a chance accident. On Dembski’s own math, life could have arisen by chance accident more than 10^100 times in this universe already, and that’s just given the observed universe—the actual universe is vastly larger than we can see. So no hypothesis of design is needed. I suspect Dembski knows this, which is why he avoids the biogenesis argument like the plague. He tries to use his math to argue against evolution instead. But what he does then is rely on a scientific fraud: Michael Behe.
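The arithmetic here is easy to check (the 1 in 10^50 figure is the order-of-magnitude estimate discussed above, used purely illustratively):

```python
# Dembski's universal probability bound: events with odds down to
# 1 in 10**150 are within reach of chance in the observable universe.
dembski_bound = 10**150

# Approximate odds against chance assembly of a known self-replicating
# molecule (illustrative order of magnitude, per the discussion above).
assembly_odds = 10**50

# How many times over does such an assembly fit within the bound?
chances = dembski_bound // assembly_odds
print(chances == 10**100)  # True: room for ~10**100 chance biogenesis events
```

Exact integer division is used so the exponent arithmetic is literal, not approximate.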

What Dembski does is claim there are structures within modern cells that must have arisen by random chance exceeding those threshold odds he calculated (of 1 in 10^150, which he correctly translates to about 500 bits of spontaneously-arising information), trusting Behe’s claim to have found this (which he calls “irreducible complexity”) in some cellular structures. But Behe has never produced any actual scientific evidence for this conclusion (see my slideshow again for the science he didn’t do); and it has been decisively refuted. No such structures exist. Behe was trying to pass off provably evolved structures as spontaneous structures. But none of those structures required the 500-bit leaps that Dembski’s math requires; they can be accomplished with staged leaps of far less than 50 bits, over eons of time, which is all well within random chance. (You can learn all about this, by the way, along with expert refutations of every other bogus creationist argument, at the Talk Origins Archive.)
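The gap between one 500-bit leap and staged sub-50-bit steps is easy to underestimate; a back-of-the-envelope comparison (assuming, purely illustratively, ten stages of 50 bits each) makes it vivid:

```python
# Expected number of trials to hit a B-bit target by pure chance is ~2**B.
single_leap = 2**500       # one spontaneous 500-bit structure

# Ten successive 50-bit stages, with selection locking in each stage,
# cost roughly the SUM of the stage costs, not their product.
# (Ten stages of 50 bits is an illustrative assumption.)
staged = 10 * 2**50

# The staged path is cheaper by a factor of 2**450 / 10:
advantage = single_leap // staged
print(advantage > 10**130)  # True: an astronomically large saving
```

The key design fact is that selection converts a multiplicative search problem into an additive one, which is what makes cumulative evolution tractable where spontaneous assembly is not.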

So a third common error of creationists is to dazzle people with correct math, but from bogus premises. You have to have the premises solid and established before any math can save you. Fine tuning arguments will often commit this same mistake, starting with premises about “the probability” of certain conjunctions, or even their possibility, that actually are false; and when they are replaced with true premises, the same math either gets different results (for example, the odds of life arising are the odds of its arising anywhere in the universe at any time, not the odds of it arising on one pre-determined planet at one pre-determined time; likewise the odds of human intelligence arising are the odds of any comparable intelligence arising, not the odds of it specifically being in bipedal apes), or else it renders results impossible altogether (like dividing infinity by infinity, or dividing by unknown variables).
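The “anywhere in the universe, at any time” correction is the standard at-least-once calculation; here is a sketch using an arbitrary illustrative per-planet probability and planet count:

```python
import math

def p_at_least_once(p, n):
    """Chance of at least one success in n independent trials:
    1 - (1 - p)**n, computed stably for tiny p and huge n."""
    return -math.expm1(n * math.log1p(-p))

p = 1e-20    # hypothetical per-planet chance of biogenesis (illustrative)
n = 10**22   # rough planet count in the observable universe (illustrative)

# On one pre-specified planet the odds stay hopeless...
print(p_at_least_once(p, 1))   # ~1e-20
# ...but somewhere among all planets, success is near certain:
print(p_at_least_once(p, n))   # ~1.0
```

Using `log1p`/`expm1` avoids the floating-point underflow that naive `(1 - p)**n` would suffer at these scales.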

Conclusion

It doesn’t matter how correct your math is, you have to be working from true premises. And to apply the Argument to the Best Explanation, to either fine tuning or biogenesis or evolution (and those are three different things, which you must not confuse together), you have to apply it to all the evidence and the entirety of your hypothesis, not just to select, cherry-picked pieces of it. You have to calculate the probability of your alleged originator—what are the odds on there even being a God? You have to calculate the total odds (after all, the universe is vast and old and filled with an ungodly number of worlds, and we just need one chance spark in all that to explain our being here). And you have to account for all the ancillary evidence, too, such as: Does this universe have features that are expected if it arose by chance accident, and that are not as expected on intelligent design? Indeed, does our universe in fact look exactly like a universe would have to look if it had no engineer? And if the answer is yes (and it is), why would an engineer make it like that? Isn’t the simpler hypothesis at that point that it simply is what it looks like? Well, yes.

Natural-cause theories are more plausible, because they are built out of things we know exist, unlike intelligent design theories. Natural-cause theories are simpler, because they require fewer things just “made up out of whole cloth” to get all the required predictions of observations. Natural-cause theories have more explanatory power, because they make what we observe highly probable, indeed it’s exactly what we should expect to observe if those theories are true. Natural-cause theories have more explanatory fitness, because they aren’t contradicted by any evidence, unlike intelligent design theories, which are contradicted by an abundance of observations. And natural-cause theories have more explanatory scope, because they explain so many more observations, including really random and weird ones, than any intelligent design theory does. After all, natural cosmology predicts such oddities as the existence (and strange ubiquity) of black holes, and the temperature and structure of cosmic microwave background radiation; natural biochemistry predicts all life would be descended from oxygen-intolerant prokaryotic cells; natural evolution predicted there would be whole regimes of extinct species in distant eras; and so on. “Intelligent design” theory makes few effective predictions and explains few unexpected oddities. This is why it has never passed peer review in any pertinent scientific field, whereas countless natural-cause theories, of both life and the properties of the universe, have.
