A lot of what happens in the universe is caused by the Laws of Thermodynamics. Christians often wonder where these laws come from. What explains them? Well, the interesting thing is that nothing explains them. And I mean that in the literal sense: the absence of anything to interfere with the order of things causes all the laws of thermodynamics to be what they are (at certain scales). This means that all godless universes will appear to obey these laws—they are a logically necessary outcome of complete randomness. So every random chaos will always obey these laws—it is a logically necessary fact. To be clear, that is only a conditional statement. It is not logically necessary that every possible universe will obey them, because one way to negate them is to add some sort of intelligence or countermanding law that forces something else to happen. So the significance of these laws’ emergence from randomness is that their existence proves nothing exists to force a different outcome. Which does not bode well for any desire to argue there is such a force (like, say, God).

This also means that the fundamental randomness of quantum mechanics (and whether it really is fundamental or not won’t matter to the point) causes the laws of thermodynamics to operate at scale. The reason these laws do not apply (are “violated”) at quantum scales is that at that scale there isn’t enough aggregation of random events to produce them. Since those laws are only produced by a massive aggregation of random events, the theory that they are an inevitable product of randomness predicts that they will not hold at smaller scales, where sufficient aggregation is not occurring. So the observation that they don’t confirms the theory that this is where those laws come from. And indeed averaged over time (which is back to a large scale again) they still do conform to these laws. They therefore require no further explanation (like “God did it” or “they are logically necessary at all scales”).

Think of this like rolling dice. A long string of 6s may seem a defiance of randomness until you zoom out and see the entire history of that die being rolled and then realize it’s still just a sea of random results. Zoom in here or there and you’ll see what look like violations of random order, but that’s only because you are looking at a small scale; the “law” (that dice roll random numbers) only governs (only describes) the large scale. Because it’s a statement about the overall system, not individual events. Yes, we can say the odds of one individual roll turning up a 6 will be 1 in 6. But that’s still 1 in 6, not 100%. The law does not predict it won’t be a 6. It only predicts it will be less likely to. But “less likely” only has semantic meaning across a large number of rolls (actual or potential). It is a comparative statement. It is saying “a 6 is rare in the context of a large number of rolls.” It is not saying “you won’t roll a 6.”

More to the point, saying the probability is 1 in 6 amounts to saying that as the quantity of rolls increases, the number of 6’s rolled will approach a sixth of the total; and that the average of all rolls will approach 3.5. But there will still be points in time where these numbers will be above that target, then below it, and so on, stochastically all the way along. The statement that “this die rolls a 6 a sixth of the time” does not mean exactly one sixth of all rolls at all times will be a 6, or that the average of all rolls will always be exactly 3.5. There will be random variation around those targets. And the results will only approach those values over time. Like Global Warming: that does not say every single day will be hotter than ever; there will be some unusually hot and some unusually cold days, and hotter and colder days; but on a long measure, the trend keeps going up, even as randomness dances around that trendline.
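The convergence described here is just the law of large numbers, and you can watch it happen with a few lines of Python (a purely illustrative simulation; the function name and sample sizes are my own):

```python
import random

def dice_trend(n_rolls, seed=42):
    """Roll a fair die n_rolls times; return (mean of rolls, fraction of 6s)."""
    rng = random.Random(seed)
    rolls = [rng.randint(1, 6) for _ in range(n_rolls)]
    return sum(rolls) / n_rolls, rolls.count(6) / n_rolls

# Small samples wander; large samples hug the targets (3.5 and 1/6, about 0.167).
for n in (10, 1_000, 100_000):
    mean, frac6 = dice_trend(n)
    print(f"{n:>7} rolls: mean = {mean:.3f}, fraction of 6s = {frac6:.3f}")
```

Run it and the 10-roll line will typically sit well off the targets while the 100,000-roll line sits nearly on top of them: random variation dancing around the trendline, exactly as described above.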

The Laws of Thermodynamics are, essentially, just a description of that trendline and not absolute “magical” laws that always force the outcomes they predict without fail.

The Laws of Thermodynamics

The laws of thermodynamics have technical scientific-mathematical definitions. And indeed what I shall be arguing today derives from the even-more-technical field of Quantum Thermodynamics. But in very colloquial terms:

  • The Zeroth Law says: If two systems are both in thermal equilibrium with a third system, then they are in thermal equilibrium with each other. (So, a system cannot be hotter or colder than itself or have more or less entropy than itself, and so on.)
  • The First Law says: Energy can neither be created nor destroyed, only altered in form.
  • The Second Law says: The energy in a closed system becomes more disordered over time. (Or as more commonly heard, “Entropy increases over time.”)
  • The Third Law says: The entropy in a closed system approaches zero as its temperature approaches (absolute) zero.

The Zeroth and Third laws are more or less definitions. In order for two systems in equilibrium to be described as in equilibrium with a third system, there cannot be any difference in, for example, the overall temperature or entropy between them. Colloquially, that’s just what equilibrium means: an overall evenly distributed state. The physical effect is that these systems, placed in contact with each other, will have no net effect on each other in terms of energy distribution. This is a product of statistics, as we’ll see, because it really is just a consequence of the First and Second laws. Likewise, the Third law simply identifies the minimum possible entropy with the minimum possible temperature, which is a tautology owing to a semantic fact about entropy: since entropy is a measure of how many possible states of a system are functionally the same, and a system at absolute zero has only one possible state, that is by definition the smallest possible entropy.

There are lots of technical asides one could add here (Wikipedia has whole articles on each Law), and one can nitpick all the semantics of my description. But I am deliberately being colloquial, oversimplifying, and leaving details out, because they don’t affect my overall point, and my goal is to help everyone understand that point. The gist is that “smallest temperature” logically entails “smallest entropy” owing to the definition of entropy; and the definition of “equilibrium” entails that systems in equilibrium share the same overall state, which entails (in conjunction with the First and Second laws) that systems in the same entropy state, when combined, will remain in the same entropy state (their individual states simply become the combined system state).

Which means really all we need to explain are the First and Second laws.

The Second Law of Thermodynamics

The Second law is the easiest to explain—because it was proved to be the inevitable outcome of randomly-operating systems in the 19th century through the science of Statistical Mechanics (and in particular Statistical Thermodynamics). I am of course not here talking about bogus Christian versions of this “Law of Entropy,” where they confuse “entropy” with “order,” and “total system” with “locally within a system,” and “closed system” with “open system,” and so on (for examples of these scientifically illiterate confusions causing error, see Justin Brierley on the Science of Existence and Psychology Today: Lame Shill for Medieval Godist Dribble). The actual law only entails that the energy across an entire (closed) system tends to become more disordered over time. This doesn’t prevent local spontaneous ordering, as long as the process causing it leaves the whole system more disordered overall. For example, crystals form naturally (local ordering) but only by radiating heat (an increase in disorder), such that when all is said and done, while order has increased locally, across the entire system there is more disorder than there was before the crystal formed.

What was proved in the 19th century is that this follows necessarily from the First law (which we’ll get to next) and the inevitable outcome of randomization. Think of a deck of cards. Every time you shuffle the deck, it becomes more disordered. Not because of any special law of physics, but simply because that’s what happens when you start randomly moving things around. Without any intelligent intervention (like a card shark) or ordering force (like heavier cards tending toward the bottom of the deck in a gravitational field), this is simply what happens. Randomizing does that. There is no further explanation needed. And as there are only three possible causes of any system’s change of state (intelligent intervention, other ordering forces, and random chance), absent intelligence and other countermanding forces, shuffled decks simply end up randomly ordered.
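You can watch a deck lose its order under nothing but random swaps, with no law of physics required. Here is a minimal Python sketch (the “order score” metric is my own invention for illustration):

```python
import random

def order_score(deck):
    """Count adjacent pairs still in consecutive order: a crude measure of order.
    A factory-fresh 52-card deck scores 51; a fully random one averages about 1."""
    return sum(1 for a, b in zip(deck, deck[1:]) if b == a + 1)

rng = random.Random(0)
deck = list(range(52))                      # factory order
for round_number in range(6):
    print(f"after {round_number} rounds of swaps: order score = {order_score(deck)}")
    for _ in range(52):                     # one round = 52 blind transpositions
        i, j = rng.randrange(52), rng.randrange(52)
        deck[i], deck[j] = deck[j], deck[i]
```

Nothing in the code pushes the deck toward disorder; disorder is simply where random swaps almost always lead, which is the whole point.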

This is true of atoms in a container, or anything whatever: if energy is being mixed around in a system at random, it inevitably becomes more disordered, simply as an inevitable consequence of randomization. Even when there are ordering forces (card sharks; weighted decks; crystallization), they are never perfect (nor can they ever be better than perfect), so some accidents here and there keep the randomization going, even if slowing it down. Even in the most locally efficient machine in the universe, like an ideal heater (where “heat” is the goal and thus even “waste heat” counts as “work”), you still get leaks, like some energy spent on light (photons) and sound (vibrations), that eventually gets frittered away into the environment, and thus that energy gets randomized; so you couldn’t even recapture all that heat to generate electricity all over again to keep that heater running forever. There is always a loss budget, as, inevitably, creeping randomization disorders the energy flowing through the system, dissipating it across the environment.

No further explanation for this so-called “entropy law” is needed. So really, there is no “law” of physics here. The Second law is simply describing what happens to any system of moving parts (the indivisible quanta of energy, whatever they are) in the absence of any laws. Take away all laws of physics, and this is simply what happens. This has important consequences. First, the Second law isn’t even deterministic really. It is probabilistic. There is no “law” that entails systems are “always” more disordered from one moment of time to the next. That is simply what we observe because of probability. It is actually entirely possible for a system’s entropy to spontaneously decrease—by sheer chance accident. It’s just improbable that it will. Not impossible. And even then, as the die keeps getting rolled, any such anomaly is unlikely to last long enough to matter anyway. If the trendline is up, then a rare cold summer day will get washed out by a greater number of brutal heat waves, and the average will just keep going up.

Think of the die rolled in my previous example: over time, yes, the rolls of the die will accumulate into a sequence more and more random. But that doesn’t prevent sudden long runs of 6’s (spontaneous order). That totally could happen. And in fact, we can calculate a rate at which it will inevitably happen. It’s just that if you keep rolling, those runs will revert back, and the sum randomness of all the die’s rolls will tend to increase anyway. And since the “systems” we are usually talking about when discussing the Second law are massive, containing astronomically vast numbers of “die rolls,” any significant spontaneous ordering, while happening all the time, won’t be visible to us. Those anomalies will be relatively fleeting and small, and thus swamped into obscurity by the gargantuan system overall, averaged away below any margin of error. So while little pockets of concentrated atoms will appear even in a contained gas at equilibrium, and this will happen constantly, these anomalous pockets will be microscopic in scale, compared to all the atoms in that container, and rarely last. Meanwhile the average of all of their velocities and positions and distributions will remain the same over time, with fluctuations only at the most distant of decimals. The Second law is really just a Law of Averages.
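The “swamping” of anomalies by scale can also be made concrete. In this toy Python model (my own illustration), each atom lands at random in the left or right half of a container; the deviation from a 50/50 split shrinks roughly like one over the square root of the number of atoms:

```python
import random

def left_half_fraction(n_atoms, seed=1):
    """Scatter n_atoms at random between two halves of a box;
    return the fraction that landed in the left half."""
    rng = random.Random(seed)
    left = sum(rng.random() < 0.5 for _ in range(n_atoms))
    return left / n_atoms

# Small systems show big "anomalous pockets"; big systems sit glued to the average.
for n in (10, 10_000, 1_000_000):
    f = left_half_fraction(n)
    print(f"{n:>9} atoms: left-half fraction = {f:.5f} (deviation {abs(f - 0.5):.5f})")
```

With ten atoms, a 70/30 “pocket” is routine; with a million, the split is indistinguishable from even. A real container holds more like 10^23 atoms, which is why we never notice the fluctuations at all.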

This is why nothing need cause or explain the Second law. Since it simply just describes an inevitable mathematical fact of any randomly interacting system, it will operate in all universes where no extra other “thing” intervenes to prevent it (indeed, quite a lot of laws of physics can be explained this way, as argued by Victor Stenger in The Comprehensible Cosmos; see also my article All Godless Universes Are Mathematical). This was proved just with classical mechanics. But it follows as well from quantum mechanics. Because it does not matter what the randomizing source is, whether some fundamental physical fact of the indeterminacy of the location and momentum of quanta, or just shaking a container of deterministic atoms (like shuffling a deck of cards), or even just sitting there and watching what happens as the atoms follow their deterministic, yet continuously shuffling, courses. Like rolling a die a billion times and averaging the result.

In short, randomness entails the Second law. Which does have important consequences at different scales. At small scales, as noted, the “law” will often not apply, just as with single runs of dice: when you look at small scales, where the number of quanta involved is small, you will regularly see entropy reverse. It’s only when you zoom out and look at a very large number of quanta that the average tends toward conformity to the law, so much so that we mistook it for “just a law,” some unexplained “fact” of nature. Now we know it isn’t really. It’s just an inevitable outcome of statistical probability working on large quantities of things. So at larger scales, we see the law pretty much always holds, enough to cover all cases relevant to us. But zoom out further, and this all reverses again: because on any indefinite timeline, all probabilities approach 100%. This means that even extraordinarily improbable events, like all the quanta in a cloud of gas spontaneously forming into a live rabbit (or a Big Bang, producing a whole new universe), will inevitably occur. The only question is at how large a scale we can expect to see this.
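The claim that on any indefinite timeline all probabilities approach 100% is just the arithmetic of repeated trials: the chance of at least one occurrence in n independent trials is 1 − (1 − p)^n, which climbs toward 1 as n grows. A quick sketch (the probability value is illustrative, not a physical measurement):

```python
def prob_at_least_once(p, n):
    """Chance an event with per-trial probability p occurs at least once in n trials."""
    return 1 - (1 - p) ** n

p = 1e-9  # an "extraordinarily improbable" per-trial event (illustrative number)
for n in (10**6, 10**9, 10**12):
    print(f"{n:.0e} trials: P(at least once) = {prob_at_least_once(p, n):.4f}")
```

A one-in-a-billion event is nearly invisible over a million trials, yet a near certainty over a trillion. Scale alone does all the work.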

The Second law, not actually being a “law” in any magical sense, can’t prevent this, and won’t, because it is not some “power” pushing entropy up, it is just a law of averages; but in any long enough run of die rolls, any possible sequence of rolls will eventually appear. Like, say, the entire works of Shakespeare in base-6 notation. The only reason this “doesn’t happen” is that the human species has not lived long enough to ever see something like this happen, because it is so improbable—in other words, so rare. In fact it would probably take gazillions of years of rolling a die to ever get to results like this. But get to them you will. It is mathematically inevitable.
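How long is “gazillions of years”? The expected wait for one specific base-6 sequence of length L is on the order of 6^L rolls, so its logarithm grows linearly with L. A back-of-the-envelope Python calculation (the sequence lengths are chosen arbitrarily for illustration):

```python
import math

def log10_expected_rolls(length):
    """log10 of the rough expected number of fair-die rolls before one
    *specific* base-6 sequence of this length appears (about 6**length)."""
    return length * math.log10(6)

for length in (5, 50, 5000):  # 5,000 symbols is still a tiny excerpt of Shakespeare
    print(f"a specific {length}-symbol sequence: ~10^{log10_expected_rolls(length):.0f} rolls")
```

Even a 5,000-symbol excerpt demands on the order of 10^3890 rolls. Finite, mathematically inevitable on an endless timeline, and utterly beyond any human horizon.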

This is true on both classical and quantum mechanics. In quantum mechanics, everything has a calculable probability—whether a rabbit spontaneously forming in front of you, or your finger exploding into a Big Bang. It’s all possible. It’s just that these events are calculably so improbable as a result of random ordering that you can count on them never happening in your lifetime, or probably anyone’s—unless humans stick around to continue observing things for countless trillions of trillions of years. The quantum probability of any point in spacetime exploding into a Big Bang, for example, has been calculated to be on the order of 1 in 10^10^10^56. Which is as near to “never” as makes any sense to us.

But the same general point is true even in classical mechanics. You don’t need quantum mechanics for it to be the case that a gas in a container can spontaneously form into a living rabbit (provided there is enough energy in that container to produce a whole rabbit) or for a universe long past its heat death to, just at random, coalesce all its remaining particles at a single point dense enough to ignite another Big Bang. On quantum mechanics even that might not be required (as we’ll see next with regard to explaining the First law). But on classical mechanics, enough stuff, plus enough shuffling, entails every possible result. Shuffle a deck of cards long enough and eventually you will shuffle it back into its complete starting sequence from Ace of Spades to Ace of Hearts, just like it came out of the box when you bought it.
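For the deck example, the arithmetic is easy to check: there are 52! distinct orderings, so a random shuffle hits the factory sequence with probability 1 in 52!. A short computation (the one-shuffle-per-second rate is an arbitrary illustration):

```python
import math

orderings = math.factorial(52)     # distinct ways to order a 52-card deck
print(f"52! ≈ {float(orderings):.3e} possible orderings")

# At one shuffle per second, the average wait before hitting factory order:
seconds_per_year = 60 * 60 * 24 * 365
print(f"expected wait: ~{orderings / seconds_per_year:.1e} years")
```

That works out to roughly 10^60 years of continuous shuffling: mathematically inevitable, just unimaginably slow.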

This is what led opponents of Ludwig Boltzmann to argue that even so-called “Boltzmann brains” can randomly form anywhere in any universe—and not just “can,” but always will, if the universe continues to exist forever (or even just long enough). By extension, that’s also true of Boltzmann planets, Boltzmann solar systems, Boltzmann galaxies, and even Boltzmann universes (and indeed our universe could well be one). This doesn’t have any of the consequences you’ll hear Christians go on about (see my discussion of The Boltzmann Brain Argument and The God Impossible). But it does have consequences they don’t want to hear about (see my discussion, for example, in Koons Cosmology vs. The Problem with Nothing and My Debate with Wallace Marshall). Or as Sean Carroll and Jennifer Chen put it: let an empty singular point of spacetime sit long enough, letting its transformations be governed by nothing other than pure random chance, and it is guaranteed to spontaneously produce a Big Bang. Hence we have no need of a God to explain it.

In any event, this is what we mean by “the Second Law of Thermodynamics isn’t really a law.” It does not (and cannot) apply to everything all the time. At super-small and super-large scales, it will be violated. But even calling this a “violation” is inaccurate. Because the Second Law is just a Law of Averages. As such the “law” remains true even with extreme deviations from the average, because an average always is just the average no matter how many anomalies can be found within the whole. The Second law thus actually predicts its own violations. Since all it is about is the effects of randomness at scale, it already accounts for deviations from its predicted averages at ultra-small and ultra-large scales. Small scales lack enough quanta to keep the average from being swamped by random anomalies (which, being at a small scale, will also be small); while large scales contain too many quanta to prevent random anomalies (which, being at a large scale, will also be large). And that’s simply the physical reality of this law.

The First Law of Thermodynamics

That leaves the First Law of Thermodynamics. Is this just some sort of God-ordained fact, such that had God not said it, then this law would not apply and things could just pop into and out of existence at random? Energy (and thus matter) could then just come to exist, or cease to exist, willy nilly? Closed systems can just spontaneously increase or decrease their energy? Or is this some logically necessary truth even God could not thwart, such that even in the absence of God energy cannot spontaneously form into existence—as most Christians assume when they claim this can’t happen without God?

Of course, these are contradictory positions. If God is needed to make this a law, then you cannot claim the law would govern what happens without a God. Then the spontaneous appearance of a random amount of energy is inevitable, isn’t it? No God, no Law of Conservation. No Law of Conservation, and nothing will not remain nothing—because that would then be the least likely selected state among all randomly selectable states (see The Problem with Nothing: Why The Indefensibility of Ex Nihilo Nihil Goes Wrong for Theists). What would instead be most likely is a gargantuan amount of energy spontaneously arising. Because if you randomly pick any quantity between zero and infinity out of a hat, you can count on your result being outrageously large. So, to prevent that, this law would have to just “exist,” and thus be a thing whose existence precedes and does not depend on God. Which would suggest God cannot even violate it. How, after all, could he acquire the power to reverse a logically necessary fact? And if the Law of Conservation isn’t a logically necessary fact, how could it come into existence without God commanding it? And round and round the Christian’s whack-a-mole game goes.

There is a simpler solution. The First Law of Thermodynamics is just like the Second Law: it is simply an inevitable outcome of random chance. All universes that have no intelligence or force causing anything else to happen will then be governed by it.

Think of our previous example: a rabbit spontaneously forming in front of you right now. In terms of the Second Law, that can happen so long as an equal amount of energy (atoms, heat, etc.) randomly stumbles into the form of a rabbit, which as we just noted is not impossible—it’s just astronomically improbable. Think of the specified complexity of a rabbit: trillions of cells in a just-so order and interconnection; each cell vastly complex at the organic level, and even more vastly complex at the atomic level, in all their just-so order and interconnection; and even the atoms are highly complex in their just-so order and interconnection, being made of specific combinations of quarks and leptons and bosons, with hyper-specific attributes (certainly among all those one could have randomly chosen). The probability of this randomly happening will be vanishingly small, such that it would take trillions of trillions of years for it to likely occur even once in the entire universe (much less anywhere we are looking). Those poor Boltzmann rabbits will almost certainly immediately die in the radiation-filled vacuum of space anyway. But the point is, their complexity of order ensures that the probability of their spontaneous assembly is too small for us ever to see it happen. And we need no further explanation than this for why we don’t see it happen.

Turns out, this fact also explains the First law, not just the Second. Think of that rabbit, only now instead of it forming out of available matter and energy accidentally hurtling into position, it arises entirely out of spontaneously formed matter and energy, in violation supposedly of not just the Second law but also the First. Think through the additional improbability that entails. Not only must a vast number of quanta of energy become spontaneously organized, but every single quantum, each and every one, also has to spontaneously appear—and not disappear. Needless to say, even assuming this were possible and happened all the time, these (we will call them Ultra Rabbits) will be even rarer to see than Boltzmann rabbits. And that, in a nutshell, is why we don’t need anything further to explain the First law, either. It’s just an inevitable outcome of probability. All random systems will obey it, just as much as they do the Second law. Which is to say, almost but not literally always. Hence we can expect the First law to have its own continual “violations,” just less often, because they require more random organization to happen than mere “violations” of the Second law.
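The comparison can be put in log-probability terms. All three numbers below are hypothetical placeholders, not measured physical values; the point is only the structure of the arithmetic: an Ultra Rabbit’s improbability is the Boltzmann rabbit’s improbability multiplied by a further per-quantum appearance probability for every single quantum, so its log-probability is strictly more negative:

```python
# All three numbers are hypothetical placeholders, chosen only for illustration:
log10_p_arrangement = -1e30   # chance of quanta falling into rabbit-order
log10_p_appear      = -10.0   # chance of any one quantum spontaneously appearing
n_quanta            = 1e27    # rough count of quanta a rabbit would need

log10_p_boltzmann_rabbit = log10_p_arrangement
log10_p_ultra_rabbit     = log10_p_arrangement + n_quanta * log10_p_appear

print(f"Boltzmann rabbit: ~10^({log10_p_boltzmann_rabbit:.3g})")
print(f"Ultra rabbit:     ~10^({log10_p_ultra_rabbit:.3g})")
```

Whatever the real magnitudes are, adding the appearance requirement can only subtract further from the exponent, which is why Ultra Rabbits must be rarer than Boltzmann rabbits.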

Well. Guess what? We’ve proved that’s the case. Spacetime is awash with spontaneous creations and destructions of energy, first predicted as an inevitable consequence of Quantum Mechanics (the virtual particle field), and since empirically proved to actually be happening (just Ask Ethan; or look up the Casimir Effect and Vacuum Birefringence). You might ask why this only happens at the quantum scale, why we never “see” these random creations and destructions of energy particles; why it’s just particles and not, say, “live rabbits.” Why is the First law pretty much adhered to at our scale? The answer is the same as for the Second law: probability.

If you assume as a rule that energy can, and in fact always will, spontaneously form and vanish at random, what you can predict you will see is that almost all of this will occur only at subatomic scales. Because a rabbit is a highly ordered congeries of trillions of trillions of components, with odds against its spontaneous assembly of countless trillions of trillions to one, you simply can’t expect ever to see that, even though it must inevitably happen on a large enough scale. And indeed, sometimes virtual particles become real and stick around; and if that happened enough times in exactly the right places, you’d get “a live rabbit.” It’s just that that’s a lot of lucky accidents you need to happen at random, and that isn’t going to happen often enough for anyone to see it. And that’s why we don’t. Just like getting the works of Shakespeare with a continuous roll of a die. Again, there is no further explanation needed.

How Randomness Entails Conservation of Energy

Consider, first, the difference between, say, a single quark or electron (or, indeed, a much simpler photon), and a whole rabbit: if at every moment of time energy will just randomly appear or disappear, these single simple particles are what will appear or disappear a bazillion times more often than a rabbit—or indeed even a whole atom, which consists of numerous parts conjoined in a specific causal structure requiring a rather large amount of energy, far more than just a quark or an electron, and more still than a photon. After all, most of the energy in an atom is the binding energy holding quarks together as a hadron (over a hundred times more than the quarks alone). And more energy requires more quanta—more bits, thus more complexity, thus more luck. So you would have to “luck out” and get “all” that energy, and all in exactly the right place and configuration. Odds don’t favor it.
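That claim about binding energy is easy to sanity-check with ballpark figures. The values below are approximate Particle Data Group current-quark masses (rounded, for illustration only):

```python
# Approximate current-quark masses in MeV/c^2 (rounded PDG ballpark values):
m_up, m_down = 2.2, 4.7
proton_mass = 938.3                      # MeV/c^2

valence_mass = 2 * m_up + m_down         # a proton is two up quarks and one down
binding_fraction = (proton_mass - valence_mass) / proton_mass
print(f"valence quarks supply {valence_mass:.1f} of {proton_mass} MeV; "
      f"about {binding_fraction:.1%} of the proton's mass-energy is binding/field energy")
```

On these rough figures the proton outweighs its bare quarks by a factor of roughly a hundred, consistent with the “over a hundred times more” stated above.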

Now, in every atom, real quarks are exchanging with virtual quarks all the time. This could be just another Law of Averages, but more likely it’s structural—there is some underlying reason, for example, why there are only the quarks there are with only the properties they have, and why particles typically form in matched particle-antiparticle pairs (for example, an electron and a positron, or a photon and a photon of reverse polarity), which always annihilate each other when colliding, and so on. It’s not random; something is putting guardrails on what can occur, such that symmetry is typically maintained.

What that “something” is remains a subject of inquiry. M-theory (a development of string theory), for example, posits that the guardrail is the geometry of local spacetime, such that you can’t, say, convert a gamma ray into an electron without also making a positron, or have an electron and positron collide and survive, because of the underlying geometry. But whether it’s that or something else, the general observation remains that there is something structurally limiting what can and can’t spontaneously form or vanish. But within those limits, anything can spontaneously form or vanish, and does so perfectly randomly. The First law has no evident effect on this. It simply doesn’t even describe reality at that level of analysis.

So the First law is like the Second law: it’s just an observation of the average effect over large scales. Disorganized subatomic particles are the simplest entities and thus most likely, by far, to randomly emerge and vanish. Rabbits could, too; it’s just that that’s too improbable to see often enough to matter. Even atoms spontaneously forming or vanishing is too rare to ordinarily witness, because whole atoms are so complex they require far more chance accidents. So this will be rarer than even Hawking radiation, whereby a black hole slowly decays by spontaneously creating particles out of its trapped energy store. While that process usually generates much simpler (lower energy) photons, it does occasionally generate atoms (or rather bare protons, i.e. ionized hydrogen, which eventually capture electrons to become neutral hydrogen). That’s all following a causal process, and while what forms and when is random, the “stuff” is already there to transform. An atom forming out of nothing at all is far less likely. But the probability is still not zero. Even more importantly, because this is random, for every trillion spontaneous atoms that do form, another trillion will just as randomly vanish, for no net change when you average over time.

Thus, what is spontaneously appearing and disappearing are the things vastly more likely to, given the assumption that what will appear or disappear is determined at random. In that case, far simpler things will be far more numerous than complex ones. And this pattern checks out. This is why we live in a sea of randomly created and destroyed photons and occasionally larger but still usually fundamental particles, but don’t see many instances of even whole atoms, much less rabbits, doing this. And the reason we don’t see that sea of random particles popping in and out of existence is that…we are gargantuan. Being vastly complex, we are built out of vast arrays of complex atoms. As a result, humans live (and perceive) on a scale vastly larger than anything whose spontaneous creation or destruction we will likely ever see. Instead all the actual spontaneous creation and destruction floats far below our sensory threshold. What we observe is the realm of macroscale events, where the Law of Averages erases from our view all random creation and destruction.
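The “averages out to nothing” claim is a symmetric random walk: with creation and destruction equally likely, the net drift after N events grows only like the square root of N, so the net change as a fraction of all activity shrinks toward zero. A toy Python sketch (event counts are arbitrary):

```python
import random

def net_energy_change(n_events, seed=7):
    """Tally n_events random creations (+1) and destructions (-1), equally
    likely; return (net change, |net| as a fraction of all events)."""
    rng = random.Random(seed)
    net = sum(1 if rng.random() < 0.5 else -1 for _ in range(n_events))
    return net, abs(net) / n_events

for n in (100, 10_000, 1_000_000):
    net, frac = net_energy_change(n)
    print(f"{n:>9} events: net = {net:+d}, |net|/total = {frac:.5f}")
```

The absolute churn is enormous, but the fraction it nets out to collapses as the event count grows: at our scale it “looks like zero.”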

Even in rare (because improbable) instances of real particles vanishing (which requires a lot more energy and thus a lot more complexity to occur by chance), the gap they leave often pulls a virtual particle in to replace it. We never notice the swap. An atom is continually swapping its real quarks for virtual, for example, on physical and time scales below human perception; but on the scales we experience the atom just stays the same. As long as on our timeline there is always “a” quark where it is supposed to be, it does not matter how many times it swaps out with another. To us, all we see is a continuous atom. Likewise, if a stellar gas cloud is losing and gaining an atom here or there at random, the net effect simply isn’t observable on our scale of perception. (Again, it doesn’t matter to any of this whether this is because Quantum Mechanics is existentially just “fundamentally random” or whether there is a hidden determinism underlying its randomness, as with Phase Theory or Many Worlds Theory or Cellular Oscillation Theory, which is akin to the deterministic explanations afforded by String Theory. The observable result to us is the same.)

So you might ask how all this random creating of energy can go on without any visible effect. Of course, it actually does have a visible effect. Besides the Casimir Effect and Vacuum Birefringence I mentioned earlier, as it happens, all the forces of physics are an emergent outcome of this same fact. For example, Quantum Electrodynamics entails there would be no electromagnetism without the mass spontaneous creation and disintegration of virtual photons happening everywhere, all the time. Likewise Quantum Chromodynamics for what holds atomic nuclei together. And so on. We just don’t “see” anything but the results of all this happening. It took us thousands of years to figure out what it was we were actually seeing.

Crucial to this is the fact that (a) it’s all random, (b) it involves vast numbers of particles coming and going, and (c) creation is as frequent as destruction. The result is that particles and antiparticles are being created in equal quantities and thus immediately extinguishing each other. The randomness itself keeps the sea in its place: just a low fringe baseline of chaos on top of which our world is built. To get any of these particles to “stick around” (and thus become “real” rather than “virtual”) something has to intervene to separate randomly created-but-annihilating pairs before they collide, thus “keeping them around.” This generally requires a lot of focused energy, a trick of structural design, or even more enormous luck—as this happening by accident is extremely unlikely, owing solely to the laws of probability and nothing else. No special magical “law” makes this the case.

Real particles, meanwhile (and even virtual ones while they exist), will still conserve specific properties like momentum and spin, because (it appears) once something is caused, it keeps going until something stops or changes it. One might then ask for an explanation of why that happens: why don’t all particles just randomly blink in and out of existence, or randomly change their momentum over time? This we know has something to do with an even more fundamental property of the universe: symmetry laws. And this gets us back to those “guardrail” theories I mentioned before. Stenger explains this with his No Special Observer principle, whereby if we assume there is no special point of view, all symmetry laws can be deduced as inevitable. Graham Oppy explains it with his principle of Existential Inertia: what “is” simply stays the same unless something causes it to change. M-theory explains it by appealing to local spacetime geometries. And so on. There are certainly things left to explain. And many theories are on the table.

But at the level of just Thermodynamics, the total energy of a system remains constant because it would be too improbable not to. This does mean that at ultra-large scales this law will be “violated,” in the sense that highly improbable events (like noticeable increases or decreases in a system’s energy) will start happening as an inevitable outcome of probability; and at ultra-small scales this law will be “violated,” in the sense that we will see energy being created and destroyed. But when you sum up all the particles of energy being created and destroyed, it sums to virtually zero, and so at middle scales—human scales—it looks like zero, as if no energy is being created or destroyed. The First Law is therefore also just a Law of Averages. It manifests because the scale we live at is so vast in the quantity of things existing and happening that everything, creation and destruction, averages out to “stays the same”; and vastly more creation and destruction events are, as probability alone predicts, ultra-simple (and thus invisible) than any we would “see.” What we thus “see” looks like the First Law of Thermodynamics.
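That law-of-averages claim can be checked numerically. In the sketch below (again my own illustration, with made-up function names and numbers, not physics), each of n random events adds or removes one unit of energy; the typical average change per event shrinks roughly as 1 over the square root of n, so the more events are aggregated, the more exactly the system appears to conserve energy:

```python
import random

def mean_fluctuation(n, trials=50, seed=1):
    """Average |net energy change| per event, over many trials of n
    random +1/-1 events. The typical deviation from zero shrinks
    roughly as 1/sqrt(n): more aggregation, tighter apparent conservation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        net = sum(rng.choice((+1, -1)) for _ in range(n))
        total += abs(net) / n
    return total / trials

# More events per sample means a smaller apparent "violation."
for n in (100, 10_000, 100_000):
    print(f"n = {n:>7}: typical |average change per event| = {mean_fluctuation(n):.5f}")
```

At small n (the quantum scale of the analogy), the apparent violations are large; at large n (the human scale), they become too small to ever notice.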

Conclusion

All the Laws of Thermodynamics are just the observed fact of a Law of Averages. They are produced entirely by randomness. And therefore they will exist in and describe every world with randomly behaving components. And “randomly behaving” here just means ungoverned, “left to their own devices,” components without any intelligence or force intervening to order or control them. Which means all godless universes. This is just one more example of evidence that this universe, the one we find ourselves in, looks exactly like we would expect a universe with observers in it to look if there is no God.

So rather than requiring any God to explain, these laws are actually evidence against there being a God. For a God has no need of these laws—a God can easily make a world consistently violate the First and Second laws. A righteous person prays that a mountain be moved to save lives, and it spontaneously moves. A righteous person prays that food fall from the sky to feed the starving, and it spontaneously appears and falls. That is the kind of world a God could make. Whereas there is only one kind of world we could find ourselves in if there is no God: one that obeys the First and Second Laws of Thermodynamics—and in precisely the ways we have empirically proved our universe actually does and doesn’t.

It is no rebuttal to say that God has some mysterious reason to choose to make our world look exactly like a world would with no God in it, because you have no evidence that this choice (or indeed even the desire for it) is logically necessary for God. And that entails it has a nonzero probability: there is some probability that God would make the world in a way other than this; whereas if there is no God, there is no probability that it would be observed to be other than this. That means the probability the world would look like this is always higher if God does not exist than if he does; and therefore the definition of evidence (as a fact that increases the probability of a conclusion) entails that this is evidence against God, not evidence for God.

And this is logically necessarily the case—because there is no way to get the probability of this result to be 100% on “God” (no matter how many excuses you propose—as they still all have a probability of being false), while it is automatically at 100% on “Atheism” (or as near to 100% as makes no odds). One can then quibble about the prior probability, but trust me, that won’t go well for you, logically or evidentially. But even if you could get around that, it would remain the case that the Laws of Thermodynamics are evidence for atheism—and always will be, even in worlds made by gods.
