Misunderstanding the Burden of Proof

In matters of knowledge and belief, everything is probability. They who do not understand this will commit innumerable errors, and waste gobs of time arguing to no purpose. This is especially evident in debates over who holds the burden of proof in any given matter, debates that will go nowhere and shed no light on anything if they don’t frame what’s being disputed as really a dispute over what the probability of the disputed claim is, and why.

By which we mean epistemic probability. Not some objective or cosmo-Platonic or physical probability, but the probability that the position we are taking (whatever it is) is true given the information we have at the time we take that position. Physical probabilities have a relationship to epistemic probability, but it is not direct (see my discussion in Proving History, index “probability,” but especially pp. 23-26 and 266-75). But since every position logically entails an epistemic probability, every position entails a positive assertion.

And that means every position, whether assertion or rejection, even ambivalence or uncertainty. All is just an assertion of a probability. Every party to every debate over any matter of fact is making a positive assertion about a probability. Everyone. So anyone who thinks they are making a meaningful distinction worth arguing over when they claim things like “you can’t prove a negative” or “they who make a claim always bear the burden of evidence” is simply wrong. Apart from distinctions of degree, there is no meaningful difference between a denial and a “mere lack of belief,” or between either of those and the assertion of a contrary belief. Every single possible position with respect to any claim whatever is affirming a probability. And thus every position is a positive position. Every position is “making a claim.” Every position is asserting a belief about something. Even agnosticism.

Disbelief vs. Null Belief

I’ll demonstrate shortly that every position is asserting a belief, whether it’s a denial, an affirmation, or even an expression of complete uncertainty. Every position is a belief. Every belief corresponds to a claim. But before we get to that, two more misunderstandings must be averted.

First, that fact, that every belief-state entails an affirmation of a probability, is the case even if that belief was literally just formed as soon as the debate was engaged. We have no beliefs about most things until we think of them; and upon thinking of them (such as by being asked to), we evaluate what we believe about them. But that is not merely encountering a claim contrary to some status quo (of “disbelief,” let’s say). Because when we have no belief, there is no status quo. There is no disbelief in what we’ve never considered. Such a thing is logically impossible. We can only form beliefs upon evaluation of the data informing a belief. And there is no difference in such an evaluation between affirming or doubting what we’ve been asked to consider: both are taking a position, both are making a claim. Both therefore bear the burden of evidence.

This may rankle some, and confuse others. But it will become undeniably clear by the time you finish reading here.

The Fallacy of Ignoring Priors

However, that fact does not give safe harbor to those who want to beat people over the head with it. Because many times, those who gloat that everyone bears the burden of proof ignore the fact that a substantial burden has in fact already been met—in background knowledge. This is the fallacy of ignoring the priors. “Priors” meaning the probability already entailed by our background knowledge (often represented by “b” or “k” in a Bayesian equation). Every claim has a prior probability. Every claim. Both positive and negative; both certain and uncertain.

I have demonstrated elsewhere that every debate over any claim to fact is ultimately Bayesian, and that no other means of justifying the epistemic probability of a claim is valid that does not simply recapitulate, approximate, employ, or reduce to Bayes’ Theorem (see Proving History, pp. 106-14; with also pp. 97-106). Background evidence is evidence. So there is simply no avoiding the fact that already-considered background information already entails a probability for any claim’s truth. And we call that a prior probability.

When someone is given some “new” information (meaning, new to them, which can simply be something they never considered before, or even entirely new data), the question then becomes how much or in what way that new information changes, or “updates,” that prior probability. You then have a “posterior” probability: the probability that a claim is true given all the information you have; which must include not just background information (“b”) but also anything that’s just been added to it or called to your attention (which often gets designated with “e” in a Bayesian equation). Those two sets of data, b and e, together must encompass all knowledge, all data, all information, all evidence available to you. Bayes’ Theorem then dictates what probability is then entailed thereby.
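To make that update concrete, here is a minimal sketch in Python. The numbers are hypothetical, chosen only to illustrate the mechanics of moving from a prior to a posterior when new evidence e is added to background knowledge b:

```python
def bayes_posterior(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(h|e.b), from the prior P(h|b) and the two likelihoods
    of the new evidence e on h being true vs. h being false."""
    numerator = prior * p_e_given_h
    denominator = numerator + (1 - prior) * p_e_given_not_h
    return numerator / denominator

# Hypothetical numbers: a claim with a 20% prior on background knowledge b,
# updated on new evidence e that is four times more expected if the claim
# is true (80%) than if it is false (20%).
posterior = bayes_posterior(0.20, 0.80, 0.20)
print(round(posterior, 2))  # 0.5
```

The prior of 0.20 is “updated” to a posterior of 0.50: the new evidence raised the claim’s probability, but not enough to warrant belief.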

And what we learn from this fact, is that all arguments over any claim to fact, are really just arguing over three numbers. In fact, sometimes, the dispute is solely over one lone number, a single probability, as often the parties to a debate will agree (or concede agreement) on the other two. Those three numbers are the likelihood of the new evidence on the claim being true, the likelihood of that same evidence on the claim being false, and the prior probability of that claim being true—or false, since each, the prior probability of it being true and the prior probability of it being false, is the logical converse of the other and therefore only one number describes both. Which is why all negation is assertion. Saying a claim has a low prior is literally synonymous with saying its negation has a high prior. Either way, you are making a positive assertion. (See my old article Proving a Negative for a more colloquial discussion of this mathematically unavoidable fact.)

I won’t go into the mechanics of Bayes’ Theorem further here. Apart from my book Proving History, which goes into considerable detail about how Bayes’ Theorem works and how to apply it, I have also written numerous helpful blog articles, if you want to learn more about it. Here, the principal point is that there is always a prior probability for every claim—even if it’s 50%, such as when we have no prior information bearing on a claim’s probability (see “If You Learn Nothing Else“). And this means, very often, the burden of evidence has already been met. Even before any debate is engaged. And it is the one who claims to be able to overthrow that existing status quo who then bears “the burden of evidence.” Because the denier has already met their burden. So overcoming that requires taking up the burden anew.

I’ve already made this point repeatedly about the historicity of Jesus, for example. Contrary to what many overly enthusiastic atheists will say on the internet, historians who affirm Jesus existed do not bear the burden of evidence. Because the centuries-long academic consensus that Jesus existed entails having met a prima facie burden of evidence. In other words, the background evidence establishing that consensus already meets a requisite burden of evidence for the consensus position. There is a status quo, and it is (at least ostensibly) based on evidence. If someone wishes to challenge that consensus, then the burden of evidence lies now on that challenger to prove the consensus is in error; that the evidence that formed it either does not exist, hasn’t been soundly interpreted, or has other explanations that are as good or better; or to present evidence against that consensus, that the consensus ignored or overlooked.

And so on. In any event, a burden must be met. Because the background information already constitutes meeting a burden of evidence for the affirmative. Therefore the negative now bears the burden of evidence. Hence I bore the burden of evidence of proving doubt was warranted, and met that burden under peer review in On the Historicity of Jesus (which now those who wish to maintain the consensus should have to refute before continuing to affirm the consensus position). Even if we wish to argue that the background evidence the previous consensus was based on has been misused to generate unwarranted confidence, we bear the burden of showing that. Even a claim that the consensus is fraudulent (that it wasn’t based on evidence and thus hasn’t met a burden of evidence) requires meeting a burden of evidence. Because it is still a positive claim about the evidence. (See my articles On Evaluating Arguments from Consensus and Arguing Jesus Didn’t Exist Should Not Be a Strategy, and my discussion of evidential burden in Proving History, pp. 29-30 [“Axiom Six”].)

The same holds for atheism. Atheism has already met a basic prima facie burden of evidence. Background information already entails gods have extremely low prior probabilities. And has long done so, for decades if not centuries now. Indeed this is true for anything supernatural at all. (See my articles Defining the Supernatural and Defining the Supernatural vs. Logical Positivism, but also my extensive discussion in Sense and Goodness without God, Part IV, “What There Isn’t,” particularly section IV.1.1.7 on the burden of evidence; and my more formal discussion of evidential burdens in Proving History, index “smell test”).

It is therefore incumbent now on anyone who wishes to still affirm gods exist to bear the burden of evidence proving it. It is invalid to say atheists still bear the same burden now. Because they’ve already met that burden. The status quo is: no gods have been found, and vast amounts of background evidence leave all gods with extremely low priors. Therefore anyone who wishes to up those priors, bears the burden of presenting data that does that. Absent which, we can go on doubting gods every bit as much as we doubt magic, faeries, gremlins, or psionics. No further evidence need be collected to justify that.

The Burden of Confusion

The fallacy of ignoring priors will sometimes lead to people talking past each other: one side overlooking that their priors are already based on evidence; the other side overlooking that there even are evidence-based priors at all. As a result, one side will argue that just saying atheists bear the burden of proof helps theists to deploy the fallacy of burden shifting, while another side will argue that even atheists have to be able to give reasons for why they are an atheist; and each will think they are arguing against each other, when in fact both are correct.

For example, Steve McRae has been debating several folks on this for a while, including most relevantly on an episode of DE/Converted hosted by Arael Avenu. McRae unfortunately adopts (or occasionally slips into) a dysfunctional epistemology (in which the only possible belief-states are “100% true” or “100% false,” which frequently contradicts psychological fact and the requirements of logical utility), but that’s a debate for another time. With respect to the burden of evidence, McRae focuses on the analytic fact that all beliefs (I would say all epistemic probabilities) require evidence and therefore every assertion bears a burden of evidence in a fundamental sense—which simply means: we must all have reasons, and good reasons, for whatever position we take, positive or negative, if our position is to be in any sense valid. Which is correct. But McRae then obsesses on this to the point of overlooking that this being analytically true is moot when vast amounts of background evidence have already met that required burden. He is stuck on the fallacy of ignoring priors. (His occasional slip into a dysfunctional and unscientific non-probabilistic epistemology may be responsible for this.)

McRae needs to acknowledge that a low prior probability for God is already entailed by our background knowledge, thus settling the burden now on theists, because atheists have already met theirs; while his opponents need to acknowledge that we should be able, when called upon, to give an accounting of this background knowledge, by which we already reached a conclusion of low probability for the existence of any gods. That’s what McRae means by meeting a burden of evidence: simply explaining what already convinced us the probability is low, when someone asks us to. But it still follows that anyone who wants to change that probability, now bears the burden of evidence. Atheists no longer do.

McRae’s opponents are also trying to correctly point out that if someone has not met a burden of evidence requisite to believe a claim, then no further burden of evidence need be met to reject it. As Christopher Hitchens famously put it, “That which can be asserted without evidence can be dismissed without evidence.” In probabilistic terms, if you have not presented any evidence your claim is epistemically probable, I am by that fact alone warranted in concluding it is, so far as I know, not epistemically probable. Because were I warranted in thinking it probable, by definition that would be upon seeing evidence warranting it. Ergo, if I have no evidence, I have no warrant.

An epistemic probability is by definition P(h|e.b), which symbolically translates as “the probability of h given e.b,” where h is the claim (which can be positive or negative, an assertion or a denial) and e.b is the combination of e and b (as I mentioned before), the sum total of all information (all evidence) available to you. P(h), the probability we would be right to assert h is true, follows necessarily from what is in e.b.

So if there is nothing in e.b that makes P(h|e.b) high, then by definition we have no warrant to believe that P(h|e.b) is high. We need meet no other burden to warrant that conclusion. The absence of evidence alone meets the requisite burden. Conversely however, we cannot simply assert P(h|e.b) is low, either, if nothing in e.b entails it is low. Which is McRae’s point. And he, like his opponents, is also right.

Thus, to assert the probability of h is low, does indeed require meeting a burden of evidence, in the sense that e.b must contain sufficient information to entail P(h|e.b) is low. But if there is nothing in e.b that makes it higher than 50% and nothing that makes it lower than 50%, then by definition the epistemic probability of h is 50%. Which means, so far as we know (“given what we know”), we would be as likely to be right as wrong if we asserted h was true.

This is why the claim “if you can’t refute it, it must be true” is a fallacy. It does not follow that the lack of evidence in e.b that P(h|e.b) is low entails that P(h|e.b) is high. If there is no evidence in e.b for either, then by definition P(h|e.b) is neither high nor low. It is therefore 50/50, which means, it is as likely to be true as false—so far as we know. New information could change that. But that’s what meeting a “burden of evidence” refers to: presenting evidence for P(h|e.b) being anything other than 50%. Atheists can do this from the contents of b alone. P(God|b), the prior probability of God, is already so low given b, that theists need to put some evidence in e that would change that. They are therefore the only ones the burden of evidence now falls upon. Because we’ve already met our epistemic requirements.
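That logic is easy to verify with the same arithmetic. A brief sketch, again with purely hypothetical numbers:

```python
def bayes_posterior(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(h|e.b) from the prior P(h|b) and the two likelihoods."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# With nothing in e.b favoring either side, the prior is 0.5 and the
# likelihoods are equal, so the posterior stays at exactly 0.5: the claim
# is, so far as we know, as likely to be true as false.
undecided = bayes_posterior(0.5, 0.3, 0.3)

# And asserting P(h) is low is the same assertion as P(not-h) being high;
# one number describes both positions.
p_h = 0.05
p_not_h = 1 - p_h  # 0.95
```

Note that a 50/50 result is not belief in h; it is the positive assertion that h is no more likely true than false, which is itself a probability claim requiring warrant.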

You’re Both Making Assertions about the Evidence

It is also the case that every denial entails an assertion, not merely as to a converse probability, but also as to a basic metaphysical fact about the contents of the world: that the evidence a claimant asserts proves their claim, either doesn’t exist, isn’t as described, entails something else, or was caused by something else. All of which are positive assertions about the evidence (and thus about what does and doesn’t exist).

The difference however is that one can validly say, for example, “any of a hundred other different things more likely caused that evidence” as a positive assertion that entails the claim is improbable and thus to be doubted or disbelieved. Which is a much more diffuse claim than “this evidence was caused by what I claim it was caused by.” This is what often leads to frustration or confusion about the burden of evidence. Denying a claim does not require as focused an assertion as affirming a claim. You needn’t “pick an alternative explanation” to be right; you can admit it could be any of many. But that should not disguise the fact that this is still making an assertion.

Hence to deny an assertion about what caused the evidence in question, you need merely affirm that there can be many alternative explanations of that same evidence, and that it is more likely one of them caused the evidence than the explanation being touted. Which does not, incidentally, require that any single one of those explanations is more likely than the explanation being touted. Although they may well be, it can also be the case that the sum of all their probabilities is the greater, even when no singular one of them is. As I noted once before using the example of the cause of Alexander the Great’s death:

in his critique of [atheist critic Michael] Martin, [Christian apologist Stephen] Davis suggests that even if “the probability of the falsity of [a hypothesis] H is .6,” i.e. 60%, it would still be rational to believe H if each of the only four other possibilities has a mere .15 or 15% probability of being true. This is unsound reasoning. In the scenario he describes, there would be a 60% chance that some one of the other explanations is true (which he labels A, B, C and D), so it would not be rational to believe H. What would be rational is to conclude that you don’t know which explanation is true.

For example, if Alexander died and the only options available were all natural causes except H, which was ‘murder’, then there would be a 60% chance that Alexander died of natural causes, and therefore it would not be rational to believe he was murdered. Though it would make sense in a gambling scenario to bet on H, that would only be the case if you had to bet, or could afford to lose. But history is not gambling. If you get to bet your life on A, B, C, D, or H, or not bet anything at all, in Davis’ scenario the rational choice would be to refrain from betting, since no matter which bet you placed, the odds would always favor your death. In such a case it would never be rational to say “I believe H will be a winning bet” even if it’s the best bet on the table. …

As far as sound historical argument goes, it would never be rational to say “I believe H is true” when you know H more probably than not is false.
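The arithmetic of Davis’s scenario is easy to check. A short sketch, using his own stipulated numbers:

```python
# Davis's scenario: hypothesis H (murder) at 40%, and four alternative
# natural-cause explanations (A, B, C, D) at 15% each.
p_h = 0.40
alternatives = [0.15, 0.15, 0.15, 0.15]

p_some_alternative = sum(alternatives)  # 0.60: some natural cause or other
best_single = max(alternatives)         # 0.15: the best single rival

# H beats every single rival explanation...
assert p_h > best_single
# ...but "not-H" (one of the alternatives, we know not which) is still
# more probable than H, so it is not rational to believe H is true.
assert p_some_alternative > p_h
```

No single alternative beats H; yet their disjunction does, which is all that matters for whether believing H is warranted.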

Often these many alternatives are well understood but not stated. We all well know how evidence gets fabricated, distorted, misinterpreted, or caused by other things. So merely the fact that the alternatives are not stated does not entail they are not being affirmed. To the contrary, affirming that the evidence does not entail the conclusion being touted, logically entails affirming that that evidence had some other cause (or does not exist, or some other positive assertion about it). Therefore, all negation is assertion. All doubt and disbelief entails a positive claim. It just might not be as reductive as affirming a specific description and explanation of the evidence; but it is still affirming that some description and explanation of the evidence exists that does not entail the conclusion being touted. Otherwise your denial could not be logically valid.

Atheism Overlapping Agnosticism

This also corrects common misunderstandings about the differences between atheists and agnostics.

What many people mean by “atheist” in common discourse is simply someone who is not a theist, which means, someone who does not believe any gods exist. Which only requires they conclude no god has at least a 50% probability of existing. That means anyone who puts P(AnyGods) below 0.50 is an atheist in this sense. Which in turn means anyone who is an atheist in this sense is declaring that P(AnyGods) is below 0.50.

That’s how many people use and understand the words “atheist” and “atheism” in practice. But even this broadest of definitions still means atheists are making a positive claim: that P(God) < 0.50. That is a belief. Not merely the absence of a belief—which rarely exists in this case, since hardly anyone has never heard about or thought of gods nor been asked to consider whether gods exist. When it comes to considered matters, all “absences of belief” are actually positive beliefs in the low probability of that which is disbelieved. But then, that’s just colloquially what people really mean when they say they lack belief in a thing, and hence what they usually mean by “a mere absence of belief”: that they believe the thing in question has a low (or at least not appreciably high) probability of existing (or of being true, or whatever is being asserted of it). Only people who have never heard of gods nor ever thought of any actually lack all belief in the matter.

What we most commonly mean by “agnosticism” in colloquial discourse ends up being really just a probability conclusion for the doubted object (whether God, a historical Jesus, or that Trump hired prostitutes to urinate on a Moscow bed) that is too close to 50/50 to declare definite knowledge either way. One can be agnostic but lean toward belief, and thus be an agnostic theist, e.g. someone who concludes P(God) = 0.60 (or 60%). One can be agnostic but lean toward nonbelief, and thus be an agnostic atheist, e.g. someone who concludes P(God) = 0.20 (or 20%). One can be an agnostic and lean neither way and thus be completely undecided, which means someone who concludes P(God) = 0.50 (or 50%). Which includes everyone who concludes this value (of 0.50) lies within their margin of error, e.g. those who conclude P(God) = some value unknown to them except that it almost certainly lies, for all they know, between 0.35 and 0.55.
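The colloquial taxonomy just described can be summarized as a simple mapping from a probability estimate to a label. The cutoffs below are hypothetical illustrations of the ranges discussed, not canonical definitions (real usage is vaguer than any sharp threshold):

```python
def colloquial_label(p_god):
    """Map an epistemic probability P(God) to the colloquial labels
    discussed above. Cutoffs (0.2, 0.5, 0.8) are illustrative only."""
    if p_god > 0.8:
        return "theist"
    if p_god > 0.5:
        return "agnostic theist"      # leans toward belief
    if p_god == 0.5:
        return "undecided agnostic"   # leans neither way
    if p_god >= 0.2:
        return "agnostic atheist"     # leans toward nonbelief
    return "atheist"

print(colloquial_label(0.60))  # agnostic theist
print(colloquial_label(0.20))  # agnostic atheist
print(colloquial_label(0.50))  # undecided agnostic
```

A margin of error translates naturally here too: someone whose confidence interval for P(God) spans 0.50 lands in the undecided band no matter where in that interval the true value sits.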

The old, original and formal definition of agnosticism (that knowledge regarding God’s existence is presently or even logically impossible) is almost entirely unknown to the general public and thus is almost never what anyone in regular discourse actually means by the term. If you want to be understood in common discourse with regular people, you can’t be using words in connotations they are unlikely to know. But even if you are a linguistic imperialist kicking against the goad of the inevitable march of history in transforming linguistic conventions, agnosticism in that formal sense is still just a subset of agnosticism in the more common, informal sense. So arguing over its definition doesn’t really get you anywhere anyway and is generally a waste of time.

You must unpack all belief states (positive and negative, believing and disbelieving, asserting and doubting) as simply an assertion of a probability. Which usually means in practice some range of probabilities, reflecting one’s confidence interval: that the probability must lie between A and B—when the available information entails you can’t know where between, but you know to a very high probability it is not outside that interval. And you must unpack all assertions about belief in just that way, because that’s what all belief-states logically entail—and there is no way to escape this (no matter how big a doofus you want to try to be).

Uncertainty is simply an assertion about a probability. Doubt is simply an assertion about a probability. Lack of knowledge is simply an assertion about a probability. Disbelief is simply an assertion about a probability. Denial is simply an assertion about a probability. At most, the distinctions one might intend with such words are merely of degree. “Doubt” often implies you are asserting a low but not very low probability, whereas “disbelief” often implies you are asserting a very low probability, and “denial” often implies you are asserting a much lower probability than even that. And so on. Likewise for belief, which also exists in varying degrees of certainty, and we may have various words we sometimes use for them. But whether we have a word for each distinction or not, every belief-state corresponds to an asserted range of probability for some claim. So every belief-state, whether of denial, doubt, belief, or certainty, entails a positive assertion, a claim.

A popular trend has also arisen that narrows the intended meaning of “atheism” in distinction to “agnosticism.” In this connotation they are semantically distinct in that, even colloquially, “agnosticism” often refers to the positive belief that the probability of a god is in some middling range, whereas “atheism” often refers to the positive belief that the probability of a god is definitely quite low (and thus not in a middling range).

Typically, agnosticism is used to refer to a type of atheism only in the broadest sense: those who assign a low probability to God; but now “agnostic” often means, more narrowly, those who assign a low probability, but not so low as to be confident enough to declare themselves atheists. By contrast, “atheist” is often understood to indicate one has concluded the probability of God is much lower than “agnostics” declare.

But it is still semantically possible for an agnostic to be a theist, if for example they assign a higher probability to God, but not so high as to be confident enough to declare certainty of that God’s existence. They are still typically classified as theists. They do, after all, have some belief there is a god. The converse is thus just as valid, that agnostic doubters of god are technically still atheists. Just not “hard” or “strong” atheists as the parlance would have it. But “soft” or “weak” atheists.

Of course it’s really analytically worse than that for anyone who wants to act the linguistic imperialist and try to deny what I’ve just said is a correct application of these words in real world discourse and practice. Because analytically, everyone (atheists, agnostics, and theists) is both an atheist and an agnostic simultaneously. Because there are always some gods everyone is an atheist regarding, and always some gods everyone is a formal agnostic regarding (see my old discussion in Atheist or Agnostic demonstrating this point).

The only thing that separates any of these folks at all is (a) whether they believe in at least one god (which means, that they conclude P(God) for at least one god is > 0.50) or, in the absence of that, (b) whether they believe at least one god has more than a marginal probability of existing (which means, that they conclude P(God) is above 0.50 for no god, but above 0.10, or even higher, for at least one god). The latter group is what common practice has evolved to understand by and label with the word “agnostic.” While “atheist” has evolved in linguistic convention to more typically mean someone who meets neither condition (a) nor condition (b). But lexically, and still often enough, “atheism” as a word in many contexts includes those in condition (b), in parallel to that same word’s exclusion of those in condition (a).

So there is no point in arguing what these words mean, like whether “atheist” includes or excludes persons in condition (b), or whether “agnostic” refers only to a subset of those persons or all of them. Each does both, depending on how the word is being used, depending on context and the communicator’s intent. Same as most words.

Gods Are Already Improbable

As I already said, the prior probability for any god, as for anything supernatural, is already well established to be low. Background evidence has already met the burden of establishing that. Which is why theists bear the burden of evidence of disproving that now. And why atheism is now the only logically valid default position. It is not the default because it is the absence of a claim. It is the default because the historical parade of evidence has established it to be.

For example, when we doubt gods exist because we doubt disembodied minds even could exist (see, for instance, The Argument from Mind-Brain Dysteleology, The God Impossible, and The Argument from Specified Complexity against the Supernatural), the premise (that disembodied minds probably can’t exist) is based on extensive past knowledge in the matter. And the conclusion validly follows from the premise (if probably there are no disembodied minds, then probably there are no gods).

So the burden of evidence has already been met. Before we even show up to the debate. It’s not as if we just popped into existence, devoid of all background knowledge. If we had, we would not doubt the possibility of disembodied minds; we’d conclude instead that they had a 50/50 chance of being possible until we get more information, and thus would need to start learning what we can that pertains to the question—like getting up to speed on the basics of neuroscience; indeed, even on the fact that there was such a thing as neuroscience. But humans are rarely in such a condition. We don’t pop magically into existence in the middle of a debate devoid of any background knowledge. We come to any debate already packed with vast quantities of background knowledge.

The same goes for every other reason we doubt gods exist: it’s based on extensive background knowledge. The burden has already been met. Therefore theists bear the burden of evidence now. Not because they are the ones making a claim; for the atheists are making the very same claim, just with a different P. It’s thus not about who is making a claim. Rather, theists bear the burden of evidence because atheists have already met theirs. The prior probability of gods is low. If the theist wants to change that, to increase that P, they have to present some e (some evidence) that will do that. In the absence of which, the current default remains: the low prior, previously established by already-examined background data.

This is why Don McIntosh is simply wrong, when he attempts to rebut my old article about Proving a Negative. God’s existence has already been amply refuted; background knowledge already puts his prior probability in the dumpster (see, for example, my summaries in Bayesian Counter-Apologetics, my book Why I Am Not a Christian, and Part IV of my book Sense and Goodness without God; you should also note why turning God into a Cartesian Demon is never a valid rebuttal).

Indeed, my own article on Proving a Negative that McIntosh claims to be rebutting, already refutes his contrary assertions—he simply ignores what it actually says, and makes up a bunch of excuses for his god’s undetectability that I already demonstrated to be improbable. When we go back to the actual data, we don’t get the result he wants. So he has to ignore data to get it. But that data remains, and collectively it has already established gods have low priors. Thus anyone who wishes to restore that probability to anything respectable now bears the burden of presenting the needed data. And to do that, speculations aren’t data; no excuse for God is credible, if you have no evidence that excuse is probable. And McIntosh has no evidence any of his excuses are at all probable. Merely being possible is not enough (see my discussion of this fallacy in Proving History, pp. 26-29).

McIntosh is also wrong about how evidence works in more basic ways. For instance, he is wrong to claim we cannot justify concluding the existence of Vogons has a very low probability. Contrary to what he says, we do not have to check every corner of the universe, to estimate the highest credible probability of a coincidence between a ridiculous authored fiction and reality. Indeed, Vogons are probably even logically impossible, for the same reasons I show for Star Wars in The God Impossible. But even if they were probabilistically certain to exist, by virtue of the universe being infinite and infinitely configured (though the former is not certain, and the latter is not actually entailed by the former, and yet both are required for real Vogons to have a high probability), we can still show the probability that they exist anywhere near us is as near to zero as anyone cares that it be. Again, without “checking every corner of the universe.” And that’s just on background knowledge alone.

This is why it’s important to always convert every discussion of what does or doesn’t exist, what is or isn’t true, into a debate over a probability. Not of possibility. Not of “true or false.” But how probable. Because then we must ask why it’s that probability and not some other. McIntosh for instance would have immediately started to question his own thinking the moment he was forced to admit his own assertions entailed he believed there was a 50% chance Vogons existed. Exploring why that’s absurd, might hopefully have led him to a more coherent epistemology.

Conclusion

Wikipedia provides distinct entries on the Burden of Proof for philosophy and law. We are here concerned with philosophy. Which actually means, everyday life.

Reasonable discussion of the burden of proof can be found at Logically Fallacious and RationalWiki. But it always comes down to this: the “burden of proof” for any position semantically means “information that increases the probability of that position,” and usually not merely that, but “increases it above 50%,” or in fact higher still. Because epistemic probabilities only marginally above 50% still entail considerable doubt and uncertainty. After all, we don’t usually touch things that have only a 70% chance of not electrocuting us, so “70% safe” is not that great a probability. So when our probabilities are not high in either direction (e.g. not above 0.90 or even 0.999 nor below 0.10 or even 0.001), what we are quantifying is doubt, uncertainty, lack of knowledge—agnosticism. Hence, agnosticism, uncertainty, and doubt are still all positive assertions in their own right. They simply refer to probability assignments that are neither very high nor very low. But they still always assert something about the probability of a thing, such as (at the very least) that it is most likely neither very high nor very low.
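The taxonomy just described can be sketched in a few lines of code. The cutoffs used here (0.90 and 0.10) are purely illustrative stand-ins, not canonical thresholds; where belief shades into agnosticism is itself a judgment call:

```python
def stance(p, high=0.90, low=0.10):
    """Classify an epistemic probability assignment into a colloquial stance.

    The thresholds are illustrative only; the point is that every
    return value is still a positive assertion about a probability."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("epistemic probabilities live in [0, 1]")
    if p >= high:
        return "belief"
    if p <= low:
        return "disbelief"
    return "agnosticism"  # still a claim: "P is neither very high nor very low"

print(stance(0.999))  # belief
print(stance(0.50))   # agnosticism
print(stance(0.01))   # disbelief
```

Note that all three outputs are claims about where the probability lies, which is the point of the paragraph above: even agnosticism asserts something.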

And this means you can never simply argue the burden of evidence lies with “the one making a claim,” because everyone is making a claim. Disbelievers in any God, even colloquial agnostics about gods, are making as much of a claim about the existence of God as believers are. They are simply disputing what P(God) is. But every value for P(God) is as much an assertion as any other, and just as requiring of positive belief. Even a genuine “I don’t know” condition is simply the positive assertion that P(God) is 0.50 (or that one’s confidence interval for P(God) includes 0.50). Which requires as much justification as any other declared P.

However, this does not mean you can always claim “everyone always bears a burden of evidence.” Because very often, particularly in matters that have been widely discussed, investigated, and explored for ages, the expected (even if minimal) burden has already been met. It is met in our background knowledge. And for God the priors are thus already established to be low. And that means anyone who wants to raise that prior, is the one who bears the burden of evidence to do so. Atheists have already met their burden. It’s on theists now to overcome it. Or admit they can’t.

49 comments

    1. Quote: “Wikipedia provides distinct entries on the Burden of Proof for philosophy and law. We are here concerned with philosophy. Which actually means, everyday life.”
      Granted, it only shows up in the 49th paragraph (under Conclusion), but still, READ THE ARTICLE!!

      1. I cannot read the whole article when a layman is writing about the burden of proof in any form without a legal education. You need to learn Torah before you talk on such issues. You cannot discuss any type of burden of proof without Torah. Remember Jesus is Abibus. Go and learn

    1. I have two questions which are unrelated, so I’ll make 2 separate comments.

      The first concerns the core of the matter. You say the background knowledge shows the low P of the existence of any god. But a theist will say: “But He (or She or It) created the world, that’s OUR background knowledge. So your b is negated by our b. So the burden of proof is still on the atheist making the claim, and NOT met!”

      Isn’t the tactic of Dr. Boghossian (unfortunately discredited due to a foolish endeavour) of challenging faith as a totally unreliable epistemological tool more fruitful?

      1. Of course. Because that statement is literally void of logic. To say “God created the world” is in b is literally, factually false. Because that is the hypothesis, h, not the content of the established facts, e or b. To confuse the two is a fundamental violation of logic. And only logically valid statements can meet any burden of evidence.

        If it were the case that we had over the last century extensively, scientifically confirmed the world was designed by an intelligence powerful enough to design and create universes, then that would be in b. And then atheists would bear the burden of evidence of disproving it (and would probably fail). But that’s not the world we live in. We find ourselves in exactly the opposite condition: where b conspicuously has failed to acquire that outcome, containing no such results or information, but instead has accumulated facts entirely the contrary.

        I discuss this error in Proving History with respect to the empty tomb (see the index), where apologists invalidly put “an empty tomb was discovered” in b or e, when in fact what’s in b or e is “late unsourced stories arose of an empty tomb being discovered.” That these stories were caused by an actual event is part of the hypothesis, h. Not a part of the evidence, e. Certainly not a part of the background knowledge, b.

        (Notably, this is in part why Gary Habermas has dropped the empty tomb from his minimal facts apologetic now.)

        Thus, burden of evidence is only created by valid inferences from e.b to P(h|e.b). Not invalid inferences, or having incomplete or fraudulent contents in e.b. The low prior for God is a valid inference from the actual contents of b. The Creationist asserting what you suggest is not making a logically valid statement at all about the contents of b or its effect on P. Thus their statement has zero evidential value. It therefore cannot raise the prior probability of God.
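To make the arithmetic of that concrete: in Bayesian terms, a statement only shifts P(h|e.b) if it is more expected on h than on ~h. A minimal sketch, with purely illustrative numbers:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(h|e.b) from the prior P(h|b) and the likelihoods."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# A statement equally expected whether h is true or false
# (likelihood ratio = 1) has zero evidential value:
print(posterior(0.01, 0.9, 0.9))   # equals the prior: the low prior is unmoved

# Only evidence more probable on h than on ~h can raise it:
print(posterior(0.01, 0.9, 0.09))  # roughly 0.09
```

So a logically invalid or factually false assertion, having no differential likelihood, leaves the prior exactly where it was.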

        To bring this back to one of the points I make in the article: arguments like that are why atheists bear no burden of evidence in this debate anymore. A theist who makes logically invalid and factually false statements is giving us zero reason to believe their claim. And we need no more reason to disbelieve their claim than that. Per the Hitchens rule.

        1. Thank you, Dr. Carrier. What I take away from your answer is that it is important, before the discussion even starts, to make clear distinctions about what the contents of h, b, and e are, so that no circular argument creeps in.
          Thanks for clarifying!
          Geez, logic is hard!

  1. When is such probability 100%?

    And doesn’t knowing something necessarily mean knowing it 100%? Otherwise you could be wrong about everything you claim to know (including this statement), hence you’ve given up knowledge.

    Thus spake the presuppositionalist.

    1. You actually could be wrong about (nearly) everything you know. It’s just highly improbable. The only time a belief can be 1 or 0 probability (other than approximately) is when it is literally impossible to be wrong about it, which means only immediate uninterpreted present experience (and even then statements about it can still be wrong, e.g. if you mis-describe it). See my articles Epistemological End Game and How Not to Be a Doofus for more.

      And indeed that’s one of the basic fallacies presuppositionalists commit: ignoring that it’s all about probability, not simply true or false or merely what’s possible.

  2. Great piece. I had a question with regard to Bayesian epistemology. I was taking a Philosophy of Science class in which the topic of Bayesianism was discussed. Many students objected to the notion of “degrees of belief” that they inferred from the calculated posterior probability. One person complained, “how can I have a 70% belief that a proposition is true?”

    I think I understand that what you would say is that an honestly derived 70% posterior probability is just an assessment that such a belief is correct 70% of the time. So in a sense this is a physical probability about epistemic accuracy (if that makes sense). I was wondering if one could also frame this in terms of warrant. So it might be true that a subjective state of a 70% strong belief is indeed nonsensical, as my classmate objected.

    But I was thinking that you could say the 70% posterior probability is about a degree of warrant, based on the inductions of the Bayesian calculation. The notion of warrant basically points to what you said about the frequency with which such a belief accurately corresponds to reality. Does the notion of ‘warrant’ work in this way when thinking about Bayesian posterior probabilities? Or am I confusing things by thinking of it in this way? Thanks.

    [I haven’t contributed for a while, so if you cannot post this I understand.]

    1. Yes. And I do discuss warrant in Proving History.

      But more pertinent to your question here is that we don’t have a “70% belief.” We have a nearly 100% belief that the probability of “what we believe” being true is 70%. This is covered late in chapter 6 of Proving History, but also check the index there for “confidence level” for more. The “degree of belief” is the confidence interval (e.g. “65% to 75%”), which you should aim to be nearly certain of (e.g. a confidence “level” of, say, 99.9%; what level of confidence you need depends on your risk assessment, and the confidence interval changes with confidence level, in a strict mathematical relationship).

      Analogously to poker: you can be totally confident (“nearly 100% certain”) that you have a 33% chance of winning the hand. Thus your “degree of belief” that you will win that hand clocks in at 0.33, which means you technically believe you won’t win the hand (the odds favor your losing), but are not certain enough of that to (let’s say) fold your hand (i.e. it might still be high enough to bet on that hand, which is not a question of probability but risk assessment, i.e. can you afford to lose, if the potential for return is that high?).

      A simpler way to put it is that if you have a degree of belief in x of 70%, then you are saying there is a 30% chance you will be wrong if you trust or assert x. That is high enough to worry about, but maybe not high enough to act as though x is false (the latter depends, again, on risk assessment: “am I willing to risk being wrong, when the odds of my being wrong are that high?”).
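That interplay between probability and risk assessment can be sketched as a simple expected-value check. The stakes and numbers below are invented purely for illustration:

```python
def worth_acting(p_right, gain, loss):
    """Act on a belief only if the expected value of doing so is positive.

    p_right: your epistemic probability of being right;
    gain / loss: invented stakes, for illustration only."""
    return p_right * gain - (1 - p_right) * loss > 0

# 70% confident, symmetric stakes: acting is rational.
print(worth_acting(0.70, gain=10, loss=10))   # True
# Same confidence, catastrophic downside: now it is not.
print(worth_acting(0.70, gain=10, loss=100))  # False
```

The same 70% degree of belief licenses the bet in one case and not the other; the probability is fixed, and only the risk assessment changes.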

      Psychologically, the human brain is not correctly wired for this. We do have neurological confidence levels (degrees of certainty and uncertainty that roughly map onto probabilities of being right or wrong), but they sit on an inverted bell curve, the exact opposite of reality. So we have a strong tendency to overestimate the odds of being right or wrong, and are uncomfortable with uncertainty, with probabilities in the middle. This is cognitive dissonance: uncertainty makes us uncomfortable, so we are continually motivated to push the probabilities to the outer edges (to “make a decision” and be done with it), i.e. to just settle on a statement being true or false. That error causes most human misery and mistakes. It was crudely useful for the dumb apes we used to be, since indecision can be more frequently fatal than a wrong decision; but that replaces being right with being alive, which is not a good trade if your goal is knowing what’s true (now that we have social systems and technologies that can keep us alive in a state of indecision in many more cases).

        1. Wow… phenomena like the Trump vote, Brexit, and the new nationalism in Europe make more sense seen in this light… Of course, there’s much more to every phenomenon just mentioned, but still, the overall tendency toward certainty, toward true or false, against uncertainty is striking!

    2. And yes, you are right that semantically a posterior probability, using high confidence estimates for the inputs (priors and likelihoods), for any x of 0.70 literally indicates that given the kind of information you have at that time, to a near 100% probability, x will turn out to be true 7 out of 10 times.

      Hence degree of belief is simply the converse of frequency of error.

  3. 2nd question, a bit on the side. I still struggle with the distinction “physical probability” vs. “epistemic probability”. As far as I understand it, “physical probability” is the actual probability of an event happening. But how can we know that it is “actually happening”? I mean, everything we see, hear, or measure has to be processed by our brain, hasn’t it? That means that there is ONLY epistemic probability. The claim that there is a “physical probability” has a very high probability indeed, but it’s only the function of an epistemic probability!

    1. Reply to my own 2nd comment: “physical probability” DOES exist independently and objectively. If all mankind perishes tomorrow, the probability that the earth rotates 360° on its axis (as it has always done), thus making “the sun come up”, is still very close to 100%. Nature doesn’t care if we are there to perceive it.

      On the other hand, literally ages ago, the epistemic probability of the earth rotating on its axis was next to 0% – because of lack of evidence.

    2. Epistemic probabilities actually reduce to estimates of physical probabilities, which is why the more we know about the pertinent physical probability, the more our epistemic probability converges on that physical probability.

      This is most obvious in well structured games of chance, like poker. Where you know the physical probability of drawing a king off a freshly shuffled deck (to a near 100% certainty; “near,” because of the possibilities of a rigged or incomplete deck, etc.). And so your epistemic probability that the card you draw will be a king will essentially be identical to that physical probability (with just some margin of error to account for mistaken assumptions, like that the deck isn’t missing cards or wasn’t honestly shuffled).
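Numerically, that margin of error works like this. The 99% figure for an honest deck is just an assumed stand-in, and the rigged-deck case is set pessimistically for illustration:

```python
# Physical probability of drawing a king from an honest 52-card deck:
p_king_honest = 4 / 52

# Epistemic adjustment: suppose we are only 99% sure the deck is honest
# and complete, and (pessimistically, for illustration) assume a rigged
# deck yields no kings at all.
p_deck_honest = 0.99
p_king_if_rigged = 0.0

# Total epistemic probability, marginalizing over the deck's state:
p_king = p_deck_honest * p_king_honest + (1 - p_deck_honest) * p_king_if_rigged

print(round(p_king, 4))  # 0.0762, slightly below the physical 4/52 (~0.0769)
```

The better our information about the deck, the closer p_deck_honest gets to 1 and the closer the epistemic probability converges on the physical one, which is the convergence described above.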

      I discuss this in detail, with many real world examples, in the latter half of chapter 6 of Proving History.

      But epistemically you are right, we can only ever know estimates of the actual probabilities; we are never in a condition of 100% certainty that what our information indicates a physical probability to be is correct (e.g. that the information is complete, entirely accurate, etc.). We can only be more and less certain of that. The entire science of statistics is all about that.

  4. What if I say it is 100% certain that Abraham Lincoln was a President of the United States in the 19th century? That would seem correct unless you factor radical skepticism (the brain-in-the-vat hypothesis) into the equation. What am I missing here?

    1. Correct. Nothing like that is literally 100% certain, precisely because there are so many ways it can be wrong, even if only bizarre ones. But it is effectively 100%, in the sense that the probability of it being false is vanishingly small and thus well below any margin that usually matters for making decisions or declaring confidence. And it is that way precisely because the only logically possible ways it can be wrong (given the information currently available to you) are all bizarre. Which is just another word for extremely improbable.

      Nevertheless, you can imagine evidence that, were it to come to light, would convince you indeed it was false after all. Which entails the probability of it being false cannot be literally zero. As the mathematical consequence of an absolute zero probability is that even infinite evidence could never refute it or change your mind about it, which would be irrational. Nevertheless, that such evidence will arrive is so improbable you can safely ignore the possibility. It’s even more irrelevant to your life than the possibility of your being abducted by aliens tomorrow.
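The mathematical point is easy to verify in the odds form of Bayes’ theorem: prior odds of zero stay zero under any finite evidence. A toy illustration, with invented numbers:

```python
def update(prior, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds * LR."""
    if prior == 0:
        return 0.0  # zero prior odds survive any finite likelihood ratio
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Even a likelihood ratio of a trillion cannot move a literal zero:
print(update(0.0, 1e12))   # 0.0
# But a merely tiny (nonzero) prior can be overwhelmed by strong evidence:
print(update(1e-9, 1e12))  # about 0.999
```

Hence assigning a literal zero (or one) is irrational for any empirical claim: it amounts to declaring in advance that no possible evidence could ever change your mind.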

      I discuss this at length in the article I linked to in this piece on How Not to Be a Doofus. Also pertinent is my discussion of Cartesian Demons I also linked to, and why we can rule them out even though they technically have a nonzero probability.

  5. Hello Dr. Carrier, I just happened to watch a video titled “Hangout : Discussing Dr. Richard Carrier’s blog post on my argument” by Steve McRae.

    Would you consider being on his channel or on Nonsequitor again to discuss this issue of “burden of proof”?

    Thanks,
    James Smith

  6. If some random fool makes some random claim, and chooses not to point at any evidence, then their choice not to point at evidence is evidence. If they had evidence they would offer it, since their purpose in making the claim is to be believed, and a claim accompanied with evidence is more believable. So they probably don’t have evidence. At this point, as the person encountering the claim, I have a choice between saying nothing, researching their probably-random-lie, or saying “the burden of proof is on the one making the claim” or something functionally equivalent.

    Researching their probable-lie is bad from a time management point of view. Making up plausible garbage is much less time consuming than investigating it.

    Saying nothing is bad because it doesn’t give them the opportunity to state their evidence, and it doesn’t give any signal to the usual horde of gullible listeners that the bald assertion is probably false.

    So I think “the burden of proof is on the one making the claim” is a valid criticism when the person making the claim hasn’t offered any evidence at all. Maybe it is misleading and I should say “Try again with something better than a bald assertion” instead.

    1. Indeed. That’s exactly what I said (“if someone has not met a burden of evidence requisite to believe a claim, then no further burden of evidence need be met to reject it” etc.).

      In practice, though, usually people don’t do that, but offer up a plate of what they claim is evidence but isn’t. The effect is nevertheless the same. What they are then asking though is “why is my evidence not good enough?”

      And indeed, the matter of “why bother” is separate from the formal reality of the engagement. You are under no obligation to correct a fool (unless the circumstances entail you must, e.g. to save lives), although society works better if some of its personnel dedicate themselves to doing that, so that you can then benefit from our work: the advantage of the division of labor that all civilization is built on. In a sense that’s a role I play in many issues I have competence in. It’s in part what my Patrons fund me for.

      But apart from that, if someone says x is true and I have the burden of disproving it, but in fact I already have (which is why I don’t believe it) or they haven’t presented any burden to rebut (leaving x unproved), they are wrong. Their failure to competently defend x already meets the burden against believing x; the more so if I’ve already researched and disproved x. This may not change their mind (perhaps they are ignorant of all the evidence and reasons x is false), but they cannot have any rational expectation of changing ours in such a condition. We already know x is false, or not shown to be probable even by their own argument. So for us, the conversation is over. Unless we want to engage further to change their mind.

      Which is often a waste of time, because usually such people are delusional and thus impervious to evidence and reason—that just isn’t always the case, so engagement is still sometimes worthwhile. The more so to impact an audience watching the debate, as you will be persuading fence sitters and inoculating potential victims against forming the same delusion. Which is the real value of public discourse.

  7. Richard Carrier is not a lawyer but feels very comfortable writing as if he is one on the burden of proof. How can he talk on religion and not know Talmud and have no understanding of hermeneutics? You cannot talk about religion and not know what is the curse of Canaan. Richard, learn what the curse of Canaan is before you talk about God. Canaan’s curse was a cubit. Go and learn.

    1. This article isn’t about the legal definition of burden of evidence. It outright says it isn’t. Learn how to read. And never comment on articles you haven’t read.

      The rest of your rant is crazy tinfoil hat. Go see a therapist.

  8. Frederic Christie January 25, 2019, 7:19 pm

    It gets more complicated when we take into account that the “burden of proof” is also sometimes a matter of context.

    Many atheists, like The Messianic Manic, don’t take the stand that they are utterly confident God doesn’t exist and that others should change their mind. Rather, their claim is “I haven’t encountered any description of God that is coherent enough to consider, so leave me alone”. To them, as with many of us, the burden of proof is in effect a social contract. “Fine, I will listen to you for a second”, we say, “but you better get to the good stuff or I won’t change my mind”.

    In many cases, we’re discussing changes to policy and behavior. I’ll put a much lower premium on being skeptical about the existence of Vogons in theory, and be willing to tentatively accept that theory on a lower probability, than the idea that we should begin to prepare for a Vogon invasion. I’d say that I’d need to be more than 90% confident that the probability of the existence of Vogons is higher than 70% to believe in their mere existence, but to be swayed to put money into anti-Vogon defenses to protect us from being turned into a hyperspace bypass, I’d want to be >99.9% confident that the probability of Vogons invading was >99.99%.

    All of this means that the burden of proof can be used as a logically valid rhetorical strategy, based on the discussion at hand. If we’re discussing gay marriage, for example, I can reasonably say “I don’t need to show that God doesn’t exist, or that God doesn’t dislike homosexuality, or that homosexuality is actually beneficial. I’m not the one asking for homosexuality to be forced onto people. My opponent wants it banned. It is his/her duty, therefore, to show that gay marriage is harmful, for whatever reason, to a high enough probability to justify the state discriminating against a group of people”. And that’s not just the case in law, either, but in any discussion that warrants important changes. A valid response to Pascal’s wager is that the costs of betting on God are actually non-trivial, since they require accepting certain propositions that dictate certain actions.

    1. Right. And I covered that in the “no evidence” condition in the article, too.

      Although it is not valid to say “I haven’t ever encountered a coherent description of God.” That would have to be a literally false statement.

      Of course even if it were true, it’s still only true to a probability (i.e. there is always a nonzero probability one of those definitions was coherent and you mistook it as incoherent). But it’s obviously false. We ourselves can construct coherent definitions of God. And do so constantly. That’s how we disprove them. That theists then after the fact “re-tool” their God definition to evade those disproofs by retreating into the territory of incoherence is only an example of their presenting zero data to update our low priors for the existence of gods, and then falsely (whether dishonestly or fallaciously) claiming they gave us data that should update our priors.

      1. Frederic Christie January 29, 2019, 8:34 pm

        I guess it depends on how strict we set our criteria for intelligibility and what we think of as “having heard”. Me, I have yet to encounter a description from any modern theist that isn’t some kind of “disembodied mind that does stuff through mechanisms we can’t explain” explanation. In fact, to the best of my knowledge, I can’t recall encountering a description of God that wasn’t internally contradictory. (And I don’t even mean the trinitarian “one is three and also one” problem, but more that God will slide back and forth effortlessly from vague creative force to very specific entity making us in His image on a dime). You’re obviously right that this came about as a result of endless goalpost shifting, but it leads folks like me and TMM to just rarely encounter theists who will lay down a definition that we find coherent.

        I do admit, though, that TMM will often say that he finds a disembodied mind incoherent or something similar, which to me seems not so much incoherent as just impossible. I can imagine Escher’s Waterfall painting in my mind just fine, and I can possibly even imagine some coherent well-defined laws of physics that would make that world consistent, but it just doesn’t cohere to our world. The same may apply to a disembodied mind. That having been said, I think TMM is arriving at the same point you often make about things like disembodied minds: they may be logically incoherent because they’re internally inconsistent. A disembodied mind asks us to imagine a mind without the stuff that does a mind, which is like trying to visualize a football game without any players.

        In other words, I think it’s been so long that anyone really consistently imagined actual magical humanoids on top of Olympus or something of the sort that it’s possible to say that for hundreds of years theists have as a result of all that goalpost shifting been presenting their jury-rigged God from the outset. In my experience, I’ve never encountered a God explanation without obvious ad hoc excuses built in from the outset. The fact that it gets worse when theists are pressed in my mind doesn’t change that.

        1. One must distinguish incoherent from incomplete hypotheses.

          Almost all hypotheses are incomplete. We don’t, for example, really know why photons exist or behave as they do, or what the mechanism is that realizes both. We have ideas now (e.g. string theory), but even they are incomplete (e.g. what are strings made of? Why do they behave that way? Why do they exist?), and remain unverified. The truth of all the physics we have verified of photons does not depend on having any of that worked out. Ditto, gods.

          So it is not incoherent to say a god exists who can do certain things, and not have worked out how he does those things. It only becomes incoherent when you positively affirm an explanation for how he does those things, that actually contradicts other parts of the model. Incoherence requires an actual contradiction. Not merely an incomplete hypothesis.

          Moreover, suspecting incoherence, does not establish incoherence. Thus, I do indeed suspect supernaturalism is incoherent (see The God Impossible). But neither I nor anyone else can demonstrate this is the case. So it remains epistemically possible a disembodied god is coherent. I therefore cannot dismiss it merely on my suspicion of its incoherence.

          I have not only found most people’s definitions of god are perfectly coherent, or not demonstrably incoherent, but I myself have constructed many such models, to argue charitably against the most plausible god hypotheses.

          But yes, I have also run into incoherent god models, but usually only as a result of maneuvering, i.e. the theist starts with (and usually maintains) a coherent definition, but as soon as we point out how that model contradicts observation, they “make up” excuses for that that permit them to escape the resulting cognitive dissonance. It is when they make up those excuses as a psychological defense, that they often end up producing an incoherent model (because they rarely stop to check the resulting, modified model for coherence). Because it is a defense mechanism, they will drop the incoherent addition as soon as its incoherence is demonstrated, and will retreat back to the original definition. Until you point out that still contradicts the evidence. Then they will try to “make up” a new excuse. And maybe this time it will at least be coherent; or maybe it will be incoherent in a new way. And so on.

          Which gets to another important distinction:

          One must also distinguish internally incoherent hypotheses, from an incoherence only generated by the combination of a hypothesis, evidence, and conclusion. That’s not the same thing. Coming up with a hypothesis that contradicts the evidence, is not coming up with an incoherent hypothesis. The incoherence is between the hypothesis and the evidence. Not within the hypothesis. And that incoherence only arises when an invalid probability is assigned to it. Because an elaborate enough hypothesis can explain away any evidence that contradicts it, i.e. we can make any hypothesis cohere with any body of evidence. The problem is that doing this generally produces an extremely low epistemic probability for the newly gerrymandered hypothesis. So it is only when someone asserts a high probability to it, that we are looking at an incoherent result. But the result is incoherent, not the hypothesis.
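The cost of that gerrymandering can be put in arithmetic. Each ad hoc rescuing assumption is an extra conjunct, and (treating them as roughly independent, with purely invented probabilities) the prior of the conjunction shrinks multiplicatively:

```python
# Purely illustrative numbers: a hypothesis granted an even prior,
# then patched with three unevidenced excuses, each generously given
# a 1-in-5 chance of being true.
base_prior = 0.5
excuse_probs = [0.2, 0.2, 0.2]

gerrymandered_prior = base_prior
for p in excuse_probs:
    gerrymandered_prior *= p  # each added conjunct multiplies the prior down

print(round(gerrymandered_prior, 6))  # 0.004
```

The patched hypothesis now coheres with any evidence you throw at it, but only by paying for that coherence in prior probability; asserting a high probability for it anyway is where the incoherence of the result comes in.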

  9. Enjoyed this (and the doofus) article a lot – thank you! – and am still chuckling over the Vogons reference.

    And here I thought I had put my foot down rather firmly about only using Harry Potter analogies because, you know, of the multiple attestations… but… I may have been wrong about that. 😉

  10. Update: Since Steve McRae does occasionally espouse probabilistic epistemology (he just doesn’t use it in the linked video, where he instead explicitly states a premise to the contrary in his arguments against his opponents), I have emended the relevant paragraph to account for the fact that he slips in and out of different epistemologies when making various arguments.

    Nevertheless, in his debate over burden of evidence, McRae explicitly does not use probabilistic epistemology but states a binary epistemology. He perhaps does not realize the logical significance of confusing ontological facts (which are binary) with epistemological realities (which are not) in discussions of epistemology.

    Hence the whole point of my article from line one: in epistemology, it’s all about probability. Whoever ignores that errs; before they’ve even started the discussion, they’ve gone off the rails. I then proceed to show how not framing a burden debate in probabilistic terms leads to talking past each other and getting nowhere.

  11. Richard, I am a physician scientist. In my professional life, I recognize a 95% or greater probability as ‘true’. In other words, a p value less than or equal to 0.05. If I were a physicist, I would recognize a p value of 0.000000000001 as needing to be met for ‘true’ (e.g., a finding in quantum physics or astrophysics). This means that for physics we need a 99.999999999999% probability to publish. I abhor the words belief and faith for their tendency to be abused by equivocation. Language of course is not meant for ‘gotchas’ simply by the fallacy of equivocation. So as you can imagine, statements such as “I don’t believe in evolution” are as rational as me saying to you “I think your urinary symptoms are related to your prolapsed uterus”.

    There are probably over one billion priors in support of the theory of evolution with zero pieces of data that disagree. And this is of course the power of the scientific process. If a piece of data is shown to be ‘the exception to the rule’, based upon the billions of priors, this new piece of data does not take down evolution but actually generates a new, offshoot hypothesis of evolution. And this is what the new scientist in the field seizes upon. Makes new guesses. Does more searching. Gets more evidence. And what happens, e.g., is that genetic drift is discovered. And it turns out it’s not really an exception that proves the rule but a nuance that reshapes the rule. Hence, evolution just gets stronger. This is why, e.g., string theory really is misnamed. It’s really a hypothesis right now.

    Anyway, back to language. When I am determining my reality, either in regards to the past or the present, using priors, claims, new data and Bayesian reasoning, I either end up not knowing something (meaning I don’t reach a 95% or whatever probability I require) or I do know something. I believe nothing about the present or the past. I either know or don’t. As for the future, based upon all I know and don’t know, I can then make predictions as to the probability I will arrive at to either know or not know. This is belief to me. But why use that word if we have a better one? And prediction is that better word. It is only for the future that I predict (believe) what I will or will not know. This is why time is important. And of course how and why we collect our evidence, word our hypotheses, and build up our theories is why our future predictions become so important. Because evidence is found out in the next ‘now’, which becomes the present and then becomes a part of our past, which means a part of our building priors. And this is why faith (knowing things despite the evidence) is so destructive.
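    The updating loop described here, where evidence found in the next ‘now’ becomes part of our building priors, can be sketched in a few lines. This is only an illustration: the likelihoods (0.7 and 0.3) and the number of updates are invented for the example, not values from the comment.

```python
def update(prior, p_data_given_h, p_data_given_not_h):
    """One Bayesian update: return P(H | data) from P(H) and the two likelihoods."""
    numerator = prior * p_data_given_h
    return numerator / (numerator + (1 - prior) * p_data_given_not_h)

# Start completely uncertain, then fold in five pieces of evidence,
# each modestly favoring the hypothesis (70% vs. 30% likelihood).
belief = 0.5
for _ in range(5):
    belief = update(belief, 0.7, 0.3)  # yesterday's posterior is today's prior

print(round(belief, 3))  # ~0.986: accumulated past evidence has become a strong prior
```

Each pass feeds the posterior back in as the next prior, which is exactly the sense in which the past builds the priors we predict with.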

    The equivocation that occurs with faith is when the user switches to the meaning trust. And this is where our personhood, consciousness, and the depth of our empathy come into play. We have to trust each other’s view of reality. Why? Because we can’t do it all as individuals. But if you get four people who trust each other’s empathy, this means you have four brains that see how things ratio in the past and now, and that means as a group they can predict the future well together. When these four people are John, Paul, George and Ringo you get free entropy. You get the correct geometry of information. You get a perfect 1111111 Hamming code. You get the Beatles. But what if one of those people is Trump? And what is a psychopath? It’s a mammal that has no functioning paralimbic system. You have a thinking reptile. What goes out the window? Empathy. Which means what? How that person sees ratios between things. And what they see is not the issue but rather just themselves. Why? Because they have no empathy, which means they think everyone else is just making shit up too. This is why Trump is the opposite of the Beatles in his world. He sold himself as the leader of the Beatles. He had the best people. See? The reason I’m laying this foundation is the thread of comments after the Atheist Experience show a few weeks back. This one person kept shouting everything down: ‘The problem of induction.’ ‘Science has just as much faith as religion.’ ‘Just because gravity works today doesn’t mean it’s true. It’s not knowledge. Because you don’t know it will happen tomorrow. Read Hume.’ https://freethoughtblogs.com/axp/2018/12/23/open-thread-for-episode-22-51-mr-atheist/#comments

    My point is: isn’t the whole point of what you’re saying in this post and in your books, and what I’m trying to say above, not only the answer to Oreoman1987 but exactly the difference between religion and science in determining the probabilities of reality? And this is why people with NPD like Trump are evil: because they are wasting time and energy by deliberately making irrational connections that end up wasting said E=mc2.

    And when you do it on purpose it’s evil. When you do it by accident you are ill or delusional.

    But it’s the concepts of Robin Dunbar and what we see in such areas as complexity (e.g., Geoffrey West and scaling) and Shannon entropy vs. algorithmic (Kolmogorov) entropy that explain why all reasoning becomes quaternary (true and false positive, true and false negative) via Bayes once you get a system that is self-referential. Prior to a self-referential system, things are binary, polar, digital and determinable by simple deduction and reductio ad absurdum. But once you get above enough Planck units such that curves, time, and matter emerge from the underlying energy (i.e., entropy), you develop a self-referential system in which Gödel’s incompleteness ideas and the halting problem start to get involved. Then as you increase in scale you can be fooled by the idea of design. Confused indeed, until you get all the way to the universe and to black holes. And isn’t it interesting that as we learn more about black holes it appears we see a circle happening. First going down from 4 to 1 dimension. Then white holes. Complexity. To another Big Bang.

    And that’s why Bayesian reasoning is so important. It works fine if you have a binary situation such as only true and not true. But once you get self-referential, and thus get false and not false added, you need to compensate for the faith-based false positive (or false negative if you are considering the null hypothesis).

    Anyway…

      1. I’m exaggerating, of course. As a clinical scientist (e.g., chemotherapy trials), and even in basic science immunology and oncology research, I never encountered a need for a p value smaller than 0.05 to determine significance between results. But as I’ve learned more and more about the sciences toward the bottom of the pyramid, because of the accuracy of measurements, I have often read that in these equations the differences in results went as far as the 12th decimal place. That’s the full extent of why I picked that outrageous number.

        I am thinking along the lines of what this blog from Scientific American is proposing: https://blogs.scientificamerican.com/observations/five-sigmawhats-that/

        Specifically this paragraph:
        “High-energy physics requires even lower p-values to announce evidence or discoveries. The threshold for “evidence of a particle,” corresponds to p=0.003, and the standard for “discovery” is p=0.0000003.”
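        For reference, those sigma thresholds can be converted to tail probabilities with nothing but the standard normal distribution. A minimal sketch (assuming one-tailed tests, as is conventional in particle physics; the quoted p = 0.003 for “evidence” is approximately the two-tailed version of the ~3 sigma threshold):

```python
import math

def one_tailed_p(sigma):
    """One-tailed p-value for a result `sigma` standard deviations out,
    under a standard normal null distribution."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

print(one_tailed_p(3))  # ~0.00135   ("evidence of a particle" is ~3 sigma)
print(one_tailed_p(5))  # ~0.00000029 ("discovery" is 5 sigma)
```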

  12. First post, and I loved OHJ. Interesting post on your point, and as a quantitative social scientist it is good to see you pushing Bayesian thought forward. But I am wondering how your reliance on and insistence of the primacy of Bayes would stand in the face of the fact that frequentist approaches, which ignore priors and simply test the null hypothesis, are still around with good reason. As Neyman told us in 1937, different theories of probability are used in different settings. A clinical trial, for instance, would not really want to use Bayes, as it should be an experiment where the data has to point to the effectiveness of the drug. That’s why these studies often set alpha at p < .001 or p < .01 as opposed to the conventional p < .05. I will concede that the spirit of Bayes is reflected in the observation that good experimental designs consider covariates, but I would take exception to the idea that all arguments underpinned by probability theory can be reduced to Bayesian logic.

    1. Frequency statistics is fully subsumable within Bayesian epistemology. Just as Newtonian mechanics within Relativity. Indeed most formal uses of BT employ FS for its premises. And even those that don’t, are in some fashion approximating FS in the formulation of their premises.

      You can check Google Scholar for many articles on why frequency statistics is incomplete and problematic outside a Bayesian framework. And it’s unusable as an epistemology. Its principal defect is that it never generates an epistemic posterior probability that your hypothesis is true. Which is why it cannot function as an epistemology. It only ever generates a probability that your data was not produced by chance. But the converse of that is not your hypothesis. And thinking it is has led to numerous serious mistakes and failures in science over the last hundred years.

      Bayes’ Theorem is needed to get from “this data was probably not caused by chance” to “this hypothesis is probably true.” Frequency statistics can only get you the premise. Not the conclusion.
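      That gap can be made concrete with a toy calculation (all numbers below are invented for illustration): even a result “significant at p = 0.05” can leave the hypothesis improbable when its prior is low.

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Bayes' Theorem: P(H | data) from the prior and the two likelihoods."""
    numerator = prior * p_data_given_h
    return numerator / (numerator + (1 - prior) * p_data_given_not_h)

# A long-shot hypothesis (prior 1%) that produces the observed data 80% of
# the time if true, where chance alone produces it 5% of the time (the
# frequentist "p-value" threshold):
print(round(posterior(0.01, 0.80, 0.05), 3))  # ~0.139
```

So the “significant” finding moves the hypothesis from 1% to only about 14%: frequency statistics supplied the 0.05 premise, and Bayes’ Theorem did the rest.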

      1. Thanks for your reply. I don’t really disagree, and I take your point about stats/methods not being the same as epistemology. Also, you have a point when stating it “has led to numerous serious mistakes and failures in science over the last hundred years.”

        There are two points to add in my opinion in relation to Bayes versus Freq. stats (not epistemology):

        1 — When researchers choose whether to use frequentist or Bayesian stats, most methods experts, including Bayes proponents, will suggest the former when it’s impossible to estimate the priors or when past research doesn’t allow for a good approximation.

        2 — Frequentist testing is more than just p values or “this data was probably not caused by chance.” When studies are appropriately powered, consider theory-backed covariates in multivariate models, report effect sizes (the association between each predictor and the dependent/criterion variable), etc., we can get pretty close to good data supporting the alternative hypothesis. But you’re 100% right that the tests themselves are exclusively looking at the null hypothesis (sample to population).

        1. On 2, that is covertly Bayesian. Just because the logic of inference is concealed, doesn’t mean it isn’t there. For example, when conclusions are reached about hypotheses being more likely than alternatives (based on such evidence as a study being “highly powered” and so on), the only logically valid way to be doing that is Bayesian, e.g. assuming other causes, fraud, experimental design flaw, etc., which are all competing hypotheses, have low enough priors to disregard. Otherwise, frequency statistics can produce no logically valid argument for why our hypothesis is “more likely” than any of those others (much less more than the entire probability space they must collectively occupy).

          On 1, there is a lot of literature challenging that assumption. It simply isn’t the case that we have no way of producing disciplined priors based on boundary assumptions we are already making that are also data driven (e.g. the known limits on base rates of fraud, experimental error, failure to replicate, theory overthrow, and so on). In fact, if we could not be doing that even covertly, it is logically impossible for any evidence to confirm any hypothesis. Per my point about 2. So the only difference is whether we are being open and honest about our assumed priors and what they are based on, or if we are pretending we are making no assumptions about priors when in fact we always are.

          More on this point here and under “concluding observations” here.
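          The competing-hypotheses point in these replies can be sketched numerically. Every prior and likelihood below is an invented illustration of the logic, not a real base rate; the point is only that “real effect” must beat the whole probability space its rivals collectively occupy.

```python
def posteriors(hypotheses):
    """Normalize prior * likelihood across competing hypotheses
    (Bayes' Theorem over a partitioned hypothesis space)."""
    weights = {name: prior * lik for name, (prior, lik) in hypotheses.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# (prior, P(observed data | hypothesis)) -- all values made up for illustration
result = posteriors({
    "real effect": (0.10, 0.80),
    "chance":      (0.80, 0.05),
    "fraud":       (0.02, 0.90),
    "design flaw": (0.08, 0.50),
})
print({k: round(v, 3) for k, v in result.items()})
# "real effect" ends up at only ~0.45: the rival hypotheses collectively
# still occupy over half the probability space despite a "significant" result.
```

Declaring the hypothesis “more likely” than those rivals is therefore covertly assigning them priors, whether or not the researcher writes the priors down.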

  13. It would not surprise me if I am not in the right spot to get the correct information, but I was directed to you when I asked where I can find original documents from the first half of the first century A.D., namely the Roman court records from Jerusalem.

    Thank you for your time.

    Jim

  14. Comment to Steve McRae, adapted from a February 2019 Facebook thread: “BD argues straight up anything that lacks a belief is an atheist and does NOT limit it to just agents who can evaluate the claim” (and therefore includes rocks as atheists) is analytically valid. It would only be invalid if BD were limiting it to just agents who can evaluate the claim (as then there would be a contradiction: including entities not actually referred to by the term). So the whole debate over whether “rocks are atheists” is empirical, not analytical, and a logical analysis is unhelpful here.

    What you and she are actually arguing about, on various different occasions, is whether her expanded-set definition is “more useful” (as well as arguing over what outcome measures make it so) or “more commonly in use” (which I see neither of you doing any empirical work on to determine). Those two arguments are over fact, not logic. Only evidence can determine whether a definition is useful or not by one criterion or another, and only evidence can determine which criterion matters more than another—as in, what people more generally want or need words to do, since “what people want or need words to do” is a question of fact, about people and their wants and needs. Likewise, how a word is more commonly used, is a question of fact (one has to reliably observe or poll linguistic populations and their usage).

    “Agnostic” makes a good test case here. BD argues against the “agnostic as middle” by arguing people “should” adopt the “original” definition of agnostic. She gives no good reason why people should do that; and I’m pretty sure the observational and statistical evidence today conclusively shows the “agnostic as middle” has become the more common definition in use (as happens: word usage changes over time). So she is wrong there not as a matter of logic, but simply as a matter of empirical fact.

    I think the same falls out for her usage of atheist, although I’m getting the impression she is trying to accede to the “rocks are atheist” position to accomplish something else: namely, her argument for reviving the original definition of agnostic so as to prevent people from calling her one. I see that as being as futile as your wanting to prevent people from calling you an atheist. In actual fact she is an agnostic by some definitions, just as you are an atheist by some definitions, and you should both just get comfortable with that fact and stop trying to construct logical arguments to defy the facts of actual common usage on that point.

    There’s nothing wrong with being an atheist or agnostic by some definition or other. One shouldn’t be so worried over it. All that matters in practice is whether we are using words in a way that our audience will understand (and therefore we have to speak to them in their language, not our own) and are treating what an audience says by the definitions they use and not our own (and therefore we have to be able to translate their language into ours, and not interpret their language as if it were ours). Any other arguing over the meaning of words is a waste of time.

