I’ve started to accumulate a lot of evidence that consistently supports a singular hypothesis: only those who don’t really understand Bayesianism are against it. Already I’ve seen this of William Briggs, James McGrath, John Loftus and Richard Miller, Bart Ehrman, Patrick Mitchell, Tim Hendrix, Greg Mayer, Stephanie Fisher, Louise Antony, even Susan Haack and C. Behan McCullagh (Proving History, pp. 272-75; cf. pp. 100-03). And I must also include myself: I was entirely hostile to Bayesian epistemology, until the moment I first actually understood it during an extended private argument with a Bayesian epistemologist. And to all this we now must add philosopher of science Peter Godfrey-Smith.

A while back Godfrey-Smith wrote a fairly good textbook called Theory and Reality: An Introduction to the Philosophy of Science (University of Chicago Press, 2003). In it, he has a section explaining Bayesian epistemology, and his reasons for rejecting it “for something else.” Something else…that’s actually just reworded Bayesianism. That he doesn’t notice this is yet more evidence that he only rejected Bayesianism because he didn’t understand it. And we can demonstrate that’s true by analyzing what he says about it.

Godfrey-Smith on Bayesianism

The relevant section is Ch. 14, “Bayesianism and Modern Theories of Evidence,” pp. 202-18. Godfrey-Smith leads that chapter by pointing out Bayesianism is actually “the most popular” epistemology now in the philosophy of science (p. 203). But he isn’t betting on it, for really just one reason, which he then attempts to explain.

Can We Avoid Prior Probabilities?

Godfrey-Smith’s principal objection is the problem of subjective priors. Which of course isn’t really a problem—or is one only in the same way it’s a problem in every epistemology, and it is no harder to “solve” in Bayesian epistemology than in any other (see my discussion in Proving History, pp. 81-85). But as Godfrey-Smith puts it, “The probabilities” in Bayes’ Theorem “that are more controversial are the prior probabilities of hypotheses, like P(h),” because, he asks, “What could this number possibly be measuring?” He complains that no one can “make sense of prior probabilities” (p. 205). That’s false; and it entails he does not understand Bayesian epistemology, and cannot have really read much of what Bayesians have already said about what priors measure and how they are determined.

All prior probabilities are just previously calculated posterior probabilities. We usually just skip the long arduous math of running calculations from raw, uninterpreted experience all the way to worldview conclusions (like that solid objects and other minds exist), and thence to background knowledge (like the laws of physics and human behavior), and thence to specific starting points in reasoning about the base rates of phenomena. We just skip straight to the base rates of phenomena, and construct all our priors from observed physical frequencies. In particular, they are the frequencies drawn from how things have usually turned out in the past. (As I’ve discussed many times before, especially in respect to Burden of Proof, Worldview Assumptions, Fabricating Crank Priors, and guides to How to Reason Like a Bayesian, and why No Other Epistemology can validly Avoid Prior Probabilities.)
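For reference, since I’ll be talking in these terms throughout: writing e for the evidence to be explained and b for our background knowledge, one standard way to write Bayes’ Theorem is:

$$
P(h \mid e.b) \;=\; \frac{P(h \mid b) \times P(e \mid h.b)}{\big[\,P(h \mid b) \times P(e \mid h.b)\,\big] + \big[\,P(\lnot h \mid b) \times P(e \mid \lnot h.b)\,\big]}
$$

The prior P(h|b) is just the base rate, in our background knowledge b, of h-like claims turning out to be true; the likelihoods P(e|h.b) and P(e|~h.b) are how expected the evidence is if h is true or false.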

The famous “Mammogram Example” illustrates the point: doctors were dangerously over-prescribing mammograms because they were ignoring the role of prior probabilities. They ignored the rate of false positives and its relationship to the base rate of even having cancer in the first place, and so recommended so many mammograms that false positives vastly outnumbered real cancers, producing a rash of needlessly expensive and risky procedures and causing (sometimes deadly) patient stress. Here the priors were constructed from hard data on base rates of cancers and false positives in mammograms. No one could be confused as to what the “prior probability could possibly be measuring” here. It’s measuring the base rate of the proposed phenomenon: having breast cancer (see Visualizing Bayes’ Theorem). And this is what priors always are: an estimate of the base rate of whatever phenomenon is being proposed. The only difference is that sometimes we don’t have good data and thus our estimates are closer to even odds; or we have so much data showing only one result that it’s easier to just operate as if the contrary result has a prior of zero. But either way it’s still just an empirical estimate of a base rate, a frequency.
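To make the arithmetic concrete, here is a minimal sketch of that calculation in code, using illustrative round numbers (a 1% base rate, an 80% test sensitivity, and a 10% false positive rate; these are placeholders for demonstration, not actual clinical figures):

```python
# Illustrative numbers only (placeholders, not actual clinical statistics):
#   prior               = base rate of breast cancer in the screened population
#   sensitivity         = P(positive test | cancer)
#   false_positive_rate = P(positive test | no cancer)

def posterior_given_positive(prior, sensitivity, false_positive_rate):
    """P(cancer | positive test), by Bayes' Theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return (sensitivity * prior) / p_positive

print(posterior_given_positive(prior=0.01,
                               sensitivity=0.80,
                               false_positive_rate=0.10))
# ~0.075: even after a positive result, the probability of cancer is only
# about 7.5%, because false positives swamp the low base rate (the prior).
```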

And this holds whatever the hypothesis h is, as distinct from ~h (which means the falsity of h and thus the reality of some other cause of all our pertinent observations). For example, h “Jesus rose from the dead” vs. ~h “false tales and beliefs arose of Jesus having risen from the dead” are competing hypotheses. Their prior probabilities will be the most analogous corresponding base rates for this kind of question: Whenever we’ve been able to find out in any other case, how often, usually, do we observe these kinds of miracle tales and beliefs turning out to be true, rather than false? The answer is “never h” and “always ~h to the tune of thousands of times.” Worse, all the entities and powers and motivations required of h themselves have absurdly low base rates in past observation: we’ve confirmed no gods, and no supernatural powers capable of such things, nor any exhibiting such motives. Hence, “What is the prior probability measuring?” Answer: “The previous frequency of prior like outcomes.” Always and forever.

There is nothing mysterious about this. And it is definitely false to say, as Godfrey-Smith does, that “no initial set of prior probabilities is better than another” and that “Bayesianism cannot criticize very strange initial assignments of probability” (p. 209). I have no idea where he got either notion. No Bayesian I have ever read has said any such thing. And he neither quotes nor cites any doing so. To the contrary, Bayesians have an obvious objection to “strange priors”: they do not follow from the background data. You don’t get to just “invent” data that doesn’t exist; nor, when the data exhibit a frequency, do you get to ignore that and declare a different frequency anyway. The prior probability in the mammogram case is simply the prior observed base rate of breast cancer. That’s a fact in b, the background knowledge every probability in Bayes’ Theorem is necessarily conditional on. If b contains data showing that frequency to be 1 in 1000, you cannot just “declare” it to be 999 in 1000. If the data show nothing other than 1 in 1000, it’s 1 in 1000. End of story. Anything else is a violation of evidence and logic.

Even frequencies that have less data to determine them by do not escape this consequence. If, for instance, we have literally no evidence showing a higher or lower frequency for h-like hypotheses over ~h-like hypotheses, then there is no logical argument by which you can declare P(h) to be anything other than 0.5. Because doing so logically entails asserting there is evidence of a different frequency than that. Conversely, you cannot assert a different frequency unless you have evidence of one in b. The same goes for vague frequency data, when we have to use ranges of probability to capture the fact that we do not know the prior is more likely to be any one probability in that range than another, only that it is more likely to be one of those probabilities than any outside that range: even then we are still constrained by the data. Even the total absence of data constrains us, forbidding bizarre priors by the very fact that the total lack of information renders them a priori improbable. No matter what information you have or lack, it always entails some base rate, or range of base rates, or indifference between base rates.

Consider Godfrey-Smith’s own inept example: he says there is nothing to tell us that the prior probability of “emeralds are green” is higher than of “emeralds are grue” (p. 210). By “grue” he means (for those unfamiliar with the conceit) an imaginary color defined as “the property of being green before [some specific date, whether in the future or in the past], and blue afterwards” (or vice versa). Bayesians, he claims, cannot justify a low prior for this. Which is absurd; and tells me Godfrey-Smith either hasn’t thought this through, doesn’t know anything about physics, or doesn’t understand how priors are conditional on background knowledge, b.

The reason that “emeralds are grue” has a much lower prior than “emeralds are green” is that in all our background knowledge, (a) no property like grue has ever been observed and (b) no physics has ever been discovered by which a grue-effect would even be possible—despite our by now extremely thorough exploration of fundamental physics, including the reaction properties of materials to photons of a given frequency. No such effect is plausible on known physics, even in individual objects (apart from a corresponding physical transformation of the material), much less simultaneously among all objects “across the whole universe.” That any object is grue is an excellent example of an extraordinary claim: a claim contrary to all known physics and observation. Which necessarily thereby entails an extraordinarily low prior. Not one equal to “emeralds are green.”

Even in neurophysics, a human brain spontaneously swapping out two completely different neural circuits for producing color experience likewise doesn’t just “magically happen,” although even if it did, that would be a change in our brains, not in emeralds. There actually may well be people who experience blue when seeing emeralds, owing to a birth defect in the wiring of the eyes and brain (color inversion is a known anatomical defect), and we may even someday be able to surgically cause this, but that’s not the idea meant by grue. Because it has, again, no relation to the properties of emeralds. And in any event, we have a pretty good database of knowledge by which to estimate the prior probability of these kinds of neural defects and changes, and it certainly doesn’t register as high. It’s just not the sort of thing that usually happens.

All of which also means there is no epistemology with any chance of reliably accessing reality that can ignore prior probabilities. The prior observed base rate of a proposed phenomenon as an explanation of any body of evidence is always relevant information, and always entails limits on what we can reliably deem likely or unlikely. Any epistemology that ignores this will fail. A lot. And indeed, our brain literally evolved to use this fact (albeit imperfectly) to build increasingly reliable models of reality. All of our beliefs derive from assumptions about base rates; and indeed many of our cognitive errors can be explained as failures to correctly estimate them (and thus can only be corrected by properly estimating priors).

Since we cannot avoid prior probability assumptions, the fact that we can only subjectively estimate them is not a valid objection. It is simply restating a universal problem we’ve discovered to be innate to all epistemologies: humans cannot access reality directly; all perception, all estimation, all model-building, all model-testing, all belief-formation is necessarily and unavoidably subjective. We can only access what our brains subjectively and intuitively construct. Our only useful goal is to find ways to get that subjective theatre to increasingly map correctly to the external world. Accordingly there can be better epistemologies and worse, as measured by the degree to which resulting predictions bear out (our discovered “failure rate”). And so we can be more or less objective. But there can never be a literally “objective” epistemology. And as this is true of all epistemologies, it cannot be an objection to any of them.

Not Understanding Subjective Probabilities

Godfrey-Smith not only fails to grasp what Bayesian prior probabilities are (and thus how they are constrained and in fact determined by data, and thus not, as he seems to think, “mysterious” or “arbitrary”), he also doesn’t grasp what subjective probabilities are in general, the “epistemic” probabilities that Bayesian epistemology traffics in. When “subjectivists” talk about probability measuring “degrees of belief,” he, like all frequentists who screw this up, mistakes them as saying probabilities aren’t frequencies.

Godfrey-Smith names two founders of subjective probability theory, Bruno de Finetti and Frank Ramsey. But then, like pretty much every author I have ever seen pitting the “frequency” interpretation of probability against the “subjectivist” interpretation of it, he clearly didn’t actually read either of them (nor, evidently, any probability subjectivist) on the matter of what the difference actually even is.

We can tell this by noting right away how all these critics, Godfrey-Smith included, incorrectly think the subjectivist interpretation is contrary to the frequentist one, that they are somehow fundamentally different approaches to probability. In fact, subjectivism is a sub-category of frequentism. Subjectivists are frequentists. And anyone who doesn’t understand that, doesn’t understand the concept of subjective probability. Subjective probabilities are simply estimated frequencies of error in belief-formation, which is still interpreting probability as fundamentally always a frequency of something.

The only actual difference between so-called subjectivists and so-called “frequentists” is that the latter think the only probabilities that we can talk about are “objective” ones, the actual frequencies of things in the world. When in fact we can never know those frequencies. Just as we can never know anything about the world except through our subjectively constructed models and perceptions of it. There is no direct access to reality. Just as there is no direct access to “real frequencies” in the outside world. Everything is mediated by subjective experience. Even God could not escape this; for even he can never know for certain his perceptions match reality apart from them. It’s a logical impossibility.

Once we abandon the impossible, all we have left is our subjective models and whether and how we can test them against the outside world to make them more accurate in matching it. This is true in every aspect of epistemology, including probability. We only ever know our subjective estimates of external real-world probabilities; and those estimates can become increasingly accurate, the more information we have pertaining to them. But we can never reach any point where we are 100% certain our estimates of frequency exactly match reality.

The most formally important of Godfrey-Smith’s cited sources is Bruno de Finetti, whose “Foresight: Its Logical Laws, Its Subjective Sources” was published in Annales de l’Institut Henri Poincaré 7 (1937). On page 101 therein, de Finetti spells it out:

It is a question simply of making mathematically precise the trivial and obvious idea that the degree of probability attributed by an individual to a given event is revealed by the conditions under which he would be disposed to bet on that event

He describes what he means on the next page: “Let us suppose that an individual is obliged to evaluate the rate p at which he would be ready to exchange the possession of an arbitrary sum S” upon “the occurrence of a given event E,” then “we will say by definition that this number p is the measure of the degree of probability attributed by the individual considered to the event E” and hence “p is the probability of E” according to that individual. Note his words here: “the rate.” That means frequency. So even de Finetti, the famous founder of probability subjectivism, defines probability as a frequency. He only differs from other frequentists in realizing that the frequency we are talking about is actually only ever the frequency of our being right or wrong about what’s in the world; not the actual frequency of things in the world.
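Put compactly (this is my paraphrase of de Finetti’s setup, not his exact notation): if pS is the price at which you would be indifferent to buying a ticket that pays the sum S if E occurs and nothing otherwise, then

$$
P(E) \;=\; p \;=\; \frac{\text{price you would stake}}{\text{sum } S \text{ returned if } E \text{ occurs}}
$$

and across many such bets, that rate p is just the frequency with which you expect bets like this one to pay off.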

Then de Finetti shows you can build this out to satisfy all the formal requirements of a coherent theory of probability. And he’s right. He eventually goes on to ask if “among the infinity of evaluations that are perfectly admissible in themselves,” meaning every possible subjective assessment that is internally coherent (as every individual observer may have their own estimates), is there “one particular evaluation which we can qualify…as objectively correct?” Or at least, “Can we ask if a given evaluation is better than another?” (p. 111).

He then shows that all this subjective estimating is really just an attempt to use limited and imperfect information to approximate the real objective probabilities of events. For instance, when the data strongly favor a certain objective frequency of “betting correctly,” subjective frequency estimates become essentially identical to the objective frequency those observations support. And this is the more the case, the more information one has regarding those real probabilities.

This is exactly what I argue and demonstrate in chapter six of Proving History (esp. pp. 265-80, “Bayesianism Is Epistemic Frequentism”; which is best read after the preceding analysis of objective probability, pp. 257-65). There actually is no difference between “frequentist” and “Bayesian” interpretations of probability. The claim of there being a difference is a semantic error, resulting from failing to analyze the actual meaning of a “subjective probability” in the empirical sense, that is, what it is people actually mean in practice: how they actually use it, and what they are actually doing when they use it.

An epistemic, or subjective, probability is, as de Finetti describes, an individual’s empirical estimate of the frequency with which they will be right to affirm some hypothesis h as true, given a certain measure of evidence. That’s a frequency. Thus all “degrees of belief” are still just frequencies: the frequency of being right or wrong about a thing given a certain weight of information. And as such, probabilities stated as “degrees of belief” always converge on “actual objective” probabilities as the weight of available information increases. The only difference is that “degrees of belief” formulations admit what is undeniable: our information is never perfect, and consequently we actually never can know the “objective” probability of a thing. We can only get closer and closer to it; with an ever shrinking probability of being wrong about it, but which probability never reaches zero. (Except for a very limited set of Cartesian knowledge: raw, uninterpreted, present experiences, which alone have a zero probability of not existing when they exist for an observer, since “they exist for an observer” is what a raw, uninterpreted, present experience is.)
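A toy simulation illustrates the point (all the “claims” here are invented random propositions, so this is a sketch of the concept, not data about anything real): if an agent’s degrees of belief are calibrated, then among the claims it believes to degree x, about a fraction x turn out true. The “degree of belief” just is a frequency of being right.

```python
import random

random.seed(1)

# Toy world (invented data): each claim has a hidden chance of being true,
# and a calibrated agent assigns that chance as its degree of belief.
beliefs = [random.choice([0.2, 0.5, 0.8, 0.95]) for _ in range(100_000)]
truths = [random.random() < b for b in beliefs]

for level in (0.2, 0.5, 0.8, 0.95):
    hits = [t for b, t in zip(beliefs, truths) if b == level]
    print(level, round(sum(hits) / len(hits), 3))
# Prints values close to 0.2, 0.5, 0.8, 0.95: a "degree of belief" of x just
# is being right about x of the time on the claims one believes to degree x.
```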

This is obvious in practice (as I also show with many examples in Proving History). When a gambler has a high confidence in the honesty of a certain game, their subjective estimates of the probability of winning a bet are essentially synonymous with what they plainly observe to be the objective probability of winning that bet. They are never exactly identical, because no matter how high the gambler’s confidence, there is always some nonzero probability they are missing some crucial piece of information—such as would indicate the game is in fact rigged, or that the physical facts affecting the real frequency of outcomes are different than observed (e.g. a slight defect in a die or deck unknown to the casino).

And the same reality iterates to every other case in life, which really does reduce, as de Finetti says, to yet another gambling scenario. We are betting on conclusions. We aren’t necessarily wagering money but confidence, trust, and so on; yet it’s still just placing bets on outcomes. And our estimates of the likelihood of winning or losing those bets are still synonymous with what we mean by subjective probability. So subjective probability is just another frequency. Frequentists and subjectivists are talking past each other, not realizing they are actually just talking about different frequencies: the “frequentist” is focused on the real, actual frequency of a thing; the “subjectivist” is focused on the frequency of being right or wrong about that thing. But the subjectivist is correct that we never actually know “the real, actual frequency” of a thing; only ever (at best) the frequency of being right or wrong about it.

There are some “real, actual frequencies” we come close to actually knowing, but even those we don’t really know, owing to small probabilities of our information being wrong. Frequentists become seduced by this into forgetting that every probability they are sure they “know” always might be incorrect. The subjectivist is thus using probability in the only sense actually coherently applicable to human knowledge. The frequentist is living in a fantasy world; one that often coincides with the real world just enough to trick them into thinking the two are the same.

Why “Frequentism” Is Defective as a Methodology

As for example when the frequentist insists only “frequentist statistics” is valid, oblivious to the fact that no frequentist methodology accounts for even known (much less inevitably unknown) rates of fraud and error in those very methods. All frequentist methodology can ever determine (if it can even determine this at all) is the probability that random chance would produce the same data: a test of the so-called null hypothesis. Determining whether the null hypothesis can be excluded only tells you how likely it is that such data could be caused by chance. But that can never tell you the hypothesis you are testing is probable—because there are many hypotheses that could explain those same data besides random chance.

The frequentist method can’t even determine that the data was produced by chance; all it can do is tell you whether the data could have been produced by chance. But on the reverse side, neither can it tell you any tested hypothesis is probable, because it does not even account for competing hypotheses. Including fraud, experimental error, or unknown or unconsidered causal factors and models. There is no way to account for these other possibilities except by Bayes’ Theorem (or methods reductively identical to it). Because there is no way to affirm any hypothesis is probable, without affirming a probability for all these other causes. And only some form of Bayesian reasoning can validly get you there.
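Here is a toy illustration of that logical gap, with every number invented purely for the sake of argument: even when the data are so improbable on chance that a significance test would “reject the null,” the tested hypothesis still only becomes as probable as its prior and likelihood make it relative to every other surviving alternative, such as fraud or experimental error.

```python
# Toy numbers, invented purely to illustrate the logical point.
# Three candidate causes of the same data, with a prior (base rate) and a
# likelihood P(data | hypothesis) assigned to each:
hypotheses = {
    "chance (the null)": (0.90, 0.001),  # data very improbable on chance
    "tested hypothesis": (0.05, 0.50),
    "fraud or error":    (0.05, 0.50),
}

total = sum(prior * likelihood for prior, likelihood in hypotheses.values())
for name, (prior, likelihood) in hypotheses.items():
    print(name, round(prior * likelihood / total, 3))
# "chance" drops to ~0.02, but the tested hypothesis only reaches ~0.49,
# because fraud or error explains the same data just as well. Rejecting
# the null never, by itself, tells you which remaining hypothesis is true.
```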

This is because it is logically necessarily the case that the probability of any hypothesis h can never be other than the complement of the sum of the probabilities of all other possible causes of the same observations (including, incidentally, chance). So it is logically impossible to “know” the probability of h without thereby claiming to “know” the probability of every alternative within ~h. And those latter probabilities can only be known to an approximation. Which means, ultimately, subjectively.
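In symbols, if h₁, h₂, h₃, … are all the other possible causes that together make up ~h:

$$
P(h \mid e.b) \;=\; 1 - P(\lnot h \mid e.b) \;=\; 1 - \sum_i P(h_i \mid e.b)
$$

So any claim to “know” the term on the left is implicitly a claim to know, at least approximately, every term in the sum on the right.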

Thus frequentism as a methodological ideology is logically defective and can never produce human knowledge of almost any useful kind. Whereas the only useful features of frequentism—the formal logic of probability and its associated mathematical tools, and the frequentist interpretation of probability—are already fully subsumed and entailed by Bayesianism. Frequentist methodologies are still great for determining the probability that a data set could be produced by chance “assuming nothing else is operating” (no fraud, no experimental error, no sampling error, and so on—which can never be known to 100% certainty). But they have literally no other use than that. And human knowledge cannot be built on such limited information. “This was probably not caused by chance” does not get you to “This was therefore caused by h.”

What frequentists don’t realize (and this isn’t the only thing they don’t realize) is that when they infer from their results about null hypotheses that some specific causal factor or model is operating—in other words, that we can claim to know some specific hypothesis is true (even just that it isn’t “random chance”; or indeed even that it is)—they are covertly engaging in Bayesian reasoning. They assume, for example, that the prior probability of fraud or experimental error is low, low enough to disregard as a trivial probability of being wrong. They likewise assume that the likelihood ratio would, if someone were to compute it, strongly favor the hypothesis they are now claiming is true, rather than still leave a problematically high probability that some other hypothesis is causing the evidence instead. These are all Bayesian assumptions.

And the only logically valid path from “we got a frequentist result x for h” to “hypothesis h is probably true” is Bayes’ Theorem. It’s just all going on subconsciously in their heads; so they don’t realize this is what their brain is doing, and they aren’t taking steps to analyze and unpack that inference to test whether it is even operating correctly. Which means Bayesians are actually less subjective than frequentists; for Bayesians at least admit we need to account for, and objectively analyze, how we are getting from A to B. Frequentists just leave that to unanalyzed, unconscious, and thus entirely subjective inference-making.

We get the same outcome when we go back and look at Godfrey-Smith’s other named source for the theory of subjective probabilities, Frank Ramsey, who wrote three seminal papers on the subject, starting with “Truth and Probability” in 1926, which was published in the posthumous 1931 collection The Foundations of Mathematics and other Logical Essays (all three papers are available in combined form online). There he does, less formally, the same thing de Finetti did, and equates subjective “degrees of belief” to bet-making—hence reducing degrees of belief, yet again, to frequencies. Only, again, frequencies of being correct or mistaken about a thing, rather than the actual frequency of that thing. Because the latter is always inaccessible to us, and only capable of increasingly accurate approximation.

Indeed, as Ramsey says (on p. 174 of Foundations): “This can also be taken as a definition of the degree of belief,” that an individual’s “degree of belief in p is m/n” whereby “his action is such as he would choose it to be if he had to repeat it exactly n times, in m of which p was true, and in the others false.” This is literally a frequency definition of subjective probability. So any frequentist who claims subjectivism isn’t just another form of frequentism (and indeed one that is more accurately describing human knowledge) literally doesn’t know what they are talking about, and thus cannot have any valid criticism of subjectivism, certainly not on the grounds that probability is always a statement of frequency. The subjectivist already fully agrees it is. And what criticism then remains? That we have direct access to objective probabilities and therefore can dispense with subjective estimates of objective probabilities? In no possible world is that true.

It also follows that no amount of complaining about “but different people might come to different estimates or start with different assumptions” can be an objection to Bayesianism either. Because the same complaint applies equally to all epistemologies whatever. The solution is the same in all of them: disagreements must be justified. They who cannot justify them, are not warranted in maintaining them. And that means justifications that are logically valid, from premises that are known to a very high probability to be true (and that includes premises about the facts we are less certain of: e.g., that we are reasonably certain we are uncertain of some fact x is itself data, which also logically constrains what we can assume or conclude). Everything else is bogus. In Bayesian as much as in any other epistemology. (I discuss how epistemic agreement is approached among disagreeing parties in Proving History, e.g. pp. 88-93.)

Godfrey-Smith’s “Fix”

Admittedly, when he gets around to articulating what he would replace Bayesianism with (pp. 210-18), Godfrey-Smith concedes “it is not clear which of these ideas are really in competition with Bayesianism, as opposed to complementing it” (pp. 210-11). Indeed. Not a single idea he proposes contradicts Bayesianism, or even merely complements it: his fixes are Bayesian! That he doesn’t know this confirms my hypothesis: he only objects to Bayesianism because he doesn’t understand it.

Godfrey-Smith first argues correct epistemology must operate by “eliminative inference,” i.e. ruling out alternatives (pp. 212-13). But here he even admits John Earman has provided a Bayesian framework for that, and further admits that framework might make the most sense of the logic of eliminative inference. Indeed I think it is the only framework that does, as in, the only way to construct a logically valid argument to the conclusion that one hypothesis is more likely, when all known competing hypotheses are shown to be less likely. That’s indeed one of the most important lessons of Bayes’ Theorem: that theories can only be validated by trying really hard to falsify them, and failing.

Verification doesn’t work; except when it consists of failed falsification. Because the only way to increase the probability that h is true above all known competing hypotheses is for the probability of the available evidence to be greater on h than on any known competing hypothesis. And that’s simply the likelihood ratio in Bayes’ Theorem. Likewise that evidence cannot merely be more likely; it has to be more likely on h by as much as the prior probability of ~h exceeds that of h—and by even more, if you want confident results and not merely a balance of probability. But even just to get to “probable,” when h starts with a low prior, you need evidence that is all the more improbable on the alternatives. As demonstrated with the mammogram case: you need exceedingly good tests for cancer to overcome the very low chance of even having cancer; otherwise you end up with far more false positives than true results. And that’s as true in that case as for any other epistemic problem: if your methods are going to give you more false positives than correct results, your methods suck. Hence there is also no way of escaping the need to account for base rates, and thus prior probabilities. It’s Bayes’ Theorem all the way down.
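The odds form of Bayes’ Theorem states this requirement exactly:

$$
\underbrace{\frac{P(h \mid e.b)}{P(\lnot h \mid e.b)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(h \mid b)}{P(\lnot h \mid b)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(e \mid h.b)}{P(e \mid \lnot h.b)}}_{\text{likelihood ratio}}
$$

The posterior odds only favor h when the likelihood ratio exceeds the prior odds against h. With the illustrative mammogram numbers sketched earlier (prior odds of 1:99, likelihood ratio of 0.80/0.10 = 8), the posterior odds are only 8:99, a probability still under 8 percent; hence the need for exceedingly good tests.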

The important point, though, is that the only way to “rule out” a hypothesis is to reduce its epistemic probability. Because you can never get that probability to zero. There is no such thing as deductive elimination. There is always some nonzero probability, for example, that even when you see an elephant, “eliminating” the theory that there is none, you are wrong—you saw an illusion or hallucination instead, and the hypothesis that there is no elephant hasn’t been eliminated after all. And this is true for every possible observation you think eliminates some theory. So it’s always, rather, a question of how unlikely such “theory rescuing” conditions are, given the information you have (e.g. the base rate of such illusions or hallucinations, given all your background information about your situation and condition and so on). Hence, still Bayes.

Godfrey-Smith next argues that good methods must be correlated with reliable procedures, what he calls “procedural naturalism.” This is also just a restatement of a Bayesian principle. For a “reliable procedure” is simply any procedure that produces evidence unlikely on the disconfirmed theory and likely on the confirmed one—that is literally what a reliable procedure is, by definition. And the more reliable it is, the more it diverges those two probabilities. For instance, procedures that reduce the prior probability (the base rate) of fraud to a very low level are more reliable than those that don’t, precisely because they reduce the probability that the evidence is being observed owing to fraud and not the tested hypothesis. Otherwise, the evidence is too likely to be the result of fraud to trust it as evidence for the hypothesis. And so on, for every other source of observational error.

This is also a really important lesson we learn from Bayes’ Theorem: weak tests do not make for strong conclusions. There is a tendency for people who want to go on believing false things to “test” them by using weak methods of falsification rather than strong ones, so they can claim their belief “survived a falsification test.” But a falsification test only makes your belief likely if that test has a really high probability of exposing your belief as false if it is false. If it is likely to let a belief pass even when that belief is false, then it is not a reliable procedure. And as it leaves the evidence likely even on ~h, surviving the procedure is no longer evidence against ~h.
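Numerically (again with made-up figures for illustration): suppose a belief would survive a weak “test” 95 percent of the time if true, but also 90 percent of the time if false, whereas a strong test would let a false belief survive only 5 percent of the time. Then surviving each test yields very different Bayes factors:

$$
\frac{P(\text{survives} \mid h)}{P(\text{survives} \mid \lnot h)} = \frac{0.95}{0.90} \approx 1.06
\qquad\text{versus}\qquad
\frac{0.95}{0.05} = 19
$$

The first barely moves the posterior at all; the second can actually overcome a modest prior against h.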

So all questions about “procedure” simply reduce to the effect a procedure has on the Bayesian priors or likelihood ratio.

Conclusion

None of Godfrey-Smith’s objections to Bayesianism reflect a correct understanding of it. He falsely believes prior probabilities are not constrained by background information—even though that’s what the “b” means in P(h|b): the probability of h given b. Not “given any willy-nilly assumptions you want to make.” He also falsely believes subjective probabilities (“degrees of belief”) are not frequencies. And he doesn’t know what they are frequencies of; or how this demonstrates the supremacy of Bayesianism over all other constructions of probability in respect to actual human knowledge (which can only ever achieve knowledge of subjective probability). And his “solution” is to propose principles already entailed by Bayesian logic: that evidence has to make alternative hypotheses less likely, that the only way to do this is to collect evidence that’s unlikely to be observed on alternative hypotheses, and that reliable procedures are by definition those capable of doing that.

Consequently, Godfrey-Smith would be a Bayesian, if ever he correctly understood what Bayesianism was and entailed. And as I have found this to be the case a dozen times over, in fact in every case I have ever examined, I am starting to suspect this is the only reason anyone ever isn’t a Bayesian.
