You can use the following calculator to run any standard two-hypothesis Bayesian equation (with every variable limited to between 1% and 99%, and accurate to only two decimal places):
You can see the Bayesian equation itself later. But that equation only calculates the effect of these four probabilities; and below them is shown the outcome, which is the probability that a given hypothesis (H) is true, given the evidence (E) and all our background knowledge (b). All you have to do is enter values on those four sliders to get the result at the bottom, without having to do any of the math yourself. Use the sliders to enter values as probabilities in algebraic notation (e.g. for 60%, move a slider until it reads .6: that's the percentage, 60, divided by 100). In this case everything is rounded to two decimal places. And you cannot enter values larger than 99% or smaller than 1%. (To work with numbers outside that range you will simply have to go back to the equation and do the math; although for the possibility of working with four decimal places, see the next calculator below.)
All the calculators employed on this page were developed by Cam Spiers, who offers a variety of others to work with as well. A more advanced calculator page has also been developed by Bill Seymour (see documentation and beta model).

To understand the symbols on the present page: P means probability, and the upright bar represents conditional probability, such that P(H|E) means the probability of H when E is true (as opposed to the probability of H whether or not E is true). H is the hypothesis under test. E is the evidence that H is expected to explain. And "b" represents all your current background knowledge. You should always assume all four values are conditional on background knowledge; this is shown in all the calculators here, but I will leave it out in the text, as simply being understood.

The first variable, P(H), is the prior probability that H is true. The second variable, P(~H), is the prior probability that H is false, which is always 1 - P(H), so the calculator already figures this for you (hence as you move one of the first two sliders, the other automatically moves to match). The other two variables are the probability that the evidence would exist if H is true, which is P(E|H), and the probability that the evidence would exist if H is false, which is P(E|~H). These are called the two consequent probabilities (also known as the conditional probabilities or the likelihoods). Unlike the prior probabilities, they are independent of each other. The result of combining all four probabilities is the probability that H is true given the evidence (and your background knowledge).

Here is the same calculator again, but this time showing the actual equation at the top (and in some browsers this version allows you to enter values out to four decimal places):
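For readers who prefer code, the two-hypothesis equation those four sliders feed can be sketched in a few lines of Python. This is only an illustrative sketch (the function name and example values are mine, not part of the calculator's actual code):

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """P(H|E): probability that H is true given evidence E (and background b)."""
    p_not_h = 1.0 - p_h                     # P(~H) is always 1 - P(H)
    numerator = p_h * p_e_given_h           # P(H) x P(E|H)
    # Denominator is the total probability of the evidence on either hypothesis
    denominator = numerator + p_not_h * p_e_given_not_h
    return numerator / denominator

# Example: prior of 0.6, evidence likely on H (0.9) but less likely on ~H (0.2)
print(round(posterior(0.6, 0.9, 0.2), 2))   # → 0.87
```

Note that when the two consequent probabilities are equal, the evidence makes no difference and the posterior simply equals the prior, which is exactly what the equation predicts.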
This is what you would use if you assume all alternative hypotheses fall under ~H. But you can also distinguish three or more different hypotheses. For example, you would use the following equation and calculator for three competing hypotheses (note that in this case P(H_{1}), P(H_{2}), and P(H_{3}) must always sum to 1, and this calculator ensures that rule is obeyed):
Here the sliders provide the prior probabilities across the first line, the consequent probabilities across the second line, and the posterior probabilities across the bottom: the latter being simply the probability that each hypothesis is true given the evidence. (Note that this calculator is only set to work with inputs up to two decimal places.)
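The three-hypothesis calculator just described generalizes the same equation: multiply each prior by its consequent probability, then divide each product by the sum of all of them. A minimal sketch in Python (names and example numbers are mine, purely illustrative):

```python
def posteriors(priors, likelihoods):
    """Posterior probability of each competing hypothesis.

    priors must sum to 1 (the rule the calculator enforces);
    likelihoods are the P(E|H_i) values and need not sum to anything.
    """
    assert abs(sum(priors) - 1.0) < 1e-9, "priors must sum to 1"
    joint = [p * l for p, l in zip(priors, likelihoods)]  # P(H_i) x P(E|H_i)
    total = sum(joint)                                    # total probability of E
    return [j / total for j in joint]

# Three hypotheses with priors 0.5, 0.3, 0.2 and differing consequents
result = posteriors([0.5, 0.3, 0.2], [0.9, 0.5, 0.1])
print([round(p, 2) for p in result])   # → [0.73, 0.24, 0.03]
```

The posteriors always sum to 1, just as the bottom row of sliders must.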
To learn more about Bayes' Theorem see my book Proving History: Bayes's Theorem and the Quest for the Historical Jesus or my online PDF tutorial (although total beginners might prefer to start with my Skepticon talk Bayes' Theorem: Lust for Glory!). But in general there are six rules to apply:
As an example of applying Rule 1, if the evidence is that your wallet is missing, and you are asking how likely it is that your wallet was stolen, then ask: what is the frequency of "your wallet was stolen" being the cause of "your wallet is missing"? If your wallet has often gone missing, but every time you discovered you had just dropped it or misplaced it, then the frequency of "your wallet was stolen" being true is low, and so must its prior probability be. Unless the conditions are notably different (e.g. lots of things have been stolen around you lately, or in that place in particular), in which case you take that into account, too. Analogously, if someone tells you their limbs grew back after having been chopped off, how frequently is "a human's arms grew back after being chopped off" actually the explanation of such evidence (that such a person, with arms and legs intact, would say something like this to you), as opposed to some other explanation being true instead (e.g. "they're crazy," "they're lying," "they're joking," etc.)?

The same reasoning applies to asking how frequently a cause like H produces evidence like E (which is the probability of E given H, i.e. P(E|H)). In answering questions like this you will often need to estimate hypothetical frequencies. For example, if your wallet has never gone missing, or has done so only once (or you never determined how it went missing), your actual database will be too sparse to estimate an actual frequency of causes. But you can use your background knowledge to hypothesize what a larger database would look like, relying on information about the frequency of you dropping or misplacing things, and of thefts occurring in the area, and the physical properties of your wallet and pocket and what you did that day (which can affect the likelihood of it falling out, etc.).
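Turning a frequency (actual or hypothesized) into a prior is just division. Here is a sketch of the wallet example in Python, with an entirely invented frequency table standing in for the "larger database" you would hypothesize from background knowledge:

```python
# Hypothetical past causes of "wallet went missing" (invented counts, for illustration)
past_causes = {"misplaced": 7, "dropped": 2, "stolen": 1}

total = sum(past_causes.values())
p_stolen = past_causes["stolen"] / total   # frequency becomes the prior: P(H) = 0.1
p_not_stolen = 1.0 - p_stolen              # P(~H) = 0.9

# Consequents: how often each kind of cause leaves the wallet missing when you check.
# Both are high here, since either way the wallet would indeed be gone.
p_e_given_h, p_e_given_not_h = 0.99, 0.95

posterior = (p_stolen * p_e_given_h) / (
    p_stolen * p_e_given_h + p_not_stolen * p_e_given_not_h
)
print(round(posterior, 2))   # → 0.1
```

Because the bare fact of a missing wallet is expected on every hypothesis, the evidence barely moves the prior: theft remains about as improbable as its base rate.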
Combining this with a fortiori reasoning (taking the maximum or minimum probability of H, given all that you currently know) will produce reliable results that reflect how you already think, because Bayes' Theorem is just a mathematical model for all sound reasoning. This page was composed in 2011 and revised in 2012 by Richard Carrier, Ph.D. It is intended as a helpful resource, and accordingly it will likely be revised, updated, or expanded in future.
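A fortiori reasoning can itself be sketched in code: run the equation twice, once with every estimate pushed as far against your hypothesis as you can honestly allow, and once with every estimate pushed as far in its favor. The true posterior must lie between the two results. (The function and the slider-range values below are my own illustration, not the calculator's code.)

```python
def posterior(p_h, p_e_h, p_e_nh):
    """P(H|E) from the standard two-hypothesis form of Bayes' Theorem."""
    return p_h * p_e_h / (p_h * p_e_h + (1.0 - p_h) * p_e_nh)

# Worst case for H: lowest defensible prior, evidence no better than chance on H,
# and almost certain on ~H (using the calculators' 1%-99% slider range).
low = posterior(p_h=0.01, p_e_h=0.5, p_e_nh=0.99)

# Best case for H: highest defensible prior, evidence nearly certain on H,
# and a coin flip on ~H.
high = posterior(p_h=0.1, p_e_h=0.99, p_e_nh=0.5)

print(round(low, 2), round(high, 2))   # → 0.01 0.18
```

If even the best-case bound stays low (as here), you can conclude a fortiori that H is improbable, no matter how the disputed estimates are settled.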
The Official Website of Richard Carrier, Ph.D.
Copyright © 2008 All Rights Reserved