What do you get when you combine atheism, mathematics, evolutionary psychology, and the science of cognitive biases? Critical thinking skills! I discuss many aspects of this in my online course on critical thinking. (It’s easy, useful, and affordable, so consider registering, or let others know who might be interested. I repeat the course once or twice a year. Follow The Secular Academy for announcements.)

I’ve railed against EvoPsych before. Like philosophy, IMO, most published EvoPsych is pseudoscientific garbage (or at best over-hyped bad philosophy). But some of it’s not. I mentioned one of those domains before: cognitive biases.

Not all of the biases discovered so far have been confirmed to be cross-cultural and genetically based. Nor have many yet been reliably shown to be the result of any hypothesized evolutionary selection process—however plausible some hypotheses may be, merely plausible hypotheses are not yet scientific facts; they are, at best, just good philosophy (and yes, philosophy is just science with less data). But even many of the biases that haven’t been verified to be universal or genetic are likely to survive such tests, and their selective causes are likely to bear out in genetic and selection modeling—though, again, those tests still need to be conducted before that’s actually scientifically the case, rather than just a plausible philosophical premise. We can conclude this because their nature does not appear to be cultural: they do not involve complex cognition, but operate at the level of simpler sub-cognitive processes; and they reflect universal rather than culturally structured or trained reasoning.

And a lot of these cognitive biases have to do with mathematical mistakes our brains probably evolved to make, because our brains didn’t evolve to do pure math in the first place. And we know that from both scientific evidence (our brains are bad at it without training, and sometimes still bad at it even with training) and precedent (the evolution of a perfect mathematical computer in our brains is extremely improbable on existing scientific evidence). As the evidence of history and child psychology proves, mathematics is a software patch we have to invent and install in a brain culturally, through education and training. And with that patch we can work around the errors our brains evolved to make. That’s what we invented formal mathematics for—as well as logics, and the counter-intuitive scientific method (which, for example, proves things true by trying to prove them false).

Which, incidentally, disproves intelligent design. As I pointed out near the end of my chapter on the design argument in The End of Christianity, an intelligently designed brain wouldn’t need such a software patch; nor would it take an intelligently curated species tens of thousands of years to finally stumble on and develop those patches by themselves, with no help from anyone. And indeed, not only were these bug fixes stumbled upon and developed by only a few cultures independently of each other (rather than all cultures simultaneously), but none of those cultures worshipped any gods still much worshipped today, and none credited their discovery to revelation. By contrast, none of these software patches (math, logic, or scientific method) are ever mentioned in any holy text of any religion, much less the major religions of today—let alone as something valuable and essential to good conduct and a successful life. Religions clearly, therefore, have never had any line of communication with any being who knew one iota of mathematical, logical, or scientific methods. Which rules out pretty much every god worshipped today. Sorry, Christians and Muslims. Your gods just don’t exist.

Your Brain Sucks at Probability

But the fact that belief in any major religion is illogical now is something any good critical thinker already knows. What many don’t know is the role math literacy can play in fixing our innately screwed up brains.

Evolution didn’t build us to be good reasoners. It built us to be good problem solvers and social animals. And since “good enough” was the only target evolution had to hit, it didn’t go much beyond that. But we can now go beyond it ourselves, applying the limited skills evolution did give us—to innovate, to employ new technologies, to adapt to our environment culturally—and using our own intelligent design, flawed as it is, just as we go beyond evolution with spears, helmets, glasses, airplanes, and artificial hearts. And the evidence of observation tests and thus confirms when we’ve hit on something correct, which is how we know math, logic, and science work better than our unaided brains. Just as we discovered for spears, helmets, glasses, airplanes, and artificial hearts.

(I discuss the epistemology of this in Sense and Goodness without God II.3.1, which I expand on in Epistemological End Game. On attempts by Creationists to use our badly designed brains as evidence for god, illogical as that may be, see my discussion in my Critique of Reppert, in the section linked there, and the related sections linked in that section’s first paragraph.)

There are a lot of demonstrated cognitive biases in probabilistic reasoning. And probabilistic reasoning is inherently mathematical. So math literacy is the only way to correct them. These innate errors in probabilistic reasoning even explain why people are so prone to use and believe many varieties of fallacious reasoning. Many fallacies are, for example, errors in Bayesian reasoning, such as mistaking a weak Bayesian argument for a strong one—whether we know anything about Bayes’ Theorem or not (see e.g. Less Wrong and Dia Pente).
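
To make that concrete, here is a minimal sketch of Bayes’ Theorem (the numbers are invented purely for illustration): the strength of an argument from evidence depends not on how well the evidence fits your hypothesis, but on how much better your hypothesis predicts that evidence than the alternatives do.

```python
# A minimal sketch of Bayes' Theorem, to show why "the evidence fits my
# hypothesis" is often a weak argument: what matters is how much better
# the hypothesis predicts the evidence than its alternatives do.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(h | e), given a prior and the likelihood of the evidence on h and on not-h."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Evidence the hypothesis predicts well, but the alternative predicts almost
# as well, barely moves a modest prior (a weak Bayesian argument)...
print(posterior(0.1, 0.9, 0.8))   # ~0.11

# ...while evidence the alternative cannot easily explain moves it a lot
# (a strong Bayesian argument).
print(posterior(0.1, 0.9, 0.05))  # ~0.67
```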

The most well-known cognitive bias in probability reasoning is the gambler’s fallacy (assuming a string of good luck increases the probability of bad luck coming, or vice versa), along with its inverse (assuming good luck can only have come after a string of bad luck, or vice versa) and its reverse (the “hot hand” fallacy of assuming a string of good or bad luck is more likely to continue). All of these stem from the fact that the human brain did not evolve any accurate means of distinguishing random sequences from patterned ones, and is prone to finding or assuming patterns that don’t actually exist (and that the data don’t even warrant believing in). Arguably, up to a certain limit, missing patterns is more dangerous to differential reproductive success than seeing patterns that aren’t there, so any brain is likely to be tweaked by natural selection toward just this sort of error.
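
If you want to see this for yourself, here is a quick simulation sketch (illustrative only): in a genuinely random sequence, a streak tells you nothing about the next outcome, yet streaks long enough to feel like a pattern show up all the time.

```python
# A quick simulation of why the gambler's fallacy fails: in a random
# sequence, a streak tells you nothing about the next outcome, yet streaks
# long enough to "feel" like a pattern occur constantly.
import random

random.seed(1)
flips = [random.random() < 0.5 for _ in range(100_000)]  # True = heads

# How often does heads follow a run of five heads? The gambler's fallacy
# predicts "less than half the time"; the hot-hand fallacy predicts "more."
after_streak = [flips[i] for i in range(5, len(flips))
                if all(flips[i - 5:i])]
print(sum(after_streak) / len(after_streak))  # ~0.5: the streak is irrelevant

# Longest run of identical outcomes: surprisingly long for "pure chance."
longest = run = 1
for prev, cur in zip(flips, flips[1:]):
    run = run + 1 if cur == prev else 1
    longest = max(longest, run)
print(longest)  # typically 15 or more in 100,000 fair-coin flips
```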

Other biases in the same category include violations of the conjunction rule—thinking a conjunction of events is more likely than either event alone—due to, for instance, the availability and representativeness heuristics (for their connection to errors in the conjunction rule, see Tversky & Kahneman 1983 and What Is an Availability Heuristic?). Much is made of the famous example of Linda the banker, where people assume it’s more likely that Linda is both a banker and a feminist than that she is a banker at all, even though that’s impossible; and though the mistake becomes less common when the question is worded differently (such as in a way that makes the relative frequencies explicit), a lot of people still make it.
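
A toy simulation makes the impossibility concrete (the base rates here are invented for illustration): however you set the numbers, the feminist bankers are a subset of the bankers, so their frequency can never be higher.

```python
# A toy illustration (made-up base rates) of why the "Linda" judgment is
# impossible: feminist bankers are a subset of bankers, so the conjunction
# can never be more probable than "banker" alone.
import random

random.seed(2)
people = [(random.random() < 0.02, random.random() < 0.30)  # (banker, feminist)
          for _ in range(1_000_000)]

p_banker = sum(banker for banker, _ in people) / len(people)
p_banker_and_feminist = sum(banker and feminist
                            for banker, feminist in people) / len(people)

print(p_banker)               # ~0.02
print(p_banker_and_feminist)  # ~0.006 -- necessarily no larger than p_banker
```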

And there are other mistakes we make in estimating frequencies along these lines that are even better established. For example, the regression bias, whereby we remember rare events as more frequent than they really are and common events as less frequent than they really are. This explains, among other things, why we frequently fall for the fallacy of crediting changes in events to correlated accidents, which we call the post hoc fallacy (another form of over-detecting patterns): if we get better after taking a pill, we assume the pill caused us to get better, even though statistically we would have gotten better anyway (as I’ve written about before in Everything You Need to Know about Coincidences).
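
To see how easily that happens, here is a toy simulation (all numbers invented) of the “we would have gotten better anyway” effect: if you only take a pill on your worst days, your symptoms will usually be milder the next day even if the pill does nothing at all.

```python
# A sketch of "we would have gotten better anyway" (all numbers invented):
# if you only take the pill on your worst days, the next day will usually
# look like an improvement even though the pill does nothing at all.
import random

random.seed(3)
severity = [random.gauss(5, 2) for _ in range(100_000)]  # daily symptom score

medicated = [(today, tomorrow)
             for today, tomorrow in zip(severity, severity[1:])
             if today > 8]                                # only medicate bad days

avg_on_pill_day = sum(t for t, _ in medicated) / len(medicated)
avg_next_day = sum(n for _, n in medicated) / len(medicated)
print(avg_on_pill_day, avg_next_day)  # ~8.9 vs ~5.0: the inert pill "worked"
```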

Other cognitive defects in our probability reasoning include probability neglect, the subadditivity effect, the ambiguity effect, and so on. Many other examples of cognitive defects in our ability to reason probabilistically, all of which we have to control for if we are to be good critical thinkers and not just slaves to innate bias and error, are catalogued and discussed in very handy manuals like Cognitive Errors in Medicine.

It seems evident that the human brain, not being innately a mathematical computer, evolved rough-and-ready rules for estimating likelihood and frequency that are frequently wrong, but often enough right to give us an advantage. And indeed, one thing that consistently comes up in studies of these probability biases is that humans perform and reason better when they frame everything in terms of ratios, rates, and frequencies, rather than percentages and “probabilities” in the formal sense. So one rule to follow: When thinking about risks and probabilities, always try to think in terms of the rate of a thing, the frequency of a thing, the ratio of one quantity to another over an acknowledged period of time.
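
For instance, here is a small helper along those lines (the function names are mine, purely illustrative) that re-expresses a bare probability as a natural frequency:

```python
# A small helper (names are my own, purely illustrative) for re-framing a
# probability as a natural frequency, which research on these biases suggests
# our brains handle far more reliably than percentages.

def as_frequency(probability, per=100_000):
    """Re-express a probability as 'about N per `per`'."""
    return f"about {probability * per:,.0f} per {per:,}"

def as_one_in(probability):
    """Re-express a probability as 'about 1 in N'."""
    return f"about 1 in {round(1 / probability):,}"

print(as_frequency(0.0004))  # "about 40 per 100,000"
print(as_one_in(0.0004))     # "about 1 in 2,500"
```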

A Pernicious Problem

The pernicious problem with these kinds of biases is that they affect us because the underlying process generates an emotional feeling of rightness and confidence in the intuited result, even though the intuited result is wrong. We then fall for the affective fallacy: assuming that because our intuition feels right (or isn’t giving us any feeling of uncertainty or suspicion), our intuition must be correct and any claim to the contrary must somehow be incorrect.

I’ve noticed this commonly happens with the historian Bart Ehrman, who openly rejects, even mocks, logical and probabilistic analysis because his intuited feelings about what’s logical and probable just feel right to him, so any actual formal demonstration to the contrary must therefore be some sort of sophistry. That’s what makes cognitive biases dangerous. And it’s why so many people have so many false beliefs, and not merely that, but an excessive confidence in them as well. (And this is just as true of atheists as theists, BTW; you all have the same brains, and are not immune to being seduced by the same fallacious certitude.)

Illusions of Probability

Example. Right now the murder capital of the world is Caracas, Venezuela, at a rate of 119 people murdered every year per 100,000 people. Even Acapulco, Mexico, comes in at 104. So the worst murder rates for any community on earth are around 100 per 100,000. (Chicago, BTW, despite its infamous reputation, only has a murder rate of around 20 a year per 100,000. Even the deadliest U.S. city, St. Louis, clocks in at only 60.)

Some actual scientists have claimed that the bushpeople of Africa never commit murder, that they are amazingly peaceful, and therefore there is, we are told, something we can learn from them. And this is based on observations like, “In the tribe I studied, I never saw a murder in 50 years.” Let’s test that with some numeracy.

Let’s assume the murder rate for African bushpeople is 1 every 100 years per tribe, and therefore no one should expect to observe a single murder in a “mere” 50 years. In other words, we will assume these observations are typical and not statistical flukes. Which is reasonable. (By definition, absent evidence establishing the contrary, an observation is always more likely to be typical than extraordinary.)

The tribes in question are never larger than 1000, and typically far smaller. But even at that size, 1 murder every 100 years per 1000 persons, converted to the 100,000-person benchmark, is a murder rate of 100 per 100,000!

Oh no, it’s not! When I first published this, that’s what I said. And my math was wrong. Which illustrates another aspect of critical thinking: check your math. (And of course, correct yourself when you’re wrong.) The correct result is 1 murder per year per 100,000, which is the rate in a typical non-American country like Germany or Australia or Japan.
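
To make the corrected arithmetic explicit, here is the unit conversion spelled out in a few lines of Python:

```python
# Converting "1 murder per 100 years in a tribe of 1,000" to the standard
# per-100,000-per-year benchmark used for the city figures above.
murders = 1
years = 100
population = 1_000

per_person_per_year = murders / (years * population)    # 0.00001
per_100k_per_year = per_person_per_year * 100_000
print(per_100k_per_year)  # 1.0 -- one murder per 100,000 per year
```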

In other words, African bush tribes could be no less violent and murderous than any other modern advanced society, and still you’d only see one murder per tribe every 100 years or so. Thus, that scientist’s observation, “I didn’t see one murder in that tribe in 50 years,” is not evidence of some remarkable bush-tribe peacefulness. It’s exactly what they’d have observed if African bushpeople were as murderous as the Germans, Australians, or Japanese. Even Americans: we have 4 murders a year per 100,000, so among 1,000 people at the same rate we would see on average 1 murder every 25 years; but with populations that small, the number occurring in any actual span of time fluctuates a lot, so not seeing one in 50 years could easily just be a chance accident. Or it could be spot on, and reflect an American community like Sunnyvale, California—which is back to exactly that 1 murder per 1,000 per 100 years.
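
And to check that “could easily just be a chance accident” claim with numbers, here is a quick sketch, treating murders as a Poisson process (a standard modeling assumption for rare independent events, and my assumption here):

```python
# At the American rate of 4 murders per year per 100,000, a community of
# 1,000 people averages 1 murder every 25 years -- and still has a decent
# chance of going 50 years without one (modeled here as a Poisson process).
import math

rate_per_100k_per_year = 4
population = 1_000
years = 50

per_year = rate_per_100k_per_year / 100_000 * population   # 0.04 per year
expected_in_50_years = per_year * years                     # 2.0
p_none_observed = math.exp(-expected_in_50_years)           # Poisson P(0)

print(1 / per_year)               # 25.0 -- one murder every 25 years on average
print(round(p_none_observed, 3))  # 0.135 -- ~14% chance of seeing none in 50 years
```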

This exemplifies scientific reasoning: instead of looking for verifying evidence, like observing “no murders” and concluding the murder rate is low, we try to disprove our assumption by testing the contrary hypothesis. This requires some basic math. And both steps together exemplify Bayesian reasoning, which requires us to ask: What observations should we expect if our hypothesis is false? In other words, if bushpeople were just as murderous as any other peaceful people on earth, how many murders would we observe?
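
Here is that question answered with actual numbers (again on a Poisson model, my assumption): at the murder rate of an ordinary peaceful country, an observer watching a tribe of 1,000 for fifty years would, more often than not, see no murders at all.

```python
# If bushpeople murdered at the "peaceful modern country" rate of 1 per
# 100,000 per year, how many murders should a 50-year observer of a
# 1,000-person tribe expect to see? (Poisson model assumed.)
import math

rate_per_100k_per_year = 1      # roughly Germany, Australia, or Japan
population = 1_000
years = 50

expected = rate_per_100k_per_year / 100_000 * population * years  # 0.5
p_zero = math.exp(-expected)

print(expected)          # 0.5 -- half a murder expected in 50 years
print(round(p_zero, 2))  # 0.61 -- a 61% chance of observing none at all
```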

Our intuition wants to convert what we observe (no murders) into a verification of the hypothesis it seems most obviously to support (no murderers). Producing verification bias. Math literacy corrects the error. But only when we recognize the Bayesian reasoning inherent in the scientific method, and thus ask what we should expect to observe if the hypothesis is false.

Conclusion

This is just a sample of the kind of skills and information I’ll be teaching in my Critical Thinking course (so watch The Secular Academy for when it’s next offered). Being aware of our innate cognitive biases, and knowing how to correct for them, when we can, with basic math literacy (a sixth grade level is nearly all you need, which you can learn entertainingly in Math Doesn’t Suck), especially the ability to ascertain and convert frequencies and then use them for hypothesis testing, are among the critical thinking skills of the 21st century that we should all be better at and have more resources on hand for.

Even beyond what I’ve talked about here, math literacy, basic numeracy, is essential to being a good critical thinker. Quite simply, if you are math illiterate, you cannot be a good critical thinker. I’ve given more examples of the use of math literacy to improve sound thinking, indeed how to be a better atheist and even a better feminist, in Innumeracy: A Fault to Fix.

There are several excellent books on the intersection of numeracy and Critical Thought:

For some of the leading neuroscience of how our brains encode mathematical reasoning:
