Previously, I covered the basics of advances in critical thought for the 21st century (see Resources for Critical Thinking in the 21st Century). We can apply those tools to traditional areas of secular activism, such as criticizing religions, ideologies, or pseudoscience. But we can also, and should also, apply them to ourselves and our own missions, assumptions, and methods. This is about that.
In conjunction with these expanded notes, review my accompanying slideshow. Also note that the text of this document was updated a few days after my associated talk was delivered, to include overlooked caveats and side-points, thanks to useful comments and questions I received afterward.
First, note the list of bullet points with which I concluded that more general discussion, and now consider them as advice for running an organization, with respect both to dispute resolution (internal and external) and to resource allocation (such as deciding which goals to prioritize or devote more or less attention to)...
The Principles of Critical Thought:
- Questioning information rather than merely receiving it (trust but verify).
- Constant skill applied to all knowledge and belief (not to be compartmentalized).
- Not an exercise; a tool for belief testing and filtering (defense against false beliefs).
- Must be applied to yourself as well as others (self-question, self-test, self-critique).
- Not radical skepticism (work out when information is enough to settle a conclusion).
- Step 1: Check the facts (check multiple sources and evaluate their reliability).
- Step 2: Check for biases and fallacies (your own and those of others).
- Step 3: Consider alternative explanations of the evidence and test them.
- Find the best defenses of either side of a dispute and compare them.
- Consider your existing background knowledge and endeavor to acquire more of it.
- Rely on facts & evidence, not assumptions.
- Update your beliefs when evidence goes against them.
- Restate all your beliefs as probabilities; then justify those probabilities (or change them if you can’t).
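Those last two principles, updating on evidence and restating beliefs as probabilities, can be made concrete with a simple Bayesian update. Here is a minimal sketch in Python (the numbers are hypothetical illustrations, not from any real case):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a belief after observing evidence,
    via Bayes' theorem: P(H|E) = P(H)P(E|H) / [P(H)P(E|H) + P(~H)P(E|~H)]."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical: you start out 80% confident in a claim, then observe
# evidence that is twice as likely if the claim is false as if it is true.
posterior = bayes_update(prior=0.80, p_evidence_if_true=0.25, p_evidence_if_false=0.50)
# posterior = 0.20 / 0.30, i.e. about 0.67: still probable, but less certain.
```

The point of the exercise is not the precise numbers but the habit: stating how confident you are, and being forced to move that number when the evidence points the other way.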
It might already be clear, once you’ve looked at that list from this very different perspective, how being logical isn’t just for debunking religion and nonsense. It’s also useful for improving your group’s ability to work together, solve internal problems, and achieve external goals. Here are some tips on that...
Logic Requires Facts: Logic will lead you to the basics of making an organization function effectively. There are already many resources on how to do that available from SSA national, and in the many talks and workshops at conferences like this.
I won’t be duplicating that. I also won’t say much about practical principles for maintaining civil discussion within your group, or developing interfaith diplomacy. Those are things you can seek advice on elsewhere, or work out through directed discussion within the group. And logical reasoning can help you with all of that.
But the key point I want to make is this...
Logic entails drawing conclusions from evidence and past cases (as much and as many, and as diverse, as possible), and not from the armchair. This is because the logical soundness of any reasoning requires its premises to be true to a high probability. And that is not possible without a lot of data. And data does not come from the armchair. You can observe that from the armchair itself; but you can amply confirm it by examining the vast record of past data showing how inordinately often armchair reasoning gets things wrong.
Moreover, if data always comes from one source or one test condition, you will be missing data from other sources and conditions. Yet any reasoning carried out with missing data will be prone to error. Because, logically, it is a fallacy of false generalization to assume data from one source or condition will be the same from all sources and conditions. You have to actually confirm that it is, before you can logically conclude that it is. Thus, the idea that logic operates in the realm of concepts, and can be carried out independently of evidence, is itself illogical.
This has relevance to how you apply logic to group effectiveness. It means, for example, as I already noted, that you should rely on the tremendous experience base of SSA National. That’s data. Use it.
But here are some other examples...
- Logic also entails doing things in their logical order, and in a logical way. So, for example, make sure your group or organization is clear as to its goals and priorities. Then you can work out how best to achieve those goals. Logically, understanding your goals must precede building a plan to achieve them.
Logic will also tell you that not all goals might be equally valued. So you should work out a resource allocation plan, by some process that will reflect the wishes of the group. One way to do this is to give everyone whose input should be taken into account ten poker chips or marbles, label one bowl for every goal your group has decided upon, and have everyone distribute their chips or marbles according to their preference. You can then build a pie chart of how the group as a whole wants your resources distributed, by counting the chips or marbles in each bowl and deriving their ratios to each other.
The resulting distribution will not be a precise mathematical mandate, but it will give you a rough idea of how much money, personnel, time (in personnel-hours), and other resources should be devoted to each goal, since it will reflect the summed preferences of the group (or at least of its board of directors). Of course, redo this vote at least every year. You could even write an online app that does all this measuring for you.
The logic of evidence then dictates that if you try this and problems arise, redesign the method, or do something else.
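The tallying itself is simple enough to automate. Here is a minimal sketch in Python of the chip-counting procedure described above (the goal names and vote counts are hypothetical illustrations):

```python
from collections import Counter

def allocation_ratios(votes):
    """Given each member's distribution of ten chips across goals,
    return each goal's share of the grand total as a fraction."""
    totals = Counter()
    for member_votes in votes:        # one dict per member: goal -> chips
        totals.update(member_votes)
    grand_total = sum(totals.values())
    return {goal: count / grand_total for goal, count in totals.items()}

# Hypothetical example: three members each distributing ten chips.
votes = [
    {"tabling": 5, "speakers": 3, "socials": 2},
    {"tabling": 2, "speakers": 6, "socials": 2},
    {"tabling": 3, "speakers": 3, "socials": 4},
]
shares = allocation_ratios(votes)
# "speakers" received 12 of 30 chips, so its share is 0.4 of resources.
```

The resulting fractions are exactly the pie-chart slices: multiply each by your budget, or by available personnel-hours, to get a rough allocation per goal.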
- You should also make sure you are being logical about delegation and division of labor. Because past cases show this is a common point of failure.
Avoid having to do everything yourself. Distribute tasks as widely as possible. If personnel are averse to taking on responsibilities, or are lax in fulfilling them, develop an incentive scheme.
People who give their time and other limited resources (and stress tolerance is a limited resource), should feel that they are being rewarded for it. There should be something fun to look forward to. Of course, always make them feel recognized and appreciated for all they do. But from past cases we know that’s not enough. And being logical means taking into account the facts of the world, like how people think and feel, and what motivates them, and what weighs in their calculations as to where to devote their own resources.
- The logical need for data also entails something surprisingly often overlooked: continuity of institutional knowledge. Rather than continually reinventing the wheel, or continually doing things badly or not learning from mistakes or even from successes, keep a group logbook of every problem met and how it was dealt with and what the outcome was. Pass these logbooks on to future leaders of your club. The accumulated knowledge then remains available, and each successor will have more experience to rely on than the last.
Do the same with event manuals: whenever you undertake an endeavor, whether tabling, running a social or educational event, or even a conference, get something like a three-ring binder, label it, and start accumulating lists and procedures, and change and improve them over time. Then the next time you (or, as it will inevitably be, your successor) undertake the same task, there will be a manual ready, based on past experience, listing all the things to do, and the steps to take and in what order, and so on. These, too, can include an incident history, of problems encountered and how they were dealt with and what the outcome was.
This way your organization will itself have a memory, and thus can learn from experience, and that knowledge will not continually die out, but be passed on and kept alive, and increased and improved, by each succeeding membership of your group.
Working with Emotions: Not only can logic not function without facts, logic is also not an excuse to ignore emotion. In fact, emotions represent the human brain’s final computed reaction to facts. Emotions thus represent what people feeling those emotions do or don’t want.
For that reason, emotions often have to be factored into your logic as facts and premises. They are in and of themselves reasons for doing things. Logic can then be used to determine how best to do those things.
Logic is thus in the service of emotion. Indeed, if it weren’t for emotions, there would be no desire to use logic or anything to use it for. The production or avoidance of emotional states often constitutes the ultimate end goal of every endeavor. And as long as we do that in conjunction with an unwavering pursuit and acceptance of the truth, we will be doing right by ourselves and our society.
Of course there are cautions we all should heed when analyzing emotions...
Since emotions are the conclusions of a (usually subconscious) reasoning process, like all reasoning processes, they can be reporting an erroneous conclusion. They can also, and usually do, report a correct conclusion.
Emotions are our brain’s way of assessing how facts relate to our values. So an emotional reaction ideally will indicate whether certain facts are friendly or hostile to those values. Which is itself a fact of the world that we definitely need to know and act on. But sometimes emotions do get it wrong. Recognizing this does not always make the emotion go away, but understanding that an emotion is excessive or misplaced can be helpful in determining how it factors into any further reasoning.
For more on this point, see my discussion of the philosophy and science of emotion in Sense and Goodness without God, III.10, pp. 193-209.
An obvious example is hating someone for doing something awful, and then discovering they didn’t do that thing. Your emotion was in that case not a correct evaluation of what happened. And yet because emotions operate at a lower level of our cognition than our reasoning, it can be difficult to get rid of our dislike of that person even when we know it is unfounded.
Logic thus must take facts like these into account. And by facts like these, I mean both the fact that emotions can sometimes be wrong and the fact that even when someone knows their emotion is incorrect, the emotion still might not go away, and thus may still have to be accommodated or worked around.
There are many other respects in which our emotions can be a faulty output and thus can be subject to critical analysis. But this being true should not be taken as warrant to doubt or dispute every emotion. Usually emotions are correct, and already function logically. Doubting or disputing the facts or someone’s logic (when those facts or logic warrant doubt or dispute) should precede any doubting or disputing of emotions. And we still must recognize, even then, that incorrect emotions don’t always just go away.
Emotions can sometimes be so strong that they replace facts. This is commonly seen in denialist ideologies, for example, where an emotional dislike of a thing (gays, feminists, vaccines, environmental responsibility) becomes a motivated reason to deny facts and replace them with myths. Recognizing when this has happened can affect how you interact with people in this state. It will not be enough to defend the facts and debunk the myths (as you normally would, with evidence and reason), although that remains necessary. You must also identify the emotional attachment and call attention to it and ask the person to confront it and analyze it.
This can be done calmly and diplomatically, even in the face of growing anger. Recognize that such anger is often an expression of their cognitive dissonance, actual physical discomfort caused by realizing they are holding two contradictory beliefs, a discomfort they will be inclined to avoid by avoiding thinking about it further, or by becoming delusional. Certainly you can no longer interact with a person rationally if they do that. But not everyone will go there, or stay there. And there is no other way forward for them than confronting that uncomfortable contradiction, between their emotional need to believe a thing, and the evidence against it (or lack of evidence for it).
This is most often an issue when engaging external critics of your organization or its goals and policies. But it can sometimes become an issue in the case of internal problems or disputes. In every case, if you see reason to suspect emotions are displacing facts or reason, ask the person to state how they feel (so it is no longer an unstated premise), and ask them what facts are making them feel that way, and why. Listen to them. And only correct them if you actually have evidence to the contrary of what they’ve claimed the facts to be. Otherwise, you must admit you don’t really know that to be the case. In other words, don’t become a victim of motivated reasoning yourself.
This is a major reason for incorporating personal study of cognitive biases in your critical thinking training and repertoire. Again, my critical thinking resources page will provide you with what you need to get a start on that.
But consider an example. We know we all have a strong cognitive bias toward defending the status quo. Once you realize that you are as much subject to that bias as anyone, you will be more cautious about falling victim to it. And consequently, you will fall victim to it less. And you will thereby make better decisions.
Over the last five years, I saw a strong status quo bias often arising within atheist groups and organizations when dealing with harassment complaints, and a harassment problem generally, within the community. A “don’t rock the boat” principle often outweighed a more logical attention to what was going on. It is certainly easier in a value-neutral sense to try and maintain the status quo than to deal with the difficulties and complexities of investigating and acting on a serious complaint, or developing a workable harassment policy and enforcing it. But it is not at all conducive to your group’s goals to permit harassers to continue unchecked within the group. The very worst example of this in human history is the Catholic Church.
The harassment problem in our community has been dealt with increasingly well since, even though that progress has been resisted and inconsistent, and has yet further to go. But the progress made has resulted in a large rise in the proportion of women joining, attending, and in consequence leading and contributing to atheist groups and organizations. I have personally witnessed this especially at the small local group level nationwide. The change is remarkable. Which means that, by now drawing in more and more from half of the population previously driven away, we are on the path toward doubling our numbers, which should be on everyone’s list of organizational goals.
But think about the resistance to this obvious lesson that played out over the last five years, and realize that that same dynamic might arise again in many other ways, not just in the harassment policy debate. For example, in explaining what is still a notable racial disparity in atheist groups. We are not inherently immune to this. If we aren’t constantly checking ourselves, we can all succumb to the status quo bias, in respect to one thing or another. And that’s not being logical.
Because every cognitive bias corresponds to a fallacy of logic. Not necessarily a uniquely named fallacy, but a fallacy all the same. For example, in motivated reasoning, especially in defense of the status quo, you will often be violating the rule of total evidence, e.g. cherry picking evidence or illogically weighting evidence in support of the bias, and not considering all the evidence together, and weighting it objectively. This is why understanding cognitive biases is now an indispensable part of using logic in actual practice, no matter to what purpose.
Know Your Personality and How It Affects Your Cognition: Reasoning is significantly affected by personality traits. Logic itself might be clean and perfect, but its application in the human brain is not. A sound application of logic requires being aware of this, and compensating or controlling for the role personality plays in reasoning and judgment.
For example, if you are at all conservative or libertarian (or even a liberal authoritarian), odds are you express a strong Ambiguity Intolerance. This is an innate personality trait that cannot be willed away. It represents ruts that your brain rolls in. In fact your brain feels literally uncomfortable rolling out of them. Like other personality traits it can be altered in degree with effort or influence, but that’s difficult, and a lot of it is genetic and thus immutable. It often has to be compensated for, rather than altered.
Ambiguity Intolerance means you are so uncomfortable with ambiguity that you are strongly driven to eliminate it. This leads to “Black and White Thinking,” a well-known fallacy of logic, and one of the most fundamental: misapplying the law of excluded middle. In any dichotomy, trichotomy, etc. (any system of categorizing the world; any list of options or alternatives; etc.), there must not be anything “in between” that is excluded. Otherwise all reasoning that then proceeds will be logically invalid.
For example, the ambiguity of things in the world existing in “shades of grey.” Most things exist on a continuum, and are not simply either/or. One is not, for example, either heterosexual or homosexual. That is a false dichotomy. Because one might be bisexual...or asexual (and yes, that is a thing; “we are all sexual beings” may be a comfortable slogan, but it excludes a certain percentage of actual human beings). And one can be these things in varying degrees, e.g. bisexual with a stronger preference for the same sex, but not a disinterest in the opposite sex. The ambiguity intolerant will have a harder time assimilating this information and using it logically.
Another example is intolerance of uncertainty. A strong discomfort with being uncertain can lead to black and white thinking about probabilities. You will tend to assume (or act as if) claims must be either true or false, when in fact there is a whole range of varying degrees of probability for many claims, which for the ambiguity intolerant just won’t compute. You will see this in the reaction often to sexual assault claims: the accused must be either guilty or not guilty; there is no middle option...except that there is: they might be probably but not certainly guilty. The ambiguity intolerant will be so uncomfortable with that, that they will avoid conceding it or coping with it.
Avoiding these nuances and complexities results in black and white thinking, which often leads to authoritarian (usually conservative) thought. When interacting with conservatives, you should keep this in mind and always look for when it is happening, point it out, and compel them to confront it in themselves. And if you have conservative (or liberal authoritarian) tendencies, you should always look for when it is happening in you and confront it in yourself. Because if you don’t, your categorical reasoning will be intrinsically fallacious, and thus not logical.
Similar attention should be devoted to other common personality traits, such as the Big Five. That is not the only way to categorize and analyze personality differences, but it is one of the most useful and widely studied, for which tests exist online that will help you gauge where you are on five major and influential dimensions of personality. You should take one. Because then you will know your own personality quirks and how they affect your cognition. Likewise think about these dimensions when assessing anyone you may be arguing or dealing with, even inside your organization: the more accurately you can place them on the spectrum of the Big Five, the more aware you will be of how their personality factors are influencing their reasoning, and you can then confront them if necessary, or control or compensate for them.
This also means respecting personality types and how they have advantages and disadvantages, so you can accommodate them, and place people in positions best suited to their personality type. Keeping group members comfortable and happy has to be a major organizational goal, because the satisfaction of all other goals depends so crucially on it.
So study what you can of these personality traits and think not just about how certain positions on each spectrum can produce disadvantages to reasoning (which you may have to confront or work around) and to working with someone (which you may have to confront or work around), but also think carefully about how, at the same time, each and every one can produce advantages that you can make use of in your organization. Diversity of personality type is an asset to division of labor and therefore to the success of every organization, up to and including the whole of civilization. That is why so much variation exists in human personality.
Those Big Five dimensions of personality are:
- Openness
- Conscientiousness
- Extraversion / Introversion
- Agreeableness
- Neuroticism
I will give just a few examples from each. There are many more ways these personality traits can generate advantages and disadvantages, beyond these.
Low Openness results in not liking change, unconventionality, exploration, or out-of-box thinking. Which can also manifest in not liking diversity (because that requires being open to new experiences). It is common for conservatives to fall on that end of this spectrum, and liberals (especially creatives) the other. Openness happens to be strongly inversely correlated with ambiguity intolerance. It also impacts reasoning by ensuring a lower quantity of data about the world to reason from (often resulting in ethnocentric or egocentric assumptions about the world and how it works). But remember there can be advantages to low openness. It can produce consistency and predictability in a person; they may be comfortable with not doing new things all the time; they like structure; etc.
Low Conscientiousness results in a lower self-discipline and sense of responsibility. Such a person will be messier, less driven, more unreliable, less thoughtful about cleaning up or putting things away or planning an event, and things like that. Whereas someone on the other end of this spectrum will be a highly disciplined and responsible person. This has obvious implications for what responsibilities you assign a person in an organization. The conscientious might be good leaders and organizers and managers. The less conscientious might be best placed in positions where they have one task to perform (ideally one that plays to their other personality traits, like interacting with people), and someone else is responsible for everything else, or directs them and verifies their completion of assigned tasks. This attribute’s effect on reasoning can again be in ensuring a correspondingly high or low production of data to reason from (the difference between doing your homework or due diligence, and not).
Low Extraversion entails Introversion. The extraverted often love talking and socializing and interacting with people. The introverted sometimes do not. The extraverted more often assert themselves and are highly enthusiastic. The introverted more often don’t, and aren’t. It’s important to remember that asking an introvert to do extroverted things can make them feel very uncomfortable; likewise trying to rein in an extravert. You may want to give social and human interaction tasks to extroverts, and assign more cerebral and behind-the-scenes tasks to introverts. For more on the advantages and disadvantages of both, see Sarah Davis’s commentary, In Defense of Extroverts, and Susan Cain’s book Quiet: The Power of Introverts in a World that Won’t Stop Talking. This attribute’s effect on reasoning includes when an extravert incorrectly assumes everyone is like themselves, and thus operates from false premises.
High Agreeableness will make a person happy to oblige requests and value getting along with others. They will be more giving, helpful, considerate, and willing to compromise and keep promises and tell the truth. They will also be more trusting of people (which can produce fallacious reasoning). Low Agreeableness will make someone less giving, less considerate, less helpful, less willing to compromise, less forthcoming, and more willing to abandon a promise, for example one that they deem burdensome. Such people tend to value self-interest over social welfare, and can be suspicious of the motives of others (which can produce fallacious reasoning). At the absolute bottom of this spectrum is sociopathy. Although sociopaths can feign agreeableness when it serves their interests, being feigned it is easily abandoned. Recognizing where someone falls on this spectrum (especially yourself) can make a significant difference in how you choose to interact with other people, and whether and in what ways you do things in empathy with others, like check your privilege, or inordinately trust a source.
High Neuroticism correlates to negative emotional instability. Currently Wikipedia provides a good summary:
Neuroticism is the tendency to experience negative emotions, such as anger, anxiety, or depression. It is sometimes called emotional instability, [and its reverse] emotional stability. ... Those who score high ... are emotionally reactive and vulnerable to stress. They are more likely to interpret ordinary situations as threatening, and minor frustrations as hopelessly difficult. Their negative emotional reactions tend to persist for unusually long periods of time, which means they are often in a bad mood. For instance, neuroticism is connected to a pessimistic approach toward work, confidence that work impedes personal relationships, and apparent anxiety linked with work.
The significance of this for cognition is self-evident (e.g. the neurotic’s premises may be overly pessimistic and thus non-factual; the non-neurotic’s premises may be overly optimistic and thus non-factual). But with respect to group dynamics, it’s important to recognize that personality traits are extremely difficult to alter (and even then can be altered only to limited degrees). Thus, it may be important to take into account someone’s neuroticism and plan around it, for their benefit as well as the group’s.
Personality is not immutable, but it is nearly so. In each case, maybe a quarter of a personality trait is adjustable with extensive effort (but that means someone has to have a really strong reason to engage such effort, and not everyone can be expected to be so strongly motivated; success can even require professional therapy to accomplish); another quarter is molded (possibly irreversibly) in childhood; and the remaining half or so is genetic (and thus unalterable).
This is significant, because it means you cannot realistically expect people to be substantially different than they are. Their personality, and its effect on their reasoning and their suitability for certain roles and tasks, simply has to be worked with, because it cannot be changed. And it’s just as important to realize this about yourself, as it is to realize it about others. Sound reasoning requires being aware of how personality may interfere with it, whether your own personality or someone else’s. And such awareness is invaluable to effectively delegating roles and keeping members comfortable.
Also remember that these dimensions of personality are continuums, and that they are not mandates. A particular person is not either totally conscientious or totally not, for example, but more or less, in varying degrees. Moreover, personality is more complex than the Big Five model. The model is just a convenient way to extract some useful information from an otherwise confusing complexity. But that greater complexity is still the reality.
Accordingly, someone may cross boundaries of personality expectations at certain points, or vary their location on a spectrum in conjunction with their mood. And these are also facts of the world to take into account. When thinking about personality in respect to delegating tasks, for instance, always still consult individual wishes and distinctions. For example, introverts sometimes do well at certain extroverted tasks like public speaking, or want to build experience doing them. Or an extrovert might be having a bad day or week (particularly if they score high in neuroticism) and in result momentarily be more comfortable with introversion.
Start with the Particular and then Generalize: Effective reasoning in practical situations (whether in debate or dialogue, or evaluating claims or complaints, or giving advice or making decisions) requires a certain procedure to avoid a common error: try to always focus on particulars, especially concrete real-world examples, before moving to abstractions and generalizations.
Often the worst thing you can do is rant about something in the abstract and never provide a real world example of what you mean. Errors in the step of reasoning from particulars to abstractions are disguised when you do that; and more effective discussion can be had when there is a real-world example to discuss and analyze.
One example of this point is in respect to messaging and public relations. The very popular and beloved podcaster Seth Andrews recently posted a rant telling certain unidentified people getting angry about every little thing to stop being overly sensitive and “grow a thicker skin.” He gave no actual examples of who he was talking about or even an example in concept of what he was talking about. He was unaware that the exact same generalizations are used online to attack and harass women, and to dismiss feminism and any awareness of the harm caused by insensitivity to racial, ableist, and other issues. He would have avoided the appearance of siding with online harassers and right-wingers had he actually given concrete examples (ideally real-world examples, but at least thoughtful or realistic hypotheticals), and then developed his abstractions from those examples.
This is an error repeated so often that once you know to look for it, you’ll be surprised how often it occurs. Ron Lindsay’s disastrous opening speech for the Women in Secularism conference in 2013 was an example, in which he included a list of complaints without providing a single example of what he was actually referring to, allowing his remarks to be interpreted as being intentionally broad, which painted him (and his organization; he was president of, and officially representing, the Center for Inquiry) in a very negative light, contrary to his intent.
This can be a problem well beyond messaging and public relations. In Peter Boghossian’s otherwise worthwhile book A Manual for Creating Atheists, he included a chapter that appeared to attack all feminists as reactionary postmodernist enemies of reason and reality. A close analysis shows he was in fact attacking some unusual subset of academic feminists. Yet he appears to actually believe that this is the norm, or implies that it is.
Notably, at no point in that chapter does Boghossian ever give an example of who he means, or what they actually said about anything. Had he taken the trouble to research, collect and study actual, real-world examples, he would have written a far better and more accurate chapter on the subject. Instead, he reasoned from the abstract, from his imagination, and never checked it against the particulars of the real world. Don’t do that. All abstract argument must begin from a familiarity with the particular examples you are abstracting from.
This extends to all abstractions and generalizations. For example, Boghossian claimed “most” academic feminists fit the profile he constructed. That claim is also a generalization, which requires not just finding examples of academic feminists fitting his profile, but reading widely across all contemporary feminism, to verify that those fitting his profile are the norm, rather than a fringe minority.
Watch yourself, and catch when you do this, when you skip the step of collecting concrete examples and then abstracting from them, and instead just directly launch from an abstract or general premise. And ask yourself, if you aren’t using real-world particulars, why? Is the fact that you can’t find real world examples significant? Did you even check to see if there were examples? Did you verify if they were normal or unusual? Do the actual properties of those concrete cases support the particular generalization you are basing on them?
Real world examples tie you to evidence, and to the way the world really works, so you can test models of reality against reality, rather than what you only imagine in your head. If you need real world examples (because you aren’t already personally or sufficiently familiar with any), admit that this is a state of ignorance you have to responsibly rectify. Talk to people who have examples and can supply you with full accounts (people who have experienced them, worked with them in a real-world way, researched them as a professional historian or psychologist, or what have you), or read widely what such people have written.
I have found that this is extremely important in subjects where many people don’t have enough personal experience for their own assumptions or imagination to be reliable. For example, in the way men often react incredulously to women discussing their experiences with sexual harassment, assault, or rape; in the way cisgender heterosexuals often misunderstand homosexuality and transgenderism; and so on. Don’t generalize. Don’t make abstract claims, whether as premises or conclusions, until you have enough familiarity with numerous concrete cases, which you can only reliably learn from people who have direct, real-world experience with them (or from others who learned from them). This is a logically necessary step.
Learn to Restate Your Every Premise or Belief as a Probability: There is a fallacious tendency in all of us to reason from “absolute” premises, which we treat as simply true or false. This is not just caused by some personality types. The most commonly taught systems of deductive logic have this assumption built-in. So if you study logic at that level, counter-intuitively, you are actually being trained to reason fallaciously, not logically.
Instead of reasoning this way, which is invalid, restate everything as a probability. Don’t just say God exists or doesn’t exist. Be ready to state how probable you believe it is that any particular god exists. Don’t just say something happened or didn’t happen. Be ready to state how probable you think it is that it happened. Etc.
This of course ensures you will avoid the black-and-white fallacy. But it will also force you to confront the fact that you need to justify your probability assignments. Why do you think it’s that probability, and not more probable or less probable? Are you just picking that number from an unexamined gut reaction, or do you have reasons, and are those reasons good ones?
(Of course be aware that sound reasoning with probability always includes allowed margins of error. So you don’t have to pick an exact probability, but rather an upper and lower probability, beyond which you are certain the actual probability cannot be, but between which you are not certain where the actual probability is. I discuss this procedure, and probability logic generally, in Proving History.)
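To make that procedure concrete, here is a minimal sketch in Python, using made-up numbers purely for illustration: run the same Bayesian update twice, once with every estimate set as far against the hypothesis as you believe possible and once with every estimate as far in its favor, and report the resulting range rather than a false point-value.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' theorem for a hypothesis versus its negation."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Hypothetical estimates: suppose you are only sure the prior lies
# somewhere between 0.3 and 0.5, and likewise you can only bound the
# likelihoods. Take the bounds least favorable to the hypothesis...
lower = posterior(prior=0.3, p_evidence_if_true=0.5, p_evidence_if_false=0.9)
# ...and the bounds most favorable to it.
upper = posterior(prior=0.5, p_evidence_if_true=0.9, p_evidence_if_false=0.5)

print(f"The hypothesis is between {lower:.0%} and {upper:.0%} probable.")
```

Whatever conclusion survives even the least favorable bound is one you can assert a fortiori, without having to defend any single exact probability.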
You will notice, for example, how quickly this approach dissolves most atheism vs. agnosticism debates. Once we are only talking about how probable it is that any god exists, distinctions between “atheist” and “agnostic” all but evaporate; the vocabulary to distinguish them disappears. The same follows for many other disputes: re-phrasing everything in terms of probabilities exposes the real differences and ambiguities in any discussion.
This practice also helps foster more humility, open mindedness, and honesty about how certain you are of things and why. It therefore allows more agreeability to work with others, by reducing your fanaticism or radicalism. It also leads to sounder reasoning in all domains, and creates sharper opportunities to criticize bad ideas. Forcing someone to pin a probability to their claim allows you to start debating where they get that probability from and how. Which gets right to the heart of what logic they are actually using, and what facts they are relying on (see, for example, the use made of this fact by philosopher Stephen Law).
Understand and Apply Bayesian Rules of Evidence: The next step from there is to realize the lessons of Bayes’ Theorem. I more thoroughly cover that subject elsewhere. Here I just want to emphasize two points, as they relate to all decision making and claim evaluation...
- Understand the Bayesian rule of evidence: you must compare competing hypotheses. All other methods of inductive reasoning are logically invalid. So what are the competing hypotheses? State them fairly, articulate their most credible versions—and then ask how likely all the evidence is on each hypothesis. Include in “all the evidence” any absences of evidence. Exclude nothing relevant.
Then, if those probabilities are different, how different are they? That difference measures how much more likely one explanation is than another, if they start out equally likely. So if the evidence is really improbable on all explanations but one, that one explanation will be far more probable than any other. Key to all of this is recognizing that the probability you need to estimate here is the probability of the evidence, not the hypothesis. And that you need to compare that probability against others, thus comparing competing explanations against each other.
This requires taking alternative explanations seriously. Otherwise, you will fall into the fallacy of confirmation bias: looking for evidence that is expected on one explanation (the one you prefer, or test first), and concluding that if you find it, that explanation is true. That’s false. Because that same evidence might be just as likely on some other explanation—in which case, both explanations are equally likely (if they started out equally likely before that). Likewise, if you only look for corroboratory evidence, you will overlook evidence that goes against it—evidence that is (to some degree) unexpected, and therefore improbable, on that hypothesis.
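The confirmation-bias fallacy can be shown numerically. In this sketch (the numbers are invented purely for illustration), evidence that is highly expected on your hypothesis confirms nothing if it is equally expected on a rival hypothesis; only the ratio of the two likelihoods moves the odds.

```python
def update_odds(prior_odds, p_e_given_h1, p_e_given_h2):
    """Multiply the prior odds (H1 : H2) by the Bayes factor P(E|H1) / P(E|H2)."""
    return prior_odds * (p_e_given_h1 / p_e_given_h2)

# Both hypotheses start out equally likely (odds 1:1).

# Case 1: the evidence is strongly expected on H1 (90%)... but it turns
# out to be just as expected on H2. The odds stay 1:1 -- no confirmation.
no_help = update_odds(1.0, 0.9, 0.9)

# Case 2: the same evidence, but it would be very unexpected on H2.
# Now the odds shift to roughly 9:1 in favor of H1.
strong = update_odds(1.0, 0.9, 0.1)
```

This is why you must actively ask how likely the evidence would be on the explanations you are *not* testing, not just the one you prefer.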
This is because, for logical validity, you have to consider all the evidence, as well as all the hypotheses (except those that start out absurdly improbable, unless the evidence is also absurdly improbable on every alternative).
In organizational disputes (internal and external), that means listening to both sides. And then comparing the probability of the evidence on each claim or explanation, and if necessary looking for more evidence—ideally evidence that, if found, would be improbable on one claim or explanation but not the other. Because that’s how you determine which is the more likely.
But this also applies to decision-making regarding marketing strategy. Consider, for example, the debate between confrontationists and accommodationists: the former argue that a relentless, public, in-your-face challenge to religion advances our cause (it makes more atheists, gets more atheists organized, generates more funding, and gets more attention from, and thus more successfully educates, the public); the latter argue that a nicer, kinder, more cooperative strategy with religious allies advances our cause.
Of course both can operate side-by-side. And interfaith alliances or activities are possible without having to compromise your values or conceal your objections and concerns regarding false beliefs and faith-based morality. It just requires a mature approach, and a willingness to observe different levels of decorum according to the social situation.
But what some accommodationists argue is that the confrontationist style harms our cause. By the Bayesian rule of evidence, if we define our goal as making more atheists, getting more atheists organized, generating more funding, and getting more attention from (and thus more successfully educating) the public, then on the accommodationist claim we should expect these outcomes to have declined when atheism became more publicly confrontationist (as defines New Atheism). Instead, these measures remarkably increased precisely when that happened. That outcome is improbable on the accommodationist thesis, and therefore that thesis is itself improbable.
Similarly, we should expect the more accommodationist organizations (like the American Humanist Association and the Council for Secular Humanism) to have been more successful on these measures than the more confrontationist organizations (like American Atheists and the Freedom from Religion Foundation). In fact, the opposite has been the case. Indeed, not only do the latter recruit better and get broader funding, it is almost exclusively confrontationists who get featured on national television news programs, a tremendous marketing advantage. That is all very probable on the confrontationist thesis, but improbable on the accommodationist thesis. Confrontationist strategy is therefore more likely to be effective.
That does not mean, however, that all confrontationist strategies are effective. To be effective, a confrontationist approach has to be correct, which means it has to be honest, reasonable, and well informed. Otherwise it becomes self-discrediting (and therefore un-influential or even counter-influential), the more it becomes exposed as in error, or ignorant of relevant facts or nuances. We see evidence of this in the decline in viewership and regard for FOX News and the Republican Party.
In our own community, examples include the tone-deaf and ignorant way feminism and Islam are sometimes confronted and criticized, which turns away a disproportionate number of atheists, and makes atheism look ignorant and awful. The demagogues on the wrong side of these issues claim that it’s the other way around, that their approach and values would work better, but the evidence does not match but in fact reverses that expectation. The evidence of their declining influence and recruitment (especially among more than half the population, which consists of women and minorities), relative to feminist-friendly organizations and organizations that endorse more nuanced criticism of Islam, is very improbable on their thesis, but very probable on the converse thesis.
- Also remember to think about your priors. That means the prior probabilities that you are (often unconsciously) assuming in any given case. This is the initial probability you assign to an explanation or claim, before checking the evidence specifically for or against it.
What priors have you been assuming, and why? Honest priors should be based not on gut assumptions but on what has usually been the case before. Which of two competing kinds of hypothesis, claim, or explanation more typically turns out to be true? Or are they equally likely most of the time?
For example, if a particular person you deal with has been caught lying or distorting the truth several times before, the prior probability that they are lying or distorting the truth in any claim they are making now is higher than for just anyone else making the same claim. Conversely, if you have no such evidence of a person, and if when claims like this are made they usually turn out to be true, then they tend to have high prior probabilities, requiring evidence against them to disbelieve. Which means if there is no evidence against them, they should not be disbelieved. That doesn’t mean you must conclude such a claim is 100% certain. But you should probably conclude it is more probably true than not (which means, greater than 50%).
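The effect of a track record can be sketched in a few lines of Python. All the numbers here are hypothetical stand-ins: a base rate for how often claims of this kind turn out true, and estimates of how readily each kind of reporter would assert the claim if it were false.

```python
def p_claim_true(base_rate, p_assert_if_true, p_assert_if_false):
    """Probability a claim is true, given that this person asserted it."""
    numerator = base_rate * p_assert_if_true
    return numerator / (numerator + (1 - base_rate) * p_assert_if_false)

# Hypothetical numbers: claims of this kind are true half the time a
# priori (base_rate = 0.5); either reporter would assert a true claim
# 90% of the time; but an honest reporter rarely asserts falsehoods
# (10%), while a person repeatedly caught lying does so readily (60%).
honest = p_claim_true(0.5, 0.9, 0.1)  # comes out around 0.90
liar = p_claim_true(0.5, 0.9, 0.6)    # comes out around 0.60
```

The same testimony thus carries very different weight depending on the reporter’s established record, which is exactly why disputes of this kind turn into battles over character evidence.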
Sexual harassment claims, for example, have been verified, more often than not, as true (even if there are enough false claims to be uncertain). That affects the prior probability of any future claim being true. And that means a claimant’s testimony weighs more than that of the accused. Because an accused person has a higher prior probability of lying than an accuser does. That may not get you to certainty. But it certainly gets you to warranted suspicion. To dismiss such a claim thus requires more evidence than the mere denial of the accused. Which is why such disputes often become battling character references, as each party wants to demonstrate that they have a record of unusual honesty (or the other a record of dishonesty), in order to get these probabilities turned in their favor. And that is not invalid. Nor is it always the only evidence available bearing on the matter.
In any such analysis, as long as you are obeying the principle of total evidence (and thus not leaving anything out) and you are basing your probabilities on factual experience with past comparable cases (which requires gathering such data or learning from those who have), you will get reasonable probabilities.
Of course, after all that, it may still be difficult to deal with probabilities that fall short of certainty. But we must confront that fact and develop ways to deal with it, ways other than assuming something is certainly true or certainly false. And what’s left will be probable enough to treat as certain, even if still in varying degrees.