The Soviet Union collapsed on December 26, 1991. While signs of decay had been growing, the final collapse happened with unexpected speed. The union disappeared almost overnight and surprisingly few Soviet citizens bothered to defend it. Though it had seemed stable – and persistent – even a few months earlier, it evaporated with barely a whimper.
We could (and probably will) debate for years why the USSR disappeared, but I suspect that two cognitive biases – false consensus and preference falsification – were significant contributors. Simply put, many people lied. They said that they supported the regime when, in fact, they did not. When they looked to others for their opinions, those people also lied about their preferences. It seemed that people widely supported the government. They said so, didn’t they? Since a majority seemed to agree, it was reasonable to assume that the government would endure. Best to go with the flow. But when cracks in the edifice appeared, they quickly brought down the entire structure.
Why would people lie about their preferences? Partly because they believed that a consensus existed in the broader community. Dissenting from a perceived consensus feels socially risky, so it seems safer to echo what everyone else appears to believe.
False consensus and preference falsification can lead to illogical outcomes such as the Abilene paradox. Nobody wanted to go to Abilene but each person thought that everybody else wanted to go to Abilene … so they all went. A false consensus existed and everybody played along.
We can also see this happening with the risky shift. Groups tend to make riskier decisions than individuals. Why? Oftentimes, it’s because of a false consensus. Each member of the group assumes that other members of the group favor the riskier strategy. Nobody wants to be seen as a wimp, so each member agrees. The decision is settled – everybody wants to do it. This is especially problematic in cultures that emphasize teamwork.
Reinhold Niebuhr may have originated this stream of thought in his book Moral Man and Immoral Society, originally published in 1932. Niebuhr argued that individual morality and social morality were incompatible. We make individual decisions based on our moral understanding. We make collective decisions based on our understanding of what society wants, needs, and demands. More succinctly, “Reason is not the sole basis of moral virtue in man. His social impulses are more deeply rooted than his rational life.”
In 1995, the economist Timur Kuran updated this thinking with his book Private Truths, Public Lies. While Niebuhr focused on the origins of such behavior, Kuran focused more attention on the outcomes. He notes that preference falsification helps preserve “widely disliked structures” and provides an “aura of stability on structures vulnerable to sudden collapse.” Further, “When the support of a policy, tradition, or regime is largely contrived, a minor event may activate a bandwagon that generates massive yet unanticipated change.”
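Kuran’s bandwagon is easy to see in a toy simulation. Here’s a minimal sketch, assuming a simple threshold model in the spirit of Kuran (and of Granovetter’s classic threshold models) rather than anything taken from the book itself: each citizen privately opposes the regime but voices dissent only once enough others visibly have.

```python
import random

# A toy threshold-cascade model of preference falsification.
# All parameters are hypothetical, chosen only for illustration.

random.seed(42)
N = 1000

# Each citizen privately opposes the regime but publicly "supports" it.
# They drop the pretense only when the visible share of dissenters
# reaches their personal threshold. Squaring skews thresholds low, so
# most people need to see only modest dissent before they flip.
thresholds = [random.random() ** 2 for _ in range(N)]

# A handful of bold souls dissent from the start, no matter what.
dissenting = [t < 1e-4 for t in thresholds]

rounds = 0
while True:
    visible = sum(dissenting) / N  # share of public dissent everyone sees
    flipped = False
    for i in range(N):
        if not dissenting[i] and thresholds[i] <= visible:
            dissenting[i] = True  # stop falsifying; join the bandwagon
            flipped = True
    if not flipped:
        break
    rounds += 1

print(f"Public dissent after {rounds} rounds: {sum(dissenting) / N:.0%}")
```

With thresholds skewed low, a tiny seed of open dissent cascades to nearly the whole population; with other distributions the cascade stalls almost immediately. That sensitivity is exactly why the change is “massive yet unanticipated.”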
How can we mitigate the effects of such falsification? Like other cognitive biases, I doubt that we can eliminate the bias itself. As Lady Gaga sings, we were born this way. The best we can do is to be aware of the bias and question our decisions, especially when our individual (private) preferences differ from the (perceived) preferences of the group. When someone says, “Let’s go to Abilene,” we can ask, “Really? Does anybody really want to go to Abilene?” We might be surprised at the answer.
Let’s say that you’re trying to decide what causes what and keep getting stuck between multiple alternatives. (This is a process known as Inference to the Best Explanation). You’ve put away your cognitive biases and built arguments that are sound and valid. Your friends give you a lot of advice. But you just can’t decide.
Logical razors can help you out of the jam. A razor helps you eliminate choices, which is often as important as creating options in the first place. A razor says, “It’s not likely to be this one ….” By eliminating options, you make your decision easier. You’re more likely to find the best explanation.
Razors don’t use airtight logic, so they’re not foolproof. They could conceivably point you away from a solution that actually works. In general, however, they give you a process for working through ideas and eliminating the least probable ones. Here are my favorite razors; after the list, a small code sketch shows how they might work as filters.
Occam’s razor – among competing explanations, the one that makes the fewest assumptions is most likely to be right. This was the original razor and comes from William of Ockham, a Franciscan friar who lived in the late 13th and early 14th centuries.
Hitchens’s razor – that which can be asserted without evidence can also be dismissed without evidence. We should, of course, ask for evidence to back up a hypothesis. If no evidence exists, we can safely dismiss the hypothesis.
Hanlon’s razor – never attribute to malice that which can be adequately explained by stupidity. If someone injures you, don’t assume that they did so with malicious intent. It’s more likely that they’re just stupid.
Hume’s razor – if a presumed cause is not sufficient to create an observed effect, we must either eliminate the cause from consideration or show what needs to be added to create the effect.
Alder’s razor – a question that can’t be settled by experimentation is not worth debating.
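To make the elimination process concrete, here’s a minimal sketch in Python that treats three of the razors as filters over candidate explanations. Everything in it – the fields, the assumption counts, the example claims – is a hypothetical stand-in of my own; real judgment is messier than a predicate.

```python
from dataclasses import dataclass

# A toy sketch: razors as filters over candidate explanations.
# All fields and example data are hypothetical, purely for illustration.

@dataclass
class Explanation:
    claim: str
    assumptions: int    # unsupported assumptions the explanation needs
    has_evidence: bool  # Hitchens: was any evidence offered at all?
    testable: bool      # Alder: could an experiment settle it?

candidates = [
    Explanation("The server ran out of memory under load", 1, True, True),
    Explanation("A disgruntled admin sabotaged the build", 4, False, True),
    Explanation("Cosmic rays flipped exactly the wrong bits", 6, False, False),
]

# Hitchens's razor: asserted without evidence, dismissed without evidence.
survivors = [e for e in candidates if e.has_evidence]

# Alder's razor: untestable questions aren't worth debating.
survivors = [e for e in survivors if e.testable]

# Occam's razor: among the survivors, prefer the fewest assumptions.
best = min(survivors, key=lambda e: e.assumptions)
print(best.claim)  # -> The server ran out of memory under load
```

The order matters less than the habit: each razor strikes out a class of explanations, and whatever survives is your best candidate for Inference to the Best Explanation.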
Logical razors help you scrape away explanations that are possible but not probable. They can help you think more clearly. But they don’t always lead you to a conclusion. At some point, you may have to make Pascal’s Wager.
In her book Critical Thinking: An Appeal to Reason, Peg Tittle has an interesting and useful way of organizing 15 logical fallacies. Simply put, they’re all irrelevant to assessing whether an argument’s conclusion is true. Using Tittle’s guidelines, we can quickly sort out what we need to pay attention to and what we can safely ignore.
Though these fallacies are irrelevant to truth, they are very relevant to persuasion. Critical thinking is about discovering the truth; it’s about the present and the past. Persuasion is about the future, where truth has yet to be established. Critical thinking helps us decide what we can be certain of. Persuasion helps us make good choices when we’re uncertain. Critical thinking is about truth; persuasion is about choice. What’s poison to one is often catnip to the other.
With that thought in mind, let’s take a look at Tittle’s 15 irrelevant fallacies. If someone tosses one of these at you in a debate, your response is simple: “That’s irrelevant.”
Chances are that you’ve used some of these fallacies in a debate or argument. Indeed, you may have convinced someone to choose X rather than Y using them. Though these fallacies may be persuasive, it’s useful to remember that they have nothing to do with truth.
If I ask you about the crime rate in your neighborhood, you probably won’t have a clear and precise answer. Instead, you’ll make a guess. What’s the guess based on? Mainly on your memory: whichever examples of crime come to mind most easily.
Our estimates, then, are not based on reality but on memory, which of course is often faulty. This is the availability bias. Our probability estimates are biased toward what is readily available to memory.
The broader concept is processing fluency – the ease with which information is processed. In general, people are more likely to judge a statement to be true if it’s easy to process. This is the illusory truth effect – we judge truth based on ease of processing rather than objective reality.
It follows that we can manipulate judgment by manipulating processing fluency. Highly fluent information (low cognitive cost) is more likely to be judged true.
We can manipulate processing fluency simply by changing fonts. Information presented in easy-to-read fonts is more likely to be judged true than is information presented in more challenging fonts. (We might surmise that the new Sans Forgetica font, which was designed to be deliberately hard to read, has an important effect on processing fluency.)
We can also manipulate processing fluency by repeating information. If we’ve seen or heard the information before, it’s easier to process and more likely to be judged true. This is especially the case when we have no prior knowledge about the information.
But what if we do have prior knowledge? Will we search our memory banks to find it? Or will we evaluate truthfulness based on processing fluency? Does knowledge trump fluency or does fluency trump knowledge?
Knowledge-trumps-fluency is known as the Knowledge-Conditional Model. The opposite is the Fluency-Conditional Model. Until recently, many researchers assumed that people would default to the Knowledge-Conditional Model. If we knew something about the information presented, we would retrieve that knowledge and use it to judge the information’s truthfulness. We wouldn’t judge truthfulness based on fluency unless we had no prior knowledge about the information.
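To keep the two models straight, here’s a toy sketch of each as a decision procedure. It’s my own simplification of the models as described above, not the researchers’ formal specification; the example claim, the stored-knowledge table, and the 0.5 cutoff are all hypothetical.

```python
# Toy contrast between the two models. `stored` holds what we actually
# know about a claim (when we know anything); `fluency` scores how easy
# the claim feels to process, from 0 to 1. All names and the cutoff
# are hypothetical illustrations.

def judge_knowledge_conditional(claim, stored, fluency):
    """Knowledge trumps fluency: consult fluency only when we have
    no stored knowledge about the claim."""
    if claim in stored:
        return stored[claim]
    return fluency(claim) > 0.5

def judge_fluency_conditional(claim, stored, fluency):
    """Fluency trumps knowledge: an easy-to-process claim is judged
    true without ever consulting what we actually know."""
    if fluency(claim) > 0.5:
        return True
    return stored.get(claim, False)

stored = {"Sydney is the capital of Australia": False}  # we know: it's Canberra
fluency = lambda claim: 0.9  # repetition has made the claim feel familiar

claim = "Sydney is the capital of Australia"
print(judge_knowledge_conditional(claim, stored, fluency))  # False: knowledge wins
print(judge_fluency_conditional(claim, stored, fluency))    # True: fluency wins
```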
A 2015 study by Lisa Fazio et al. starts to flip this assumption on its head. The article’s title summarizes the finding: “Knowledge Does Not Protect Against Illusory Truth”. The authors write, “An abundance of empirical work demonstrates that fluency affects judgments of new information, but how does fluency influence the evaluation of information already stored in memory?”
The findings – based on two experiments with 40 students from Duke University – suggest that fluency trumps knowledge. Quoting from the study:
“Reading a statement like ‘A sari is the name of the short pleated skirt worn by Scots’ increased participants’ later belief that it was true, even if they could correctly answer the question, ‘What is the name of the short pleated skirt worn by Scots?’” (Emphasis added).
The researchers found similar examples of knowledge neglect – “the failure to appropriately apply stored knowledge” – throughout the study. In other words, just because we know something doesn’t mean that we use our knowledge effectively.
Note that knowledge neglect is similar to the many other cognitive biases that influence our judgment. Relying on fluency is easy (“cognitively inexpensive”) and often leads us to the correct answer. Just like other biases, however, it can also lead us astray. When it does, we are predictably irrational.
Assume, for a moment, that I’m your manager. I call you into my office one day and say, “You’re doing pretty good work … but you’re going to have to get better at shooting free throws on the basketball court. If you want a promotion this year, you’ll need to make at least 75% of your free throws.”
What would you do? Assuming that you don’t resign on the spot, you would probably get a basketball, go to the free throw line, and start practicing free throws (also known as foul shots). Like most skills, you would probably find that your accuracy improves with practice. You might also hire a coach or watch some training videos, but the bottom line is practice, practice, practice.
Now, let’s change the scenario. I call you into my office and say, “You’re doing pretty good work … but you’re going to have to get better at creating ideas. If you want a promotion this year, you’ll need to increase the number of good ideas you generate by at least 75%.”
Now what? Well … I’d suggest that you start practicing the art of creating good ideas. In fact, I’d suggest that it’s not very different from practicing the art of shooting free throws.
But shooting free throws and creating ideas seem to be very different processes. Shooting free throws feels like deliberate work that you consciously control; good ideas seem to arrive out of nowhere, unbidden.
The two activities seem very different but, actually, they’re not. In both cases, you’re doing the work. With free throws, you readily recognize what you’re doing. With ideas, you don’t. Free throws happen in your conscious mind, also known as System 2. New ideas, on the other hand, happen below the level of consciousness, in System 1. When System 1 works up an idea, it pops it into System 2 and you become aware of it.
We understand how to practice something in System 2 – we’re aware of our activity. But how do we practice in System 1? How can we practice something that we’re not aware of?
We think of our mind as controlling our body. But, as Amy Cuddy has pointed out, our bodily activities also influence our mental states. If we make ourselves big, we grow more confident. If we smile, our mood brightens.
So how do we use our bodies to teach our brains to have good ideas? First, we need to observe ourselves. What were you doing the last time you had a good idea? I’ve noticed that most of my good ideas pop into my head when I’m out for a walk. When I’m stuck on a difficult problem, I recognize that I need a good idea. I quit what I’m doing and go for a walk. Oftentimes, it works – my System 1 generates an idea and pops it into System 2.
In my critical thinking classes, I ask my students to raise their hands if they have ever in their lives had a good idea. All hands go up. Everybody has the ability to create good ideas. The question is how to practice it.
Then I ask my students what they were doing the last time they had a good idea. The list includes: out for a walk, driving, riding in a car, bus, or train (but not an airplane), taking a shower, drifting off to sleep, and bicycling.
I also ask them what activities don’t generate good ideas. The list includes: when they’re stressed, highly focused, multitasking, overly tired, overly busy, or sitting in meetings.
So how do we practice the art of having good ideas? By doing more of those activities that generate good ideas (and fewer of those that don’t). The most productive activities – like walking – seem to occupy part of our attention while leaving much of our brainpower free to wander somewhat aimlessly. Our bodily activity influences and stimulates our System 1. The result is often a good idea.
Is that perfectly clear? Good. I’m going for a walk.