
Coronavirus and Critical Thinking

Let me ask you a simple question: What’s the crime rate in your neighborhood?

To answer the question, you’ll need to do a fair amount of research. You might dig through police reports, census data, city government publications, and so on. It’s a lot of work.

But our brains don’t like to work. As Daniel Kahneman writes, “Thinking is to humans as swimming is to cats. They can do it, but they prefer not to.”

So, instead of answering the original question, we substitute a simpler question: How much crime can I remember in my neighborhood?

If we can remember a lot of crime – if it’s top of mind — we’ll guess that our neighborhood has a high crime rate. If we can’t remember much crime, we’ll guess that we have a low crime rate. We use our memory as a proxy for reality. It’s simple and probably not wholly wrong. It’s good enough.

Let me ask you another simple question: How dangerous is coronavirus?

It’s a tough question. We can’t possibly know the “right” answer. Even the experts can’t figure it out. So, how does our mind work on a tough question like this?

First, we use our memory as a proxy for reality. How top of mind is coronavirus? How available is it to our memory? (This, as you might guess, is known as the availability bias). Our media is saturated with stories about coronavirus. We see it every day. It’s easy to recall from memory. Must be a big deal.

Second, the media will continue to focus on coronavirus for several more months (at least). In the beginning, the media focused on the disease itself. Now, the media is more likely to focus on secondary effects – travel restrictions, quarantines, etc. Soon, the media will focus on reactions to the virus. Protesters will march on Washington demanding immediate action to protect us. The media will cover it.

The media activity is known as an availability cascade. The story keeps cascading into new stories and new angles on the same old story. The cascade keeps the story top of mind. It remains easily available to us. When was the last time we had a huge availability cascade? Think back to 2014 and the Ebola crisis. Sound familiar?

Third, our minds will consider how vivid the information is. How scary is it? How creepy? We remember vicious or horrific crimes much better than we remember mundane crimes like Saturday night stickups. How vivid is coronavirus? We see pictures every day of workers in hazmat suits. It’s vivid.

Fourth, what are other people doing? When we don’t know how to act in a given situation, we look for cues from our fellow humans. What do we see today? Pictures of empty streets and convention centers. We read that Chinatown in New York is empty of tourists. People are afraid. If they’re afraid, we probably should be, too.

Fifth, how novel is the situation? We’re much more afraid of devils we don’t know than of devils that we do know. The coronavirus – like the Ebola virus before it – is new and, therefore, unknowable. Health experts can reassure us but, deep in our heart of hearts, we know that nobody knows. We can easily imagine that it’s the worst-case scenario. It could be the end of life as we know it.

Sixth, is it controllable? We want to think that we can control the world around us. We study history because we think that knowing the past will help us control the future. If something scary is out of our control, we will spare no expense to bring it back under control. Even a small scare – like the Three Mile Island incident – can produce a huge reaction. At times, it seems that the cure may be worse than the disease.

What to do? First, let’s apply some contextual thinking – both current and historical.

  • Current context — compared to other current threats, how bad is coronavirus? As of early February 2020, coronavirus has killed slightly more than 1,100 people worldwide. This compares to 80,000 deaths from plain old ordinary flu in the 2017-2018 flu season in the United States alone.
  • Historical context – compared to previous outbreaks of novel viruses, how bad is coronavirus? Here we might compare coronavirus to Ebola and SARS. To date, coronavirus has killed more people than either Ebola or SARS. So, perhaps coronavirus is more dangerous. But let’s also remember how the stories of Ebola and SARS developed and how they ended. Both started with an outcry that led to near panic. Then the story stopped. We regained control. It’s likely that the coronavirus story will end the same way.

So, what to do? You’re much more likely to succumb to plain old ordinary flu than you are to be infected by coronavirus. So, get a flu shot. Then do what the old British posters from World War II told us all to do: Keep calm and carry on.

Spotting Random Fraud

Let’s say we have an election and 20 precincts report their results. Here’s the total number of votes cast in each precinct:

3271    2987    2769    3389
2587    3266    4022    4231
3779    3378    4388    5327
2964    2864    2676    3653
3453    4156    3668    4218

Why would you suspect fraud?

Before you answer that, let me ask you another question. Would you please write down a random number between one and 20?

Asking you to write down a random number seems like an innocent request. But the word “random” invokes some unusual behavior. It turns out that we all have in our minds a definition of “random” that’s not quite … well, random. Does the number 17 seem random to you? Most people would say, “Sure. That’s pretty random.” Do the numbers 10 and 15 seem random to you? Most people would say, “No. Those aren’t random numbers.”

Why do we have a bias against 10 and 15? Why do we say they aren’t random? Probably because we often round our numbers so that they end in zeros or fives. We say, “I’ll see you in five minutes (or 10 minutes or 15 minutes)”. We rarely say, “I’ll see you in 17 minutes”. In casual conversation, we use numbers that end in zeros or fives far more often than we use numbers that end in other digits. Because we use them frequently, they seem familiar, not random.

So, if we want numbers to look random – as we might in a fraud – we’ll create numbers that fit our assumptions of what random numbers look like. We’ll under-represent numbers that end in fives and zeros and over-represent numbers that end in sevens or threes or nines. But if the numbers are truly random, then all the digits zero through nine should be equally represented.

Now look again at the reported numbers from the precincts. What’s odd is what’s missing. None of the 20 numbers ends in five or zero. But if the numbers were truly random, we would expect – in a list of 20 – about two numbers to end in zero and another two to end in five. The precinct numbers are suspicious. Somebody was trying to make the numbers look random but tripped over their own assumptions about what random numbers look like.
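One way to make this check concrete is to tally the final digits and ask how likely the observed pattern would be if the digits were truly uniform. Here’s a minimal Python sketch; the vote totals are the ones listed above, and the uniform-last-digit assumption (each digit 0 through 9 equally likely) is mine, added for illustration:

```python
from collections import Counter

# Reported vote totals from the 20 precincts above
totals = [3271, 2987, 2769, 3389,
          2587, 3266, 4022, 4231,
          3779, 3378, 4388, 5327,
          2964, 2864, 2676, 3653,
          3453, 4156, 3668, 4218]

# Count how often each final digit appears
last_digits = Counter(n % 10 for n in totals)
for digit in range(10):
    print(digit, last_digits.get(digit, 0))

# If final digits were uniform, each digit would show up about twice in 20.
# Probability that *none* of 20 uniform last digits is a 0 or a 5:
p_no_zero_or_five = 0.8 ** 20
print(f"P(no 0s or 5s in 20 draws) = {p_no_zero_or_five:.3f}")  # about 0.012
```

Under that assumption, a list of 20 totals with no zeros or fives at the end would occur only about one time in a hundred by chance – not proof of fraud, but a good reason to look more closely.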

Moral of the story? If you’re going to cheat, check your assumptions at the door.

By the way, I ask my students to write down a random number between one and 20. The most frequent number is 17, followed by 3, 13, 7, and 9. There is a strong bias towards odd numbers and whole numbers. No one has ever written down a number with a fraction.

Knowledge Neglect — Scots In Saris

Sari!

If I ask you about the crime rate in your neighborhood, you probably won’t have a clear and precise answer. Instead, you’ll make a guess. What’s the guess based on? Mainly on your memory:

  • If you can easily remember a violent crime, you’ll guess that the crime rate is high.
  • If you can’t easily remember such a crime, you’ll guess a much lower rate.

Our estimates, then, are not based on reality but on memory, which of course is often faulty. This is the availability bias. Our probability estimates are biased toward what is readily available to memory.

The broader concept is processing fluency – the ease with which information is processed. In general, people are more likely to judge a statement to be true if it’s easy to process. This is the illusory truth effect – we judge truth based on ease-of-processing rather than objective reality.

It follows that we can manipulate judgment by manipulating processing fluency. Highly fluent information (low cognitive cost) is more likely to be judged true.

We can manipulate processing fluency simply by changing fonts. Information presented in easy-to-read fonts is more likely to be judged true than is information presented in more challenging fonts. (We might surmise that the new Sans Forgetica font has an important effect on processing fluency).

We can also manipulate processing fluency by repeating information. If we’ve seen or heard the information before, it’s easier to process and more likely to be judged true. This is especially the case when we have no prior knowledge about the information.

But what if we do have prior knowledge? Will we search our memory banks to find it? Or will we evaluate truthfulness based on processing fluency? Does knowledge trump fluency or does fluency trump knowledge?

Knowledge-trumps-fluency is known as the Knowledge-Conditional Model. The opposite is the Fluency-Conditional Model. Until recently, many researchers assumed that people would default to the Knowledge-Conditional Model. If we knew something about the information presented, we would retrieve that knowledge and use it to judge the information’s truthfulness. We wouldn’t judge truthfulness based on fluency unless we had no prior knowledge about the information.

A 2015 study by Lisa Fazio et al. starts to flip this assumption on its head. The article’s title summarizes the finding: “Knowledge Does Not Protect Against Illusory Truth”. The authors write, “An abundance of empirical work demonstrates that fluency affects judgments of new information, but how does fluency influence the evaluation of information already stored in memory?”

The findings – based on two experiments with 40 students from Duke University – suggest that fluency trumps knowledge. Quoting from the study:

“Reading a statement like ‘A sari is the name of the short pleated skirt worn by Scots’ increased participants’ later belief that it was true, even if they could correctly answer the question, ‘What is the name of the short pleated skirt worn by Scots?’” (Emphasis added).

The researchers found similar examples of knowledge neglect – “the failure to appropriately apply stored knowledge” – throughout the study. In other words, just because we know something doesn’t mean that we use our knowledge effectively.

Note that knowledge neglect is similar to many other cognitive biases that influence our judgment. It’s easy (“cognitively inexpensive”) and often leads us to the correct answer. Just like other biases, however, it can also lead us astray. When it does, we are predictably irrational.

Concept Creep and Pessimism

What’s your definition of blue?

My friend, Andy, once taught in the Semester at Sea program. The program has an ocean-going ship and brings undergraduates together for a semester of sea travel, classes, and port calls. Andy told me that he was fascinated watching these youngsters come together and form in-groups and out-groups. The cliques were fairly stable while the ship was at sea but more fluid when it was in port.

Andy told me, for instance, that some of the women described some of the men as “Ship cute, shore ugly.” The very concept of “cute” was flexible and depended entirely on context. When at sea, a limited supply of men caused the “cute” definition to expand. In port, with more men available, the definition of cute became more stringent.

We usually think of concepts as more-or-less fixed, unlike other things that expand over time. The military, for instance, is familiar with “mission creep” – a mission may start with small, well-defined objectives that grow over time. Similarly, software developers understand “feature creep” – new features are added as the software is developed. But do concepts creep? The Semester at Sea example suggests that they do, depending on prevalence.

This was also the finding of a research paper published in a recent issue of Science magazine. (Click here). Led by David Levari, the researchers showed that “… people often respond to decreases in the prevalence of a stimulus by expanding their concept of it.” In the Semester at Sea example, as the stimulus (men) decreases, the concept of cute expands. According to Levari et al., this is a common phenomenon and not just related to hormonal youngsters isolated on a ship.

The researchers started with a very neutral stimulus – the color of dots. They presented 1,000 dots ranging in color from purple to blue and asked participants to identify the blue ones. They repeated the trial several hundred times. Participants were remarkably consistent in each trial. Dots identified as blue in the first trials were still identified as blue in the last trials.

The researchers then repeated the trials while reducing the number of blue dots. Would participants in the second set of trials – with decreased stimulus — expand their definition of “blue” and identify dots as blue that they had originally identified as purple? Indeed, they would. In fact, the number of purple-to-blue “crossover” dots was remarkably consistent through numerous trials.
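To see how a prevalence-driven shift like this could arise, here is a toy simulation. It is not the researchers’ actual procedure, just a sketch of one plausible mechanism: a judge who calls a dot “blue” whenever it is bluer than the average hue seen in the current block of trials. The hue scale, the 50/50 and 10/90 mixes, and the relative-judgment rule are all my assumptions:

```python
import random

random.seed(0)

def make_dots(n, blue_share):
    """Dots on a purple(0)-to-blue(100) hue continuum."""
    return [random.uniform(50, 100) if random.random() < blue_share
            else random.uniform(0, 50)
            for _ in range(n)]

def judge(dots):
    """Relative judge: a dot counts as 'blue' if its hue exceeds
    the average hue of the block it appears in."""
    threshold = sum(dots) / len(dots)
    labels = [hue > threshold for hue in dots]
    return labels, threshold

_, common_threshold = judge(make_dots(1000, blue_share=0.5))
_, rare_threshold = judge(make_dots(1000, blue_share=0.1))  # blue dots become scarce

print(f"threshold when blue is common: {common_threshold:.1f}")  # roughly 50
print(f"threshold when blue is rare:   {rare_threshold:.1f}")    # roughly 30
```

In this toy model, dots with hues between roughly 30 and 50 – called purple when blue dots were common – now get labeled blue. The concept expands as the stimulus becomes scarce.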

The researchers also varied the instructions for the comparisons. In the first study, participants were told that the number of blue dots “might change” in the second pass. In a second study, participants were told that the number of blue dots would “definitely decrease.” In a third study, participants were instructed to “be consistent” and were offered monetary rewards for doing so. In some studies the number of blue dots declined gradually. In others, the blue dots decreased abruptly. These procedural changes had virtually no impact on the results. In all cases, declining numbers of blue dots resulted in an expanded definition of “blue”.

Does concept creep extend beyond dots? The researchers did similar trials with 800 images of human faces that had been rated on a continuum from “very threatening” to “not very threatening.” The results were essentially the same as the dot studies. When the researchers reduced the number of threatening faces, participants expanded their definition of “threatening.”

All these tests used visual stimuli. Does concept creep also apply to nonvisual stimuli? To test this, the researchers asked participants to evaluate whether 240 research proposals were ethical or not. The results were essentially the same. When the participants saw many unethical proposals, their definition of ethics was fairly stringent. When they saw fewer unethical proposals, their definition expanded.

It seems then that “prevalence-induced concept change” – as the researchers label it – is probably common in human behavior. Could this help explain some of the pessimism in today’s world? For example, numerous sources verify that the crime rate in the United States has declined over the past two decades. (See here, here, and here, for example). Yet many people believe that the crime rate has soared. Could concept creep be part of the problem? It certainly seems likely.

Yet again, our perception of reality differs from actual reality. Like other cognitive biases, concept creep distorts our perception in predictable ways. As the number of stimuli – from cute to blue to ethical – goes down, we expand our definition of what the concept actually means. As “bad news” decreases, we expand our definition of what is “bad”. No wonder we’re pessimistic.

How Many Cognitive Biases Are There?

Source: Wikipedia — List_of_cognitive_biases

In my critical thinking class, we begin by studying 17 cognitive biases that are drawn from Peter Facione’s excellent textbook, Think Critically. (I’ve also summarized these here, here, here, and here). I like the way Facione organizes and describes the major biases. His work is very teachable. And 17 is a manageable number of biases to teach and discuss.

While the 17 biases provide a good introduction to the topic, there are more biases that we need to be aware of. For instance, there’s the survivorship bias. Then there’s the swimmer’s body fallacy. And the Ikea effect. And the self-herding bias. And don’t forget the fallacy fallacy. How many biases are there in total? Well, it depends on who’s counting and how many hairs we’d like to split. One author says there are 25. Another suggests that there are 53. Whatever the precise number, there are enough cognitive biases that leading consulting firms like McKinsey now have “debiasing” practices to help their clients make better decisions.

The ultimate list of cognitive biases probably comes from Wikipedia, which identifies 104 biases. (Click here and here). Frankly, I think Wikipedia is splitting hairs. But I do like the way Wikipedia organizes the various biases into four major categories. The categorization helps us think about how biases arise and, therefore, how we might overcome them. The four categories are:

1) Biases that arise from too much information – examples include: We notice things already primed in memory. We notice (and remember) vivid or bizarre events. We notice (and attend to) details that confirm our beliefs.

2) Not enough meaning – examples include: We fill in blanks from stereotypes and prior experience. We conclude that things that we’re familiar with are better in some regard than things we’re not familiar with. We calculate risk based on what we remember (and we remember vivid or bizarre events).

3) How we remember – examples include: We reduce events (and memories of events) to the key elements. We edit memories after the fact. We conflate memories that happened at similar times even though in different places or that happened in the same place even though at different times, … or with the same people, etc.

4) The need to act fast – examples include: We favor simple options with more complete information over more complex options with less complete information. Inertia – if we’ve started something, we continue to pursue it rather than changing to a different option.

It’s hard to keep 17 things in mind, much less 104. But we can keep four things in mind. I find that these four categories are useful because, as I make decisions, I can ask myself simple questions, like: “Hmmm, am I suffering from too much information or not enough meaning?” I can remember these categories and carry them with me. The result is often a better decision.

 
