
How Many Cognitive Biases Are There?

Source: Wikipedia — List_of_cognitive_biases

In my critical thinking class, we begin by studying 17 cognitive biases that are drawn from Peter Facione’s excellent textbook, Think Critically. (I’ve also summarized these here, here, here, and here). I like the way Facione organizes and describes the major biases. His work is very teachable. And 17 is a manageable number of biases to teach and discuss.

While the 17 biases provide a good introduction to the topic, there are more biases that we need to be aware of. For instance, there’s the survivorship bias. Then there’s the swimmer’s body fallacy. And the IKEA effect. And the self-herding bias. And don’t forget the fallacy fallacy. How many biases are there in total? Well, it depends on who’s counting and how many hairs we’d like to split. One author says there are 25. Another suggests that there are 53. Whatever the precise number, there are enough cognitive biases that leading consulting firms like McKinsey now have “debiasing” practices to help their clients make better decisions.

The ultimate list of cognitive biases probably comes from Wikipedia, which identifies 104 biases. (Click here and here). Frankly, I think Wikipedia is splitting hairs. But I do like the way Wikipedia organizes the various biases into four major categories. The categorization helps us think about how biases arise and, therefore, how we might overcome them. The four categories are:

1) Biases that arise from too much information – examples include: We notice things already primed in memory. We notice (and remember) vivid or bizarre events. We notice (and attend to) details that confirm our beliefs.

2) Not enough meaning – examples include: We fill in blanks from stereotypes and prior experience. We conclude that things that we’re familiar with are better in some regard than things we’re not familiar with. We calculate risk based on what we remember (and we remember vivid or bizarre events).

3) How we remember – examples include: We reduce events (and memories of events) to their key elements. We edit memories after the fact. We conflate memories of events that happened at similar times but in different places, or in the same place but at different times, or with the same people, and so on.

4) The need to act fast – examples include: We favor simple options with more complete information over more complex options with less complete information. Inertia – if we’ve started something, we continue to pursue it rather than changing to a different option.

It’s hard to keep 17 things in mind, much less 104. But we can keep four things in mind. I find that these four categories are useful because, as I make decisions, I can ask myself simple questions, like: “Hmmm, am I suffering from too much information or not enough meaning?” I can remember these categories and carry them with me. The result is often a better decision.
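
To make those self-check questions concrete, here is a minimal sketch in Python. The four category names are Wikipedia’s; the one-line prompts are my own paraphrases of the examples above, so treat the wording as illustrative rather than canonical.

```python
# Wikipedia's four bias categories as a pre-decision checklist.
# Category names are from the post; the prompts are illustrative paraphrases.
CATEGORIES = {
    "Too much information": "Am I only noticing what is vivid, primed, or confirming?",
    "Not enough meaning": "Am I filling in blanks with stereotypes or mere familiarity?",
    "How we remember": "Am I relying on an edited or conflated memory?",
    "The need to act fast": "Am I favoring the simple option, or just coasting on inertia?",
}

def debias_prompts() -> None:
    """Print one self-check question per bias category."""
    for category, prompt in CATEGORIES.items():
        print(f"{category}: {prompt}")

debias_prompts()
```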

 

Factory-Installed Biases

Born this way.

In my critical thinking class, we investigate a couple of dozen cognitive biases — fallacies in the way our brains process information and reach decisions. These include the confirmation bias, the availability bias, the survivorship bias, and many more. I call these factory-installed biases – we’re born this way.

But we haven’t asked the question behind the biases: why are we born that way? What’s the point of thinking fallaciously? From an evolutionary perspective, why haven’t these biases been bred out of us? After all, what’s the benefit of being born with, say, the confirmation bias?

Elizabeth Kolbert has just published an interesting article in The New Yorker that helps answer some of these questions. (Click here). The article reviews three new books about how we think:

  • The Enigma of Reason by Hugo Mercier and Dan Sperber
  • The Knowledge Illusion: Why We Never Think Alone by Steven Sloman and Philip Fernbach
  • Denying To The Grave: Why We Ignore The Facts That Will Save Us by Jack Gorman and Sara Gorman

Kolbert writes that the basic idea that ties these books together is sociability as opposed to logic. Our brains didn’t evolve to be logical. They evolved to help us be more sociable. Here’s how Kolbert explains it:

“Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.”

So, the confirmation bias, for instance, doesn’t help us make good, logical decisions, but it does help us cooperate with others. If you say something that confirms what I already believe, I’ll accept your wisdom and think more highly of you. This helps us confirm our allegiance to each other and unifies our group. I know I can trust you because you see the world the same way I do.

If, on the other hand, someone in another group says something that disconfirms my belief, I know that she doesn’t agree with me. She doesn’t see the world the same way I do. I don’t see this as a logical challenge but as a social challenge. I doubt that I can work effectively with her. Rather than checking my facts, I check her off my list of trusted cooperators. An us-versus-them dynamic develops, which solidifies cooperation in my group.

Mercier and Sperber, in fact, change the name of the confirmation bias to the “myside bias”. I cooperate with my side. I don’t cooperate with people who don’t confirm my side.

Why hasn’t the confirmation/myside bias been bred out of us? Kolbert quotes Mercier and Sperber: “This is one of many cases in which the environment changed too quickly for natural selection to catch up.” All we have to do is wait 1,000 generations or so. Or maybe we can program artificial intelligence to solve the problem.

Male Chauvinist Machines

Yanks win last night?

Do men and women think differently? If they do, who should develop artificial intelligence? As we develop AI, should we target “feminine” intelligence or “masculine” intelligence? Do we have enough imagination to create a non-gendered intelligence? What would that look like?

First of all, do the genders think differently? According to Scientific American, our brains are wired differently. As you know, our brains have two hemispheres. Male brains have more connections within each hemisphere as compared to female brains. By contrast, female brains have more connections between hemispheres.

Men, on average, are better at connecting the front of the brain with the back of the brain while women are better at connecting left and right hemispheres. How do these differences influence our behavior? According to the article, “…male brains may be optimized for motor skills, and female brains may be optimized for combining analytical and intuitive thinking.”

Women and men also have different proportions of white and gray matter in their brains. (Click here). Gray matter is “…primarily associated with processing and cognition…” while white matter handles connectivity. The two genders are the same (on average) in general intelligence, so the differences in the gray/white mix suggest that there are two different ways to get to the same result. (Click here). Women seem to do better at integrating information and with language skills in general. Men seem to do better with “local processing” tasks like mathematics.

Do differences in function drive the difference in structure or vice-versa? Hard to tell. Men have a higher percentage of white matter and also have somewhat larger brains compared to women. Perhaps men need more white matter to make connections over longer distances in their larger brains. Women have smaller heads and may need less white matter to make the necessary connections — just like a smaller house would need less electrical wire to connect everything. Thus, a larger proportion of the female brain can be given over to gray matter.

So men and women think differently. That’s not such a surprise. As we look ahead to artificial intelligence, which model should we choose? Should we emphasize language skills, similar to the female brain? Or local processing skills, similar to the male brain? Should we emphasize processing power or information integration?

Perhaps we could do both, but I wonder how realistic that is. I try to imagine what it would be like to think as a woman but I find it difficult to wrap my head around the concept. As a feminist might say, I just don’t get it. I have to imagine that a woman trying to think like a man would encounter similar difficulties.

Perhaps the best way to develop AI would involve mixed teams of men and women. Each gender could contribute what it does best. But that’s not what’s happening today. As Jack Clark points out, artificial intelligence has a “sea of dudes” problem. Clark is mainly writing about data sets, which developers use to teach machines about the world. If men choose all the data sets, the resulting artificial intelligence will be biased in the same ways that men are. Yet male developers of AI outnumber female developers by a margin of about eight-to-one. Without more women, we run the risk of creating male chauvinist machines. I can just hear my women friends saying, “Oh my God, no!”

McKinsey and The Decision Villains

Just roll the dice.

In their book, Decisive, the Heath brothers write that there are four major villains of decision making.

Narrow framing – we miss alternatives and options because we frame the possibilities narrowly. We don’t see the big picture.

Confirmation bias – we collect and attend to self-serving information that reinforces what we already believe. Conversely, we tend to ignore (or never see) information that contradicts our preconceived notions.

Short-term emotion – we get wrapped up in the dynamics of the moment and make premature commitments.

Overconfidence – we think we have more control over the future than we really do.

A recent article in the McKinsey Quarterly notes that many “bad choices” in business result not just from bad luck but also from “cognitive and behavioral biases”. The authors argue that executives fall prey to their own biases and may not recognize when “debiasing” techniques need to be applied. In other words, executives (just like the rest of us) make faulty assumptions without realizing it.

Though the McKinsey researchers don’t reference the Heath brothers’ book, they focus on two of the four villains: the confirmation bias and overconfidence. They estimate that these two villains are involved in roughly 75 percent of corporate decisions.

The authors quickly summarize a few of the debiasing techniques – premortems, devil’s advocates, scenario planning, war games, etc. – and suggest that these are quite appropriate for the big decisions of the corporate world. But what about everyday, bread-and-butter decisions? For these, the authors suggest that a quick checklist approach is more appropriate.

The authors provide two checklists, one for each bias. The checklist for confirmation bias asks questions like (slightly modified here):

Have the decision-makers assembled a diverse team?

Have they discussed their proposal with someone who would certainly disagree with it?

Have they considered at least one plausible alternative?

The checklist for overconfidence includes questions like these:

What are the decision’s two most important side effects that might negatively affect its outcome? (This question is asked at three levels of abstraction: 1) inside the company; 2) inside the company’s industry; 3) in the macro-environment).

Answering these questions leads to a matrix that suggests the appropriate course of action. There are four possible outcomes (see the sketch after this list):

Decide – “the process that led to [the] decision appears to have included safeguards against both confirmation bias and overconfidence.”

Reach out – the process has been tested for downside risk but may still be based on overly narrow assumptions. To use the Heath brothers’ terminology, the decision makers should widen their options with techniques like the vanishing option test.

Stress test – the decision process probably overcomes the confirmation bias but may depend on overconfident assumptions. Decision makers need to challenge these assumptions using techniques like premortems and devil’s advocates.

Reconsider – the decision process is open to both the confirmation bias and overconfidence. Time to reboot the process.
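
Because the matrix boils down to two yes/no tests, it is easy to sketch in code. Here is a minimal Python version, assuming each checklist has been reduced to a single pass/fail result; the four outcome names come from the article, but the function and parameter names are hypothetical.

```python
def recommend(confirmation_safeguarded: bool, overconfidence_safeguarded: bool) -> str:
    """Map the two checklist results onto one of the four outcomes."""
    if confirmation_safeguarded and overconfidence_safeguarded:
        return "Decide"        # safeguards against both biases are in place
    if confirmation_safeguarded:
        return "Stress test"   # confirmation handled; challenge overconfident assumptions
    if overconfidence_safeguarded:
        return "Reach out"     # downside risk tested; widen overly narrow options
    return "Reconsider"        # open to both biases; reboot the process

# Example: the proposal survived a devil's advocate (confirmation bias checked)
# but no one ran a premortem (overconfidence unchecked).
print(recommend(True, False))  # -> Stress test
```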

The McKinsey article covers much of the same territory covered by the Heath brothers. Still, it provides a handy checklist for recognizing biases and assumptions that often go unnoticed. It helps us bring subconscious biases to conscious attention. In Daniel Kahneman’s terminology, it moves the decision from System 1 to System 2. Now let’s ask the McKinsey researchers to do the same for the two remaining villains: narrow framing and short-term emotion.
