Strategy. Innovation. Brand.

False Consensus and Preference Falsification

Bye bye!

The Soviet Union collapsed on December 26, 1991. While signs of decay had been growing, the final collapse happened with unexpected speed. The union disappeared almost overnight and surprisingly few Soviet citizens bothered to defend it. Though it had seemed stable – and persistent – even a few months earlier, it evaporated with barely a whimper.

We could (and probably will) debate for years about why the USSR disappeared, but I suspect that two cognitive biases — false consensus and preference falsification — were significant contributors. Simply put, many people lied. They said that they supported the regime when, in fact, they did not. When they looked to others to gauge opinion, those people lied about their preferences, too. It seemed that people widely supported the government. They said so, didn’t they? Since a majority seemed to agree, it was reasonable to assume that the government would endure. Best to go with the flow. But when cracks in the edifice appeared, they quickly brought down the entire structure.

Why would people lie about their preferences? Partially because they believed that a consensus existed in the broader community. In such situations, one might lie because of:

  • A desire to stay in step with majority opinion — this is essentially a sociocentric bias. We enhance our self-esteem by agreeing with the majority.
  • A desire to remain politically correct – this may be fear-induced, especially in authoritarian regimes.
  • Lack of information – when information is scarce, we may assume that the majority (as we perceive it) is probably right. We should go along.

False consensus and preference falsification can lead to illogical outcomes such as the Abilene paradox. Nobody wanted to go to Abilene but each person thought that everybody else wanted to go to Abilene … so they all went. A false consensus existed and everybody played along.

We can also see this happening with the risky shift. Groups tend to make riskier decisions than individuals. Why? Oftentimes, it’s because of a false consensus. Each member of the group assumes that other members of the group favor the riskier strategy. Nobody wants to be seen as a wimp, so each member agrees. The decision is settled – everybody wants to do it. This is especially problematic in cultures that emphasize teamwork.

Reinhold Niebuhr may have originated this stream of thought in his book Moral Man and Immoral Society, originally published in 1932. Niebuhr argued that individual morality and social morality were incompatible. We make individual decisions based on our moral understanding. We make collective decisions based on our understanding of what society wants, needs, and demands. More succinctly: “Reason is not the sole basis of moral virtue in man. His social impulses are more deeply rooted than his rational life.”

In 1997, the economist Timur Kuran updated this thinking with his book Private Truths, Public Lies. While Niebuhr focused on the origins of such behavior, Kuran focused more attention on the outcomes. He notes that preference falsification helps preserve “widely disliked structures” and confers an “aura of stability on structures vulnerable to sudden collapse.” Further, “When the support of a policy, tradition, or regime is largely contrived, a minor event may activate a bandwagon that generates massive yet unanticipated change.”
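
Kuran’s bandwagon point can be made concrete with a toy threshold model. Here’s a minimal sketch in Python – my own illustration, not Kuran’s formal model – in which each citizen privately dislikes the regime but voices dissent only after a certain share of others already has. The thresholds and the run_cascade helper are hypothetical, chosen only to show how a single low-threshold dissenter (the “minor event”) separates total collapse from apparent stability.

    # A toy "bandwagon" model of preference falsification (illustrative only).
    # Each citizen privately opposes the regime but voices that opposition only
    # after a certain share of the population is already dissenting openly.

    def run_cascade(thresholds):
        """Return the share of citizens who end up voicing dissent."""
        n = len(thresholds)
        dissenting = 0.0  # share of the population openly dissenting
        while True:
            # Everyone whose threshold is at or below the current level speaks up.
            new_level = sum(t <= dissenting for t in thresholds) / n
            if new_level == dissenting:
                return dissenting
            dissenting = new_level

    # Thresholds form an unbroken chain: 0.00, 0.01, 0.02, ... 0.99.
    # The lone citizen with a zero threshold -- the "minor event" -- tips everyone.
    chain = [i / 100 for i in range(100)]
    print(run_cascade(chain))      # 1.0: sudden, total collapse

    # Remove that single spark and the bandwagon never starts.
    no_spark = [i / 100 for i in range(1, 100)]
    print(run_cascade(no_spark))   # 0.0: the regime looks perfectly stable

In both populations the private preferences are nearly identical; what differs is one person’s willingness to go first. That’s all it takes for a “stable” structure to evaporate – or to look stable indefinitely.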

How can we mitigate the effects of such falsification? Like other cognitive biases, I doubt that we can eliminate the bias itself. As Lady Gaga sings, we were born this way. The best we can do is to be aware of the bias and question our decisions, especially when our individual (private) preferences differ from the (perceived) preferences of the group. When someone says, “Let’s go to Abilene,” we can ask, “Really? Does anybody really want to go to Abilene?” We might be surprised at the answer.

Trumping The Press

I need my skeptical spectacles.

Is Donald Trump vilifying the press or playing the press?

Take a recent example: someone leaked a draft memo to the Washington Post suggesting that the government would activate 100,000 National Guard troops to arrest illegal aliens. The Post printed the story and the reactions from both sides were predictable. The left was outraged that the government might do such a thing. The right pitched a hissy fit over leaks.

But here’s another way to interpret the story. The Trump administration wants to rid the country of approximately 11 million illegal aliens. Deporting them all would be a difficult, expensive, and lengthy task. So why not scare at least some of them into leaving on their own? The National Guard story – though false – undoubtedly started rumors in immigrant neighborhoods that the Feds were about to launch massive sweeps. Better to depart sooner rather than later.

Seen in this light, the Trump administration wins in two ways. First, the story sows fear in immigrant communities and may lead to “self-deportations”. Second, the administration continues to build the narrative that the media promotes fake news and is the enemy of the people.

Another tactic to control the conversation is what academics call an availability cascade. We humans estimate how risky something is based on the information that’s available to us. An availability cascade makes a flood of information – about one and only one topic – readily available to us.

The Ebola scare of 2014 provides a good example. Somebody gets sick with a dread disease. The press writes vivid stories about the illness and makes grim images easily available to us. It’s top of mind. Then people push the government to “do something” about the menace. The press writes about that. Then the government actually does something. The press writes about that. Then people protest what the government has done. The press writes about that. Soon, the entire world seems to be chattering about Ebola. If everybody’s talking about it, it must be dangerous.

The Trump administration creates an availability cascade when it lures the press into writing more about Islamic terrorism. The administration has accused the press of underreporting terrorist incidents. In response, the press has written numerous articles pointing out just how many stories they’ve written on terrorist incidents. The net effect? Terrorism is in the headlines every day. Everybody is talking about it. It must be dangerous.

Even fake news can help keep availability cascades in the headlines. The administration makes a far-fetched claim and the press naturally wants to set the record straight. By doing so, the press adds fuel to the availability fire. The story lingers on. As long as the press plays along, the administration will keep creating alternative facts. Think of it as the media equivalent of rope-a-dope.

Trump’s obsession with himself creates another availability cascade. Trump regularly talks about himself and his accomplishments – how smart he is, how many electoral votes he won, and so on. He often repeats himself; the news is no longer new. Yet the press keeps writing about it. Apparently, they want to show how self-obsessed he is. But the practical effect is that Trump dominates the headlines every day. If everybody is chattering about him, he must be very powerful.

Bernard Cohen wrote that, “The press may not be successful … in telling people what to think, but it is stunningly successful in telling them what to think about.” The Trump administration is using the press to frame the discussion and tell us what to think about. Perhaps it’s time for the press to change the subject.

Heard of Self-Herding?

Should’ve gotten Grover’s Grind.

How many times do you need to make the same decision?

Let’s say that, on your drive to work, there are two drive-through coffee shops: Grover’s Grind and The Freckled Beauty. You try each and decide that you prefer the mocha delight from The Freckled Beauty. Why would you ever make that same decision again? It’s more efficient to make the decision once and repeat the behavior as often as needed.

Let’s change the context. You’re walking down a busy street in a big city when you see a cluster of, say, six people. They’re all looking upward and pointing to a tall building. Chances are that you’ll slow down and look up as well. The cluster of people has “herded” you into behaving the same way they behave.

Herding affects us in many ways. Teenagers wear essentially the same clothing because they want to be part of the same herd. College professors dress like college professors. Similarly, if we’re surrounded by liberals, we tend to lean liberal. If surrounded by conservatives, we tend to lean conservative. We sort ourselves into different herds based on appearances, clothing, lifestyles, political position, religion and so on.

Herding is essentially a cognitive bias. Instead of thinking through a decision and using logic to reach an advantageous conclusion, we use a shortcut (also known as a heuristic). We let the herd think for us. If it’s good enough for them, it’s probably good enough for me.

Like most cognitive biases, herding leads us to good conclusions much of the time … but not always. When it goes wrong, it does so in predictable ways. As Dan Ariely says in the title of his book, we’re Predictably Irrational.

If we think about it, it’s easy to recognize herding. With a little forethought, we can defend ourselves against groupthink. But what about self-herding, a notion that Ariely developed? Can you easily recognize it? Can you defend yourself against it?

Self-herding has to do with difficult questions. Daniel Kahneman pointed out that, when we’re asked a hard question, we often substitute an easy question and answer that instead. Here’s a hard question, “How likely is it that you’ll be shot in your neighborhood?” We don’t know the answer, so we substitute an easier question: “How many neighborhood shooting incidents can I recall from memory?” If we can remember many such incidents, then we assume that a recurrence is highly probable. This is known as the availability bias – we assume that things that are easily available to memory are likely to happen again.

Self-herding is a variant of the availability bias. As Ariely points out, it’s not easy to answer a question like, “What’s the best place to eat in your neighborhood?” So we substitute an easier question, “Where have I eaten before that I really liked?” Ariely notes that, “We can consult our preferences or we can consult our memory. It turns out it’s often easier to consult our memory.”

When you continue to choose The Freckled Beauty over Grover’s Grind, you’re herding yourself. It was the right decision at one time and you assume that it continues to be the right decision. It’s an efficient way to think. It’s also easy – you use your memory rather than your thinking muscles.

But, as we all know, things change. In fact, the speed of change seems to be accelerating. If the conditions that led to our initial decision change, then the decision is no longer valid. We can miss important opportunities and make serious mistakes. Every now and then, we need to un-herd ourselves.

Seldom Right. Never In Doubt.

I’m never wrong. About anything.

Since I began teaching critical thinking four years ago, I’ve bought a lot of books on the subject. The other day, I wondered how many of those books I’ve actually read.

So, I made two piles on the floor. In one pile, I stacked all the books that I have read (some more than once). In the other pile, I stacked the books that I haven’t read.

Guess what? The unread stack is about twice as high as the other stack. In other words, I’ve read about a third of the books I’ve acquired on critical thinking and have yet to read about two-thirds.

What can I conclude from this? My first thought: I need to take a vacation and do a lot of reading. My second thought: Maybe I shouldn’t mention this to the Dean.

I also wondered, how much do I not know? Do I really know only a third of what there is to know about the topic? Maybe I know more since there’s bound to be some repetition in the books. Or maybe I know less since my modest collection may not cover the entire topic. Hmmm…

The point is that I’m thinking about what I don’t know rather than what I do know. That instills in me a certain amount of doubt. When I make assertions about critical thinking, I add cautionary words like perhaps or maybe or the evidence suggests. I leave myself room to change my position as new knowledge emerges (or as I acquire knowledge that’s new to me).

I suspect that the world might be better off if we all spent more time thinking about what we don’t know. And it’s not just me. The Dunning-Kruger effect states essentially the same thing.

David Dunning and Justin Kruger, both at Cornell, study cognitive biases. In their studies, they documented a bias that we now call illusory superiority. Simply put, we overestimate our own abilities and skills compared to others. More specifically, the less we know about a given topic, the more likely we are to overestimate our abilities. In other words, the less we know, the more confident we are in our opinions. As David Dunning succinctly puts it, “…incompetent people do not recognize—scratch that, cannot recognize—just how incompetent they are.”

The opposite seems to be true as well. Highly competent people tend to underestimate their competence relative to others. The thinking goes like this: If it’s easy for me, it must be easy for others as well. I’m not so special.

I’ve found that I can use the Dunning-Kruger effect as a rough-and-ready test of credibility. If a source provides a small amount of information with a high degree of confidence, then their credibility declines in my estimation. On the other hand, if the source provides a lot of information with some degree of doubt, then their credibility rises. It’s the difference between recognizing a wise person and a fool.

Perhaps we can use the same concept to greater effect in our teaching. When we learn about a topic, we implicitly learn about what we don’t know. Maybe we should make it more explicit. Maybe we should count the books we’ve read and the books we haven’t read to make it very clear just how much we don’t know. If we were less certain of our opinions, we would be more open to other people and intriguing new ideas. That can’t be a bad thing.
