
Critical Thinking — Ripped From The Headlines!

Just because we see a path doesn’t mean we should follow it.

 

Daniel Kahneman, the psychologist who won the Nobel Prize in economics, coined the phrase “what you see is all there is” to describe our habit of judging from whatever evidence happens to be in front of us. In other words, what we see is rarely all there is. I thought about Kahneman when I saw the videos and coverage of the teenagers wearing MAGA hats surrounding, and apparently mocking, a Native American activist who was singing a tribal song during a march in Washington, D.C.

The media coverage essentially came in two waves. The first wave concluded that the teenagers were mocking, harassing, and threatening the activist. Here are some headlines from the first wave:

 

ABC News: “Viral video of Catholic school teens in ‘MAGA’ caps taunting Native Americans draws widespread condemnation; prompts a school investigation.”

Time Magazine: “Kentucky Teens Wearing ‘MAGA’ Hats Taunt Indigenous Peoples March Participants In Viral Video.”

Evening Standard (UK): “Outrage as teens in MAGA hats ‘mock’ Native American Vietnam War veteran.”

The second media wave provided a more nuanced view. Here are some more recent headlines:

New York Times: “Fuller Picture Emerges of Viral Video of Native American Man and Catholic Students.”

The Guardian (UK): “New video sheds more light on students’ confrontation with Native American.”

The Stranger: “I Thought the MAGA Boys Were S**t-Eating Monsters. Then I Watched the Full Video.”

So, who is right and who is wrong? I’m not sure that we can draw any certain conclusions. I certainly have opinions, but they’re all based on very short video clips taken out of context.

What lessons can we draw from this? Here are a few:

  • Reality is complicated and — even in the best circumstances — we only see a piece of it.
  • We see what we expect to see. Tell me how you voted, and I can guess what you saw.
  • It’s very hard to draw firm conclusions from a brief slice-of-time source such as a photograph or a video clip. The Atlantic has an in-depth story about how the coverage evolved. One key sentence: “Photos by definition capture instants of time, and remove them from the surrounding flow.”
  • There’s an old saying that “Journalism is the first draft of history”. Photojournalism is probably the first draft of the first draft. It’s often useful to wait to see how the story evolves. Slowing down a decision process usually results in a better decision.
  • It’s hard to read motives from a picture.
  • Remember that what we see is not all there is. As the Heath brothers write in their book, Decisive, move your spotlight around to avoid narrow framing.
  • Humans don’t like to be uncertain. We like to connect the dots and explain things even when we don’t have all the facts. But, sometimes, uncertainty is the best we can hope for. When you’re uncertain, remember the lessons of Appreciative Inquiry and don’t default to the negative.

 

Clembama Redux

Go Clembama!

Alabama and Clemson have met in the college football playoffs in each of the past four years. Alabama has won two games; Clemson has won two. The aggregate score of the four games: Clemson 121, Alabama 120. If Alabama hadn’t missed an extra point in last night’s game, the aggregate score would be tied. The two teams are so close that they might as well be one. Let’s call them Clembama.

Meanwhile, no other team has come close. The great teams of years past – Notre Dame, Oklahoma, Georgia, Southern Cal, Nebraska, and Texas – have all fallen by the wayside. When they match up against Clemson or Alabama, they don’t lose by inches. They lose by yards.

What’s it all mean? Simply that skill is unevenly distributed in college football. As Michael Mauboussin points out, when skill is evenly distributed, luck plays a greater role in the outcome of any competitive event, including sports and business competition. When skill is unevenly distributed, luck’s role is greatly diminished.

It seems counterintuitive that luck should be more important in some situations than in others. Isn’t luck more or less random? Shouldn’t it apply equally in all situations? It’s true that luck is essentially random, but when everything else is even, even a little bit of luck can make a huge difference. A funny bounce, an odd hop, or a slippery field can determine who wins and who loses.

To see the difference, just look at the NFL, where skill is more evenly distributed. More specifically, look at Sunday’s game between the Chicago Bears and the Philadelphia Eagles. The Eagles were ahead by one point when the Bears maneuvered into position to kick a field goal near the end of the game. Make the field goal and the Bears win. Miss it and the Eagles win. The Bears kicked, the ball hit an upright, bounced downward, hit the crossbar, and then bounced back into the field of play. A bouncing football is a pretty random thing. If the ball had bounced off the crossbar and through, the Bears would have won. As it was, the Eagles won. In truth, luck – not skill – determined the outcome.

If Oklahoma, say, had made the same kick the last time they played Alabama, it would not have made a whit of difference. The game wasn’t close. The skill levels weren’t close. Luck didn’t matter.

Mauboussin’s paradox of skill states: “In activities that involve some luck, the improvement of skill makes luck more important…” The paradox makes me feel somewhat humble. My business career was in the highly competitive computing industry, where high skill is widespread. As I look back on both my successes and my failures, I wonder how many were caused by skill (or lack of it) and how many were caused by luck. When I won, maybe it was because I was more skilled. Or maybe I just got lucky.
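Mauboussin’s point lends itself to a quick simulation. Below is a minimal sketch in Python; the skill ratings and the size of the luck term are invented for illustration. Each game’s score is modeled as skill plus a random luck term: when two teams are nearly equal in skill, luck decides the game; when the gap is wide, it barely matters.

```python
import random

def win_probability(skill_a, skill_b, luck_sd=7.0, trials=100_000):
    """Estimate how often team A beats team B when each game's
    score is the team's skill plus a random 'luck' term."""
    wins = 0
    for _ in range(trials):
        score_a = skill_a + random.gauss(0, luck_sd)
        score_b = skill_b + random.gauss(0, luck_sd)
        wins += score_a > score_b
    return wins / trials

# Evenly matched teams (think Clemson vs. Alabama): nearly a coin flip.
print(win_probability(90, 89))   # roughly 0.54

# A wide skill gap (think Alabama vs. an also-ran): luck barely matters.
print(win_probability(90, 75))   # roughly 0.93
```

Shrink the luck term (say, luck_sd=1.0) and the same one-point skill edge wins about three-quarters of the time. That is the paradox in miniature: luck matters most precisely when skills converge.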

I first wrote about Clembama two years ago; that article includes several links to Michael Mauboussin’s work.

False Consensus and Preference Falsification

Bye bye!

The Soviet Union collapsed on December 26, 1991. While signs of decay had been growing, the final collapse happened with unexpected speed. The union disappeared almost overnight and surprisingly few Soviet citizens bothered to defend it. Though it had seemed stable – and persistent – even a few months earlier, it evaporated with barely a whimper.

We could (and probably will) debate for years why the USSR disappeared, but I suspect that two cognitive biases — false consensus and preference falsification — were significant contributors. Simply put, many people lied. They said that they supported the regime when, in fact, they did not. When they looked to others for their opinions, those people also lied about their preferences. It seemed that people widely supported the government. They said so, didn’t they? Since a majority seemed to agree, it was reasonable to assume that the government would endure. Best to go with the flow. But when cracks in the edifice appeared, they quickly brought down the entire structure.

Why would people lie about their preferences? Partially because they believed that a consensus existed in the broader community. In such situations, one might lie because of:

  • A desire to stay in step with majority opinion — this is essentially a sociocentric bias. We enhance our self-esteem by agreeing with the majority.
  • A desire to remain politically correct – this may be fear-induced, especially in authoritarian regimes.
  • Lack of information – when information is scarce, we may assume that the majority (as we perceive it) is probably right. We should go along.

False consensus and preference falsification can lead to illogical outcomes such as the Abilene paradox. Nobody wanted to go to Abilene but each person thought that everybody else wanted to go to Abilene … so they all went. A false consensus existed and everybody played along.

We can also see this happening with the risky shift. Groups tend to make riskier decisions than individuals. Why? Oftentimes, it’s because of a false consensus. Each member of the group assumes that other members of the group favor the riskier strategy. Nobody wants to be seen as a wimp, so each member agrees. The decision is settled – everybody wants to do it. This is especially problematic in cultures that emphasize teamwork.

Reinhold Niebuhr may have originated this stream of thought in his book Moral Man and Immoral Society, originally published in 1932. Niebuhr argued that individual morality and social morality were incompatible. We make individual decisions based on our moral understanding. We make collective decisions based on our understanding of what society wants, needs, and demands. More succinctly: “Reason is not the sole basis of moral virtue in man. His social impulses are more deeply rooted than his rational life.”

In 1997, the economist Timur Kuran updated this thinking with his book Private Truths, Public Lies. While Niebuhr focused on the origins of such behavior, Kuran focused more attention on the outcomes. He notes that preference falsification helps preserve “widely disliked structures” and confers an “aura of stability on structures vulnerable to sudden collapse.” Further, “When the support of a policy, tradition, or regime is largely contrived, a minor event may activate a bandwagon that generates massive yet unanticipated change.”
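Kuran’s bandwagon can be captured in a toy threshold model of the kind Mark Granovetter described: each citizen voices dissent only after enough others visibly have. Here is a minimal sketch in Python, with thresholds invented for illustration; the presence or absence of a single early dissenter flips the society between apparent stability and sudden collapse.

```python
def cascade(thresholds):
    """Each person dissents publicly once the number of visible
    dissenters meets their private threshold. Returns the final count."""
    dissenters = 0
    while True:
        new_count = sum(1 for t in thresholds if t <= dissenters)
        if new_count == dissenters:
            return dissenters
        dissenters = new_count

# Ten citizens; citizen i speaks out once i others already have.
print(cascade(list(range(10))))      # 10 -- one brave soul tips everyone

# Remove the lone zero-threshold dissenter and nothing ever starts.
print(cascade(list(range(1, 10))))   # 0 -- the regime looks perfectly stable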

How can we mitigate the effects of such falsification? As with other cognitive biases, I doubt that we can eliminate the bias itself. As Lady Gaga sings, we were born this way. The best we can do is to be aware of the bias and question our decisions, especially when our individual (private) preferences differ from the (perceived) preferences of the group. When someone says, “Let’s go to Abilene,” we can ask, “Really? Does anybody really want to go to Abilene?” We might be surprised at the answer.

 

 

Using A Razor To Whittle Your Choices

Different razors for different needs.

Let’s say that you’re trying to decide what causes what and you keep getting stuck between multiple alternatives. (This process is known as Inference to the Best Explanation.) You’ve put away your cognitive biases and built arguments that are sound and valid. Your friends give you a lot of advice. But you just can’t decide.

Logical razors can help you out of the jam. A razor helps you eliminate choices, which is often as important as creating options in the first place. A razor says, “It’s not likely to be this one ….” By eliminating options, you make your decision easier. You’re more likely to find the best explanation.

Razors don’t use airtight logic so they’re not foolproof. They could conceivably point you away from a solution that actually works. In general, however, they give you a process for working through ideas and eliminating the least probable ones. Here are my favorite razors.

Occam’s razor – among competing explanations, the one that makes the fewest assumptions is most likely to be right. This was the original razor and comes from William of Ockham, a Franciscan friar who lived in the late 13th and early 14th centuries.

Hitchens’s razor – that which can be asserted without evidence can also be dismissed without evidence. We should, of course, ask for evidence to back up a hypothesis. If no evidence exists, we can safely dismiss the hypothesis.

Hanlon’s razor – never attribute to malice that which can be adequately explained by stupidity. If someone injures you, don’t assume that they did so with malicious intent. It’s more likely that they’re just stupid.

Hume’s razor – if a presumed cause is not sufficient to create an observed effect, we must either eliminate the cause from consideration or show what needs to be added to create the effect.

Adler’s razor – a question that can’t be settled by experimentation is not worth debating.

Logical razors help you scrape away explanations that are possible but not probable. They can help you think more clearly. But they don’t always lead you to a conclusion. At some point, you may have to make Pascal’s Wager.

That’s Irrelevant!

In her book, Critical Thinking: An Appeal To Reason, Peg Tittle has an interesting and useful way of organizing 15 logical fallacies. Simply put, they’re all irrelevant to the assessment of whether an argument is true or not. Using Tittle’s guidelines, we can quickly sort out what we need to pay attention to and what we can safely ignore.

Though these fallacies are irrelevant to truth, they are very relevant to persuasion. Critical thinking is about discovering the truth; it’s about the present and the past. Persuasion is about the future, where truth has yet to be established. Critical thinking helps us decide what we can be certain of. Persuasion helps us make good choices when we’re uncertain. Critical thinking is about truth; persuasion is about choice. What’s poison to one is often catnip to the other.

With that thought in mind, let’s take a look at Tittle’s 15 irrelevant fallacies. If someone tosses one of these at you in a debate, your response is simple: “That’s irrelevant.”

  1. Ad hominem: character – the person who created the argument is evil; therefore the argument is unacceptable. (Or the reverse.)
  2. Ad hominem: tu quoque – the person making the argument doesn’t practice what she preaches. You say it’s wrong to hunt, but you eat meat.
  3. Ad hominem: poisoning the well – the person making the argument has something to gain. He’s not disinterested. Before you listen to my opponent, I would like to remind you that he stands to profit handsomely if you approve this proposal.
  4. Genetic fallacy – considering the origin, not the argument. The idea came to you in a dream. That’s not acceptable.
  5. Inappropriate standard: authority – an authority may claim to be an expert, but experts are biased in predictable ways.
  6. Inappropriate standard: tradition – we accept something because we have traditionally accepted it. But traditions are often fabricated.
  7. Inappropriate standard: common practice – just because everybody’s doing it doesn’t make it true. Appeals to inertia and the status quo. Example: most people peel bananas from the “wrong” end.
  8. Inappropriate standard: moderation/extreme – claims that an argument is true (or false) because it is too extreme (or too moderate). AKA the Fallacy of the Golden Mean.
  9. Appeal to popularity: bandwagon effect – just because an idea is popular doesn’t make it true. Often found in ads.
  10. Appeal to popularity: elites – only a few can make the grade or be admitted. Many are called; few are chosen.
  11. Two wrongs – our competitors have cheated; therefore, it’s acceptable for us to cheat.
  12. Going off topic: straw man or paper tiger – misrepresenting the other side. Responding to an argument that is not the argument presented.
  13. Going off topic: red herring – a false scent that leads the argument astray. Person 1: We shouldn’t do that. Person 2: If we don’t do it, someone else will.
  14. Going off topic: non sequitur – making a statement that doesn’t follow logically from its antecedents. Person 1: We should feed the poor. Person 2: You’re a communist.
  15. Appeals to emotion – using emotions rather than logic to assess truth. You say we should kill millions of chickens to stop the avian flu. That’s disgusting.

Chances are that you’ve used some of these fallacies in a debate or argument. Indeed, you may have convinced someone to choose X rather than Y using them. Though these fallacies may be persuasive, it’s useful to remember that they have nothing to do with truth.
