Daniel Kahneman, the psychologist who won the Nobel prize in economics, warns that we judge as if "what you see is all there is," even when the evidence in front of us is incomplete. I thought about Kahneman when I saw the videos and coverage of the teenagers wearing MAGA hats surrounding, and apparently mocking, a Native American activist who was singing a tribal song during a march in Washington, D.C.
The media coverage essentially came in two waves. The first wave concluded that the teenagers were mocking, harassing, and threatening the activist. Here are some headlines from the first wave:
ABC News: “Viral video of Catholic school teens in ‘MAGA’ caps taunting Native Americans draws widespread condemnation; prompts a school investigation.”
Time Magazine: “Kentucky Teens Wearing ‘MAGA’ Hats Taunt Indigenous Peoples March Participants In Viral Video.”
Evening Standard (UK): “Outrage as teens in MAGA hats ‘mock’ Native American Vietnam War veteran.”
The second media wave provided a more nuanced view. Here are some more recent headlines:
New York Times: “Fuller Picture Emerges of Viral Video of Native American Man and Catholic Students.”
The Guardian (UK): “New video sheds more light on students’ confrontation with Native American.”
The Stranger: “I Thought the MAGA Boys Were S**t-Eating Monsters. Then I Watched the Full Video.”
So, who is right and who is wrong? I'm not sure we can draw any certain conclusions. I certainly have opinions, but they are all based on very short video clips taken out of context.
What lessons can we draw from this? Here are a few:
Alabama and Clemson have met in the College Football Playoff in each of the past four years. Alabama has won two games; Clemson has won two. The aggregate score of the four games: Clemson 121, Alabama 120. If Alabama hadn't missed an extra point in last night's game, the aggregate score would be tied. The two teams are so close that they might as well be one. Let's call them Clembama.
Meanwhile, no other team has come close. The great teams of years past – Notre Dame, Oklahoma, Georgia, Southern Cal, Nebraska, and Texas – have all fallen by the wayside. When they match up against Clemson or Alabama, they don’t lose by inches. They lose by yards.
What’s it all mean? Simply that skill is unevenly distributed in college football. As Michael Mauboussin points out, when skill is evenly distributed, luck plays a greater role in the outcome of any competitive event, including sports and business competition. When skill is unevenly distributed, luck’s role is greatly diminished.
It seems counter-intuitive that luck should be more important in some situations than in others. Isn’t luck more or less random? Shouldn’t it apply equally in all situations? It’s true that luck is essentially random but when everything else is even, even a little bit of luck can make a huge difference. A funny bounce, an odd hop, a slippery field can determine who wins and who loses.
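The intuition can be made concrete with a toy Monte Carlo sketch. The numbers below are invented for illustration, not real football data: each side's performance is modeled as a fixed skill level plus a random "luck" term, and we count how often the more-skilled side actually wins.

```python
import random

def win_prob(skill_gap, luck_sd, trials=100_000, seed=42):
    """Estimate how often the more-skilled side wins when each
    game outcome is skill plus a random luck term."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        a = skill_gap + rng.gauss(0, luck_sd)  # the more-skilled side
        b = rng.gauss(0, luck_sd)              # the less-skilled side
        if a > b:
            wins += 1
    return wins / trials

# Evenly matched sides: a tiny skill edge barely beats a coin flip,
# so the random luck term decides many games.
print(win_prob(skill_gap=0.1, luck_sd=1.0))

# A wide skill gap: the better side almost always wins,
# and the same amount of luck hardly matters.
print(win_prob(skill_gap=3.0, luck_sd=1.0))
```

The luck term is identical in both runs; only the skill gap changes. That is the point of the argument: luck isn't "bigger" in the NFL than in Clembama's games, it just has more room to tip the result when skill is nearly even.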
To see the difference, just look at the NFL, where skill is more evenly distributed. More specifically, look at Sunday’s game between the Chicago Bears and the Philadelphia Eagles. The Eagles were ahead by one point when the Bears maneuvered into position to kick a field goal near the end of the game. Make the field goal and the Bears win. Miss it and the Eagles win. The Bears kicked, the ball hit an upright, bounced downward, hit the crossbar, and then bounced back into the field of play. A bouncing football is a pretty random thing. If the ball had bounced off the crossbar and through, the Bears would have won. As it was, the Eagles won. In truth, luck – not skill – determined the outcome.
If Oklahoma, say, had made the same kick the last time they played Alabama, it would not have made a whit of difference. The game wasn’t close. The skill levels weren’t close. Luck didn’t matter.
Mauboussin’s paradox of skill states that: “In activities that involve some luck, the improvement of skill makes luck more important…” The paradox makes me feel somewhat humble. My business career was in the highly competitive computing industry, where skill is evenly distributed across many capable competitors. As I look back on both my successes and my failures, I wonder how many were caused by skill (or lack of it) and how many were caused by luck. When I won, maybe it was because I was more skilled. Or maybe I just got lucky.
I first wrote about Clembama two years ago. Click here to find that article, which includes several links to Michael Mauboussin’s work.
The Soviet Union collapsed on December 26, 1991. While signs of decay had been growing, the final collapse happened with unexpected speed. The union disappeared almost overnight and surprisingly few Soviet citizens bothered to defend it. Though it had seemed stable – and persistent – even a few months earlier, it evaporated with barely a whimper.
We could (and probably will) debate for years why the USSR disappeared. I suspect that two cognitive biases — false consensus and preference falsification — were significant contributors. Simply put, many people lied. They said that they supported the regime when, in fact, they did not. When they looked to others for their opinions, those people also lied about their preferences. It seemed that people widely supported the government. They said so, didn’t they? Since a majority seemed to agree, it was reasonable to assume that the government would endure. Best to go with the flow. But when cracks in the edifice appeared, they quickly brought down the entire structure.
Why would people lie about their preferences? Partially because they believed that a consensus existed in the broader community. In such situations, one might lie because of:
False consensus and preference falsification can lead to illogical outcomes such as the Abilene paradox. Nobody wanted to go to Abilene but each person thought that everybody else wanted to go to Abilene … so they all went. A false consensus existed and everybody played along.
We can also see this happening with the risky shift. Groups tend to make riskier decisions than individuals. Why? Oftentimes, it’s because of a false consensus. Each member of the group assumes that other members of the group favor the riskier strategy. Nobody wants to be seen as a wimp, so each member agrees. The decision is settled – everybody wants to do it. This is especially problematic in cultures that emphasize teamwork.
Reinhold Niebuhr may have originated this stream of thought in his book Moral Man and Immoral Society, originally published in 1932. Niebuhr argued that individual morality and social morality were incompatible. We make individual decisions based on our moral understanding. We make collective decisions based on our understanding of what society wants, needs, and demands. More succinctly, “Reason is not the sole basis of moral virtue in man. His social impulses are more deeply rooted than his rational life.”
In 1995, the economist Timur Kuran updated this thinking with his book Private Truths, Public Lies. While Niebuhr focused on the origins of such behavior, Kuran focused more attention on the outcomes. He notes that preference falsification helps preserve “widely disliked structures” and confers an “aura of stability on structures vulnerable to sudden collapse.” Further, “When the support of a policy, tradition, or regime is largely contrived, a minor event may activate a bandwagon that generates massive yet unanticipated change.”
How can we mitigate the effects of such falsification? Like other cognitive biases, I doubt that we can eliminate the bias itself. As Lady Gaga sings, we were born this way. The best we can do is to be aware of the bias and question our decisions, especially when our individual (private) preferences differ from the (perceived) preferences of the group. When someone says, “Let’s go to Abilene” we can ask, “Really? Does anybody really want to go to Abilene?” We might be surprised at the answer.
Let’s say that you’re trying to decide what causes what and keep getting stuck between multiple alternatives. (This is a process known as Inference to the Best Explanation). You’ve put away your cognitive biases and built arguments that are sound and valid. Your friends give you a lot of advice. But you just can’t decide.
Logical razors can help you out of the jam. A razor helps you eliminate choices, which is often as important as creating options in the first place. A razor says, “It’s not likely to be this one ….” By eliminating options, you make your decision easier. You’re more likely to find the best explanation.
Razors don’t use airtight logic so they’re not foolproof. They could conceivably point you away from a solution that actually works. In general, however, they give you a process for working through ideas and eliminating the least probable ones. Here are my favorite razors.
Occam’s razor – among competing explanations, the one that makes the fewest assumptions is most likely to be right. This was the original razor and comes from William of Ockham, a Franciscan friar who lived in the late 13th and early 14th centuries.
Hitchens’s razor – that which can be asserted without evidence can also be dismissed without evidence. We should, of course, ask for evidence to back up a hypothesis. If no evidence exists, we can safely dismiss it.
Hanlon’s razor – never attribute to malice that which can be adequately explained by stupidity. If someone injures you, don’t assume that they did so with malicious intent. It’s more likely that they’re just stupid.
Hume’s razor – if a presumed cause is not sufficient to create an observed effect, we must either eliminate the cause from consideration or show what needs to be added to create the effect.
Alder’s razor (also known as Newton’s flaming laser sword) – a question that can’t be settled by experimentation is not worth debating.
Logical razors help you scrape away explanations that are possible but not probable. They can help you think more clearly. But they don’t always lead you to a conclusion. At some point, you may have to make Pascal’s Wager.
In her book, Critical Thinking: An Appeal To Reason, Peg Tittle has an interesting and useful way of organizing 15 logical fallacies. Simply put, they’re all irrelevant to the assessment of whether an argument is true or not. Using Tittle’s guidelines, we can quickly sort out what we need to pay attention to and what we can safely ignore.
Though these fallacies are irrelevant to truth, they are very relevant to persuasion. Critical thinking is about discovering the truth; it’s about the present and the past. Persuasion is about the future, where truth has yet to be established. Critical thinking helps us decide what we can be certain of. Persuasion helps us make good choices when we’re uncertain. Critical thinking is about truth; persuasion is about choice. What’s poison to one is often catnip to the other.
With that thought in mind, let’s take a look at Tittle’s 15 irrelevant fallacies. If someone tosses one of these at you in a debate, your response is simple: “That’s irrelevant.”
Chances are that you’ve used some of these fallacies in a debate or argument. Indeed, you may have convinced someone to choose X rather than Y using them. Though these fallacies may be persuasive, it’s useful to remember that they have nothing to do with truth.