Red people and blue people are at it again. Neither side seems to accept that the other side consists of real people with real ideas that are worth listening to. Debate is out. Contempt is in.
As a result, our nation is highly polarized. To work our way out of the current stalemate, we need to listen closely and speak wisely. We need to debate effectively rather than argue angrily. Here are some tips:
It’s not about winning, it’s about winning over – too often we talk about winning an argument. But defeating an opponent is not the same as winning him over to your side. Aim for agreement, not a crushing blow.
It’s not about values – our values are deeply held. We don’t change them easily. You’re not going to convert a red person into a blue person or vice-versa. Aim to change their minds, not their values.
Stick to the future tense – the only reason to argue in the past tense is to assign blame. That’s useful in a court of law but not in the court of public opinion. Stick to the future tense, where you can present choices and options. That’s where you can change minds. (Tip: don’t ever argue with a loved one in the past tense. Even if you win, you lose.)
The best way to disagree is to begin by agreeing – the other side wants to know that you take them seriously. If you immediately dismiss everything they say, you’ll never persuade them. Start by finding points of agreement. Even if you’re at opposite ends of the spectrum, you can find something to agree to.
Don’t fall for the anger mongers – both red and blue commentators prey on our pride to sell anger. They say things like, “The other side hates you. They think you’re dumb. They think they’re superior to you.” The technique is known as attributed belittlement and it’s the oldest trick in the book. Don’t fall for it.
Don’t fall into the hypocrisy trap – both red and blue analysts are willing to spin for their own advantage. Don’t assume that one side is hypocritical while the other side is innocent.
Beware of demonizing words – it’s easy to use positive words for one side and demonizing words for the other side. For example: “We’re proud. They’re arrogant.” “We’re smart. They’re sneaky.” It’s another old trick. Don’t fall for it.
Show some respect – just because people disagree with you is no reason to treat them with contempt. They have their reasons. Show some respect even if you disagree.
Be skeptical – the problems we’re facing as a nation are exceptionally complex. Anyone who claims to have a simple solution is lying.
Burst your bubble – open yourself up to sources you disagree with. Talk with people on the other side. We all live in reality bubbles. Time to break out.
Give up TV – talking heads, both red and blue, want to tell you what to think. Reading your own sources can help you learn how to think.
Aim for the persuadable – you’ll never convince some people. Don’t waste your breath. Talk with open-minded people who describe themselves as moderates. How can you tell they’re open-minded? They show respect, don’t belittle, agree before disagreeing, and are skeptical of both sides.
Engage in arguments – find people who know how to argue without anger. Argue with them. If they’re red, take a blue position. If they’re blue, take a red position. Practice the art of arguing. You’re going to need it.
Remember that the only thing worse than arguing is not arguing – we know how to argue. Now we need to learn to argue without anger. Our future may depend on it.
Most people (in America at least) would probably agree with the following statement:
Men are bigger risk takers than women.
Several research studies seem to have documented this. Researchers have asked people what risky behaviors they engage in (or would like to engage in). For instance, they might ask a randomly selected group of men and women whether they would like to jump out of an airplane (with a parachute). Men – more often than women – say that this is an appealing idea. Ask about driving a motorcycle and the response is more or less the same. Men are interested, women not so much. QED: men are bigger risk takers than women.
But are we taking a conceptual leap here (without a parachute)? How do we know if something is true? What’s the operational definition of “risk”? Should we be engaging our baloney detectors right about now?
In her new book, Testosterone Rex, Cordelia Fine suggests that we’ve pretty much got it all backwards. The problem with using skydiving and motorcycle driving as proxies for risk is that they are far too narrow. Indeed, they are narrowly masculine definitions of risk. So, in effect, we’re asking a different question:
Would you like to engage in activities that most men define as risky?
It’s a circular argument. We give a masculine definition of risk and then conclude that men are more likely to engage in that activity than women. No duh.
Fine points out that, “In the United States, being pregnant is about 20 times more likely to result in death than is a sky dive.” So which gender is really taking the big risks?
As with so many issues in logic and critical thinking, we need to examine our definitions. If we define our variables in narrow ways, we’ll get narrow and – most likely – biased results.
Fine writes that many people believe in Testosterone Rex – the idea that differences between men and women are biological and driven largely by hormonal effects. But when she examines the evidence, she finds one logical flaw after another. Researchers skew definitions, reverse cause-and-effect, and use small samples to produce large (and unsupported) conclusions.
Ultimately, Fine concludes that we aren’t born as males and females in the traditional way that we think about gender. Rather, when we’re born, society starts to shape us into its conception of what each gender ought to be. It’s a bracing and clearly argued point that seems to be backed up by substantial evidence.
It’s also a great example of baloney detection and a good case study for any class in critical thinking.
In my critical thinking class, we investigate a couple of dozen cognitive biases — fallacies in the way our brains process information and reach decisions. These include the confirmation bias, the availability bias, the survivorship bias, and many more. I call these factory-installed biases – we’re born this way.
But we haven’t asked the question behind the biases: why are we born that way? What’s the point of thinking fallaciously? From an evolutionary perspective, why haven’t these biases been bred out of us? After all, what’s the benefit of being born with, say, the confirmation bias?
Elizabeth Kolbert has just published an interesting article in The New Yorker that helps answer some of these questions. The article reviews three new books about how we think.
Kolbert writes that the basic idea that ties these books together is sociability as opposed to logic. Our brains didn’t evolve to be logical. They evolved to help us be more sociable. Here’s how Kolbert explains it:
“Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.”
So, the confirmation bias, for instance, doesn’t help us make good, logical decisions but it does help us cooperate with others. If you say something that confirms what I already believe, I’ll accept your wisdom and think more highly of you. This affirms our alliance to each other and unifies our group. I know I can trust you because you see the world the same way I do.
If, on the other hand, someone in another group says something that disconfirms my belief, I know that she doesn’t agree with me. She doesn’t see the world the same way I do. I don’t see this as a logical challenge but as a social challenge. I doubt that I can work effectively with her. Rather than checking my facts, I check her off my list of trusted cooperators. An us-versus-them dynamic develops, which solidifies cooperation in my group.
Mercier and Sperber, in fact, change the name of the confirmation bias to the “myside bias”. I cooperate with my side. I don’t cooperate with people who don’t confirm my side.
Why wouldn’t the confirmation/myside bias have gone away? Kolbert quotes Mercier and Sperber: “This is one of many cases in which the environment changed too quickly for natural selection to catch up.” All we have to do is wait 1,000 generations or so. Or maybe we can program artificial intelligence to solve the problem.
I just spotted this article on Inc. magazine’s website:
The article’s subhead is: “America’s 25 most admired CEOs have earned the respect of their people. Here’s how you can too.”
Does this sound familiar? It’s a good example of the survivorship fallacy. (See also here and here). The 25 CEOs selected for the article “survived” a selection process. The author then highlights the common behaviors among the 25 leaders. The implication is that — if you behave the same way — you too will become a revered leader.
Is it true? Well, think about the hundreds of CEOs who didn’t survive the selection process. I suspect that many of the unselected CEOs behave in ways that are similar to the 25 selectees. But the unselected CEOs didn’t become revered leaders. Why not? Hard to say … precisely because we’re not studying them. It’s not at all clear to me that I will become a revered leader if I behave like the 25 selectees. In fact, the reverse may be true — people may think that I’m being inauthentic and lose respect for me.
A better research method would be to select 25 leaders who are “revered” and compare them to 25 leaders who are not “revered”. (Defining what “revered” means will be slippery). By selecting two groups, we have some basis for comparison and contrast. This can often lead to deeper insights.
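The two-group comparison described above is easy to sketch in code. Here’s a minimal, entirely hypothetical simulation (the 1,000 CEOs, the “habit,” and all the numbers are made up for illustration, not taken from the Inc. article): we give every CEO a behavior that, by construction, has no effect on success, then look at how common that behavior is among the top 25 “survivors” versus everyone else.

```python
import random

random.seed(1)

# Hypothetical population: 1,000 CEOs. "habit" stands in for any admired
# behavior; by construction it has NO effect on success, which is pure luck.
ceos = [{"habit": random.random() < 0.6,
         "success": random.gauss(0, 1)}
        for _ in range(1000)]

ranked = sorted(ceos, key=lambda c: c["success"], reverse=True)

# Survivorship method: study only the 25 most "successful" CEOs.
top_25 = ranked[:25]
rate_top = sum(c["habit"] for c in top_25) / 25

# Two-group method: compare them against everyone who wasn't selected.
rest = ranked[25:]
rate_rest = sum(c["habit"] for c in rest) / len(rest)

print(rate_top, rate_rest)
```

Studied alone, the top 25 seem to share the habit, and it’s tempting to call it the secret of their success. The comparison group reveals that the unselected CEOs share it at about the same rate, so the habit explains nothing. That’s exactly the insight the survivorship method can never deliver.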
As it stands, the Inc. article reminds me of the book for teenagers called How To Be Popular. It’s cute but not very meaningful.
In last year’s NCAA football championship game, Alabama beat Clemson by a score of 45 to 40.
In this year’s NCAA football championship game, Clemson beat Alabama by a score of 35 to 31.
The aggregate score is 76 to 75 in favor of Alabama.
So, which team is more skilled?
To ponder the question, we need to return to Michael Mauboussin’s ideas* about skill and luck – and, especially, his concept of the paradox of skill.
Let’s start with definitions for skill and luck. For Mauboussin, a key question helps us identify skill: Can I lose on purpose? If the answer is yes, then some skill must be involved in the process, whether you’re shooting hoops or playing poker. If the answer is no, then the process is random – it’s a matter of luck.
Most processes – like NCAA football games – involve both skill and luck. How can we sort out the differences between the two? Was Alabama more skilled last year or just luckier? What about Clemson this year?
Mauboussin’s paradox of skill can help us sort this out. Simply put, the paradox states that: “In activities that involve some luck, the improvement of skill makes luck more important…” We have training programs that can improve skills in many competitive activities, including sports, business performance, combat, and perhaps even investing. As more people take advantage of these programs and average skill levels improve, you might think that luck would become less important in determining outcomes.
Mauboussin says that exactly the opposite is true. The big issue is skill differential and distribution. If a given skill is unevenly distributed in a society, then skill likely determines the outcome. Luck doesn’t have a chance to worm its way in. On the other hand, if skill is broadly and evenly distributed, then even minor fluctuations in luck can change the outcome.
As an example, Mauboussin cites the difference between the winning time and the time for the 20th finisher in the men’s Olympic marathon. In 1932, the difference was 39 minutes. In 2012, it was 7.5 minutes. Clearly, the skill of marathon running has become more evenly distributed over the past 80 years. We have more highly skilled runners, and their skills are more evenly matched than in the past. As a result, the marathon has become much more competitive.
Paradoxically, as the marathon has become more competitive, luck plays a greater role. Let’s say that the 1932 winner had the bad luck of stepping in a pothole at Mile 22 and had to limp to the finish line. Because he had so much more skill than the other runners, he might still have won the race. If the 2012 winner stepped in the same pothole, chances are the other (highly skilled) runners would have caught and passed him. He would have lost because of bad luck.
The paradox of skill should teach us some humility and help illuminate the illusion of control. We may think we’re successful because we’re skilled and talented and can control the events around us. But oftentimes – especially when skill is evenly distributed – it’s nothing more than an illusion. It’s just plain luck.
And what about Clemson and Alabama? My interpretation is that both teams are perfectly balanced in terms of skills. So the outcome depends almost entirely on luck: a lucky bounce, a stray breeze, a bad call, a slippery turf, and so on. Let’s celebrate two great teams that have separated themselves from the pack but not from each other. Perhaps we should call them Clembama.
* I used several sources for Mauboussin’s ideas: his 2012 book, The Success Equation; a very succinct presentation he gave to the CFA Institute, also in 2012; his 2011 HBR article; a 2014 lecture he gave as part of the Authors at Google series; and David Hurst’s very enlightening review of the book.