Most people (in America at least) would probably agree with the following statement:
Men are bigger risk takers than women.
Several research studies seem to have documented this. Researchers have asked people what risky behaviors they engage in (or would like to engage in). For instance, they might ask a randomly selected group of men and women whether they would like to jump out of an airplane (with a parachute). Men – more often than women – say that this is an appealing idea. Ask about driving a motorcycle and the response is more or less the same. Men are interested, women not so much. QED: men are bigger risk takers than women.
But are we taking a conceptual leap here (without a parachute)? How do we know if something is true? What’s the operational definition of “risk”? Should we be engaging our baloney detectors right about now?
In her new book, Testosterone Rex, Cordelia Fine suggests that we’ve pretty much got it all backwards. The problem with using skydiving and motorcycle driving as proxies for risk is that they are far too narrow. Indeed, they are narrowly masculine definitions of risk. So, in effect, we’re asking a different question:
Would you like to engage in activities that most men define as risky?
It’s a circular argument. We give a masculine definition of risk and then conclude that men are more likely to engage in that activity than women. No duh.
Fine points out that, “In the United States, being pregnant is about 20 times more likely to result in death than is a sky dive.” So which gender is really taking the big risks?
As with so many issues in logic and critical thinking, we need to examine our definitions. If we define our variables in narrow ways, we’ll get narrow and – most likely – biased results.
Fine writes that many people believe in Testosterone Rex – the idea that differences between men and women are biological and driven largely by hormonal effects. But when she examines the evidence, she finds one logical flaw after another. Researchers skew definitions, reverse cause and effect, and use small samples to produce large (and unsupported) conclusions.
Ultimately, Fine concludes that we aren’t born as males and females in the traditional way that we think about gender. Rather, from the moment we’re born, society starts to shape us into its conception of what each gender ought to be. It’s a bracing and clearly argued point that seems to be backed up by substantial evidence.
It’s also a great example of baloney detection and a good case study for any class in critical thinking.
In my critical thinking class, we investigate a couple of dozen cognitive biases — fallacies in the way our brains process information and reach decisions. These include the confirmation bias, the availability bias, the survivorship bias, and many more. I call these factory-installed biases – we’re born this way.
But we haven’t asked the question behind the biases: why are we born that way? What’s the point of thinking fallaciously? From an evolutionary perspective, why haven’t these biases been bred out of us? After all, what’s the benefit of being born with, say, the confirmation bias?
Elizabeth Kolbert has just published an interesting article in The New Yorker that helps answer some of these questions. (Click here). The article reviews three new books about how we think.
Kolbert writes that the basic idea that ties these books together is sociability as opposed to logic. Our brains didn’t evolve to be logical. They evolved to help us be more sociable. Here’s how Kolbert explains it:
“Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.”
So, the confirmation bias, for instance, doesn’t help us make good, logical decisions but it does help us cooperate with others. If you say something that confirms what I already believe, I’ll accept your wisdom and think more highly of you. This helps us confirm our alliance to each other and unifies our group. I know I can trust you because you see the world the same way I do.
If, on the other hand, someone in another group says something that disconfirms my belief, I know that she doesn’t agree with me. She doesn’t see the world the same way I do. I don’t see this as a logical challenge but as a social challenge. I doubt that I can work effectively with her. Rather than checking my facts, I check her off my list of trusted cooperators. An us-versus-them dynamic develops, which solidifies cooperation in my group.
Mercier and Sperber, in fact, change the name of the confirmation bias to the “myside bias”. I cooperate with my side. I don’t cooperate with people who don’t confirm my side.
Why wouldn’t the confirmation/myside bias have gone away? Kolbert quotes Mercier and Sperber: “This is one of many cases in which the environment changed too quickly for natural selection to catch up.” All we have to do is wait 1,000 generations or so. Or maybe we can program artificial intelligence to solve the problem.
I just spotted this article on Inc. magazine’s website:
The article’s subhead is: “America’s 25 most admired CEOs have earned the respect of their people. Here’s how you can too.”
Does this sound familiar? It’s a good example of the survivorship fallacy. (See also here and here). The 25 CEOs selected for the article “survived” a selection process. The author then highlights the common behaviors among the 25 leaders. The implication is that — if you behave the same way — you too will become a revered leader.
Is it true? Well, think about the hundreds of CEOs who didn’t survive the selection process. I suspect that many of the unselected CEOs behave in ways that are similar to the 25 selectees. But the unselected CEOs didn’t become revered leaders. Why not? Hard to say … precisely because we’re not studying them. It’s not at all clear to me that I will become a revered leader if I behave like the 25 selectees. In fact, the reverse may be true — people may think that I’m being inauthentic and lose respect for me.
A better research method would be to select 25 leaders who are “revered” and compare them to 25 leaders who are not “revered”. (Defining what “revered” means will be slippery). By selecting two groups, we have some basis for comparison and contrast. This can often lead to deeper insights.
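The value of a comparison group can be shown in a toy simulation. This is purely a hypothetical illustration (the numbers and the “habit” are invented, not drawn from the Inc. article): suppose 500 CEOs each either practice some habit or not, at random, and success is pure luck. Studying only the top 25 “survivors” makes the habit look like a success secret, while comparing them to everyone else reveals it is just as common among the unselected.

```python
import random

random.seed(0)

# Hypothetical setup: 500 CEOs. Each either practices some habit
# (say, "wakes at 5 a.m.") or not, chosen at random -- and success
# is pure luck, completely unrelated to the habit.
n = 500
habit = [random.random() < 0.6 for _ in range(n)]    # ~60% have the habit
success = [random.gauss(0, 1) for _ in range(n)]     # success is pure luck

# The "survivors": the 25 most successful, analogous to the Inc. list.
ranked = sorted(range(n), key=lambda i: success[i], reverse=True)
top25, rest = ranked[:25], ranked[25:]

rate_top = sum(habit[i] for i in top25) / len(top25)
rate_rest = sum(habit[i] for i in rest) / len(rest)

# A survivorship-style article would report "most top CEOs share this
# habit" -- true, but the unselected CEOs share it at the same rate.
print(f"habit rate among top 25: {rate_top:.0%}")
print(f"habit rate among others: {rate_rest:.0%}")
```

Both rates hover around the same 60%, which is exactly what the two-group design makes visible and the survivors-only design hides.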
As it stands, the Inc. article reminds me of the book for teenagers called How To Be Popular. It’s cute but not very meaningful.
In last year’s NCAA football championship game, Alabama beat Clemson by a score of 45 to 40.
In this year’s NCAA football championship game, Clemson beat Alabama by a score of 35 to 31.
The aggregate score is 76 to 75 in favor of Alabama.
So, which team is more skilled?
To ponder the question, we need to return to Michael Mauboussin’s ideas* about skill and luck – and, especially, his concept of the paradox of skill.
Let’s start with definitions for skill and luck. For Mauboussin, a key question helps us identify skill: Can I lose on purpose? If the answer is yes, then some skill must be involved in the process, whether you’re shooting hoops or playing poker. If the answer is no, then the process is random – it’s a matter of luck.
Most processes – like NCAA football games – involve both skill and luck. How can we sort out the differences between the two? Was Alabama more skilled last year or just luckier? What about Clemson this year?
Mauboussin’s paradox of skill can help us sort this out. Simply put, the paradox states that: “In activities that involve some luck, the improvement of skill makes luck more important…” We have training programs that can improve skills in many competitive activities, including sports, business performance, combat, and perhaps, even investing. As more people take advantage of these programs and average skill levels improve, you might think that luck would become less important in determining outcomes.
Mauboussin says that exactly the opposite is true. The big issue is skill differential and distribution. If a given skill is unevenly distributed in a society, then skill likely determines the outcome. Luck doesn’t have a chance to worm its way in. On the other hand, if skill is broadly and evenly distributed, then even minor fluctuations in luck can change the outcome.
As an example, Mauboussin cites the difference between the winning time and the time of the 20th finisher in the men’s Olympic marathon. In 1932, the gap was 39 minutes. In 2012, it was 7.5 minutes. Clearly, the skill of marathon running has become more evenly distributed over the past 80 years. With more highly skilled runners clustered closer together, the marathon has become much more competitive.
Paradoxically, as the marathon has become more competitive, luck plays a greater role. Let’s say that the 1932 winner had the bad luck of stepping in a pothole at Mile 22 and had to limp to the finish line. Because he had so much more skill than the other runners, he might still have won the race. If the 2012 winner stepped in the same pothole, chances are the other (highly skilled) runners would have caught and passed him. He would have lost because of bad luck.
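The pothole story can be turned into a minimal simulation. This is my own sketch, not Mauboussin’s model or his numbers: treat each competitor’s result as skill plus random luck, and watch how often the more skilled competitor actually wins as the skill gap narrows.

```python
import random

random.seed(1)

def win_rate_of_best(skill_gap, luck_spread, trials=20_000):
    """Fraction of two-competitor contests won by the more skilled one,
    when each result = skill + random luck."""
    wins = 0
    for _ in range(trials):
        # Competitor A is better than B by skill_gap; both get random luck.
        result_a = skill_gap + random.gauss(0, luck_spread)
        result_b = 0.0 + random.gauss(0, luck_spread)
        if result_a > result_b:
            wins += 1
    return wins / trials

# 1932-style contest: big skill gap, so luck rarely changes the outcome.
wide = win_rate_of_best(skill_gap=3.0, luck_spread=1.0)
# 2012-style contest: skills nearly equal, so luck decides far more often.
narrow = win_rate_of_best(skill_gap=0.2, luck_spread=1.0)

print(f"wide skill gap:   more skilled competitor wins {wide:.0%}")
print(f"narrow skill gap: more skilled competitor wins {narrow:.0%}")
```

With a wide skill gap the better competitor wins nearly every time, pothole or no pothole; with a narrow gap the win rate drops toward a coin flip, which is the paradox of skill in miniature.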
The paradox of skill should teach us some humility and help illuminate the illusion of control. We may think we’re successful because we’re skilled and talented and can control the events around us. But oftentimes – especially when skill is evenly distributed – it’s nothing more than an illusion. It’s just plain luck.
And what about Clemson and Alabama? My interpretation is that the two teams are almost perfectly matched in skill. So the outcome depends almost entirely on luck: a lucky bounce, a stray breeze, a bad call, slippery turf, and so on. Let’s celebrate two great teams that have separated themselves from the pack but not from each other. Perhaps we should call them Clembama.
* I used several sources for Mauboussin’s ideas. His 2012 book, The Success Equation, is here. In 2012, he also gave a very succinct presentation to the CFA Institute. That paper is here. His HBR article from 2011 is here. In 2014, he gave a lecture as part of the Authors at Google series – you can find the video here. And David Hurst’s very enlightening review of Mauboussin’s book is here.
I’ve written at various times about embodied cognition – the idea that the body influences the mind. (See here, here, and here.) In other words, our mind is not limited to our brain. We think with our bodies as well. You can improve your confidence by making yourself big. You can brighten your mood by putting a smile on your face. Want to feel morally pure? Take a bath.
How far does this extend? The clothes you wear, for instance, touch your body and mediate between your body and the world around you. It’s fair to ask: do the clothes you wear influence your thinking?
The answer is yes. Hajo Adam and Adam Galinsky introduced the term “enclothed cognition” in an article in the Journal of Experimental Social Psychology in July 2012. (Click here). They write that enclothed cognition describes, “…the systematic influence that clothes have on the wearer’s psychological processes.” They also suggest that two factors come into play: “the symbolic meaning of the clothes and the physical experience of wearing them.”
Many clothes have symbolic value. Take the humble white coat. In a hospital setting, we might assume that someone wearing a white coat is an expert or an authority. We behave differently towards her because of the coat’s symbolism. In other words, the coat affects the perceiver’s cognition and behavior. But does it affect the wearer’s cognition?
Adam and Galinsky conducted three experiments to find out. In the first, they randomly divided participants into two groups, one of which wore white lab coats while the other did not. Both groups then performed a Stroop task, in which color words are printed in mismatched ink colors – for instance, the word “blue” printed in red or the word “green” printed in yellow – and participants must spot the incongruities between the words and the colors. The group wearing white lab coats performed about twice as well as the other group.
The second test used three groups. One group wore a white lab coat and believed that it was a doctor’s coat. The second group wore an identical white lab coat but believed that it was a painter’s coat. The third group wore normal street clothes. The experimenters asked the three groups to spot discrepancies in a series of illustrations. Those who wore the doctor’s coat found more discrepancies than either of the other two groups. The symbolic value of a doctor’s coat had a greater impact on attention than did the painter’s coat.
The third experiment was similar to the second except that some groups didn’t wear the doctor’s or painter’s coat; they merely observed them. Those who donned the doctor’s coat performed best.
The study suggests that the symbolic nature of clothing does indeed affect our cognition. Merely observing the clothes does not trigger the effect (or does so only mildly). Actually wearing the clothes has a meaningful impact on our thinking and behavior.
These studies suggest that our clothes not only affect how others perceive us. They also affect how we perceive ourselves. Even if no one sees us, our clothes influence our cognition. Perhaps, then, we can dress for success, even if we work alone. Similarly, wearing athletic clothes may well improve our chances of getting a good workout. Dressing like a member of the clergy may make us behave more ethically. Dressing like a slob may make us behave like a slob.
There’s one other wrinkle that was brought to my attention – oddly enough – by my spellchecker. When I wrote “enclothed cognition”, the spellchecker consistently converted it to “unclothed cognition”. This raises an interesting question. If clothes affect our cognition in certain ways, does the absence of clothes affect our cognition in other ways? Time for another study.