I just spotted this article on Inc. magazine’s website:
The article’s subhead is: “America’s 25 most admired CEOs have earned the respect of their people. Here’s how you can too.”
Does this sound familiar? It’s a good example of the survivorship fallacy. The 25 CEOs selected for the article “survived” a selection process. The author then highlights the common behaviors among the 25 leaders. The implication is that — if you behave the same way — you too will become a revered leader.
Is it true? Well, think about the hundreds of CEOs who didn’t survive the selection process. I suspect that many of the unselected CEOs behave in ways that are similar to the 25 selectees. But the unselected CEOs didn’t become revered leaders. Why not? Hard to say … precisely because we’re not studying them. It’s not at all clear to me that I will become a revered leader if I behave like the 25 selectees. In fact, the reverse may be true — people may think that I’m being inauthentic and lose respect for me.
A better research method would be to select 25 leaders who are “revered” and compare them to 25 leaders who are not “revered”. (Defining what “revered” means will be slippery). By selecting two groups, we have some basis for comparison and contrast. This can often lead to deeper insights.
As it stands, the Inc. article reminds me of the book for teenagers called How To Be Popular. It’s cute but not very meaningful.
When I went off to college, my mother told me, “Now remember … you’re going to college to learn how to think. Don’t miss that lesson.”
I wonder what she would say if she were sending me off to college today. It might be more along the lines of, “Now remember … you’re going to college to get a good job. Don’t blow it.”
We can only judge programs and processes based on their goals. If the goal of government is to provide good services at a reasonable cost, we might give it a fairly low grade. However, if the goal of government is to increase employment, then we might evaluate it more positively. The same is true of higher education. So what is the goal of higher education? Is it to teach students how to think? Or is it to provide them skills to get a job?
I would argue that the goal of higher education – indeed of any education – is to improve the students’ ability to think. Good thinking can certainly help you get a job. In fact, it may be the ultimate job skill. But good thinking can take you much farther than a good job. Here’s my thinking on the issue:
1) Thinking is foundational — the essence of running a business (or a government) is to make decisions about the future. To make effective decisions, we need to understand how we think, how our thinking can be biased, and how to evaluate evidence and arguments. If we know everything about finance, for instance, but don’t know how to think effectively, we will make decisions based on faulty evidence, weak arguments, and unconscious biases. By chance, we might still make some good decisions. But we should remember Louis Pasteur’s thought, “Chance favors the prepared mind.”
2) The future is unknowable — our niece, Amelia, will graduate from college next May. If she works until she’s 65, she’ll retire in the year 2060. What skills will employers need in 2060? Who knows? As I reflect on my own education, the content I learned in college is largely useless today. The processes I learned, however, are still very relevant. Thinking is the ultimate process. Amelia will still need to think effectively in 2060.
3) Thinking promotes freedom – if we can’t think for ourselves, we will forever be buffeted by other people’s agendas, desires, ambitions, and rhetorical excesses. Critical thinking allows us to assess ideas and social movements and make effective decisions on our own. We can frame our thinking as we wish and not allow others to create frames for us. We can identify the truth rather than relying on others to tell us what is true. Critical thinking allows us to take control of our own destiny, which is the essence of freedom. I don’t know of any other discipline that can make the same claim.
When faced with a difficult question, we often substitute a simpler question and answer that instead. Here are three examples:
In each case, we substitute a proxy for the original question. We assume that the proxy measures the same thing that the original question aimed to measure. Sometimes we’re right; sometimes we’re wrong. Most often, we don’t think about the fact that we’re using a proxy. System 1 does the thinking for us. But we can, in fact, bring the proxy to System 2 and evaluate whether it’s effective or not. If we think about it, we can use System 2 to spot errors in System 1. But we have to think about it.
As it happens, System 1 uses proxies in some situations that we might never think about. Here’s an example: How much food should you eat?
We tend to think of food in terms of quantity. System 1 also considers food as a source of energy. System 1 is trying to answer two questions: 1) How much energy does my body need? 2) How much food does that translate to?
Our bodies have learned that sweet food delivers more energy than non-sweet food and can use this to translate from energy needs to food requirements. Let’s say that the equation looks something like this:
1 calorie* of energy is generated by 10 grams of sweet food
Let’s also assume that our body has determined that we need 10 calories of energy. A simple calculation indicates that we need to eat 100 grams of sweet food. Once we’ve eaten 100 grams, System 1 can issue a directive to stop eating.
Now let’s change the scenario by introducing artificial sweeteners that add sweetness without adding many calories. The new translation table might look like this:
1 calorie of energy is generated by 30 grams of artificially sweetened food
If we still need 10 calories of energy, we will need to eat 300 grams of artificially sweetened food. System 1 issues a directive to stop only after we’ve eaten the requisite amount.
System 1 can’t tell the difference between artificially and naturally sweetened foods. It has only one translation table. If we eat a lot of artificially sweetened food, System 1 will learn the new translation table. If we then switch back to naturally sweetened foods, System 1 will still use the new translation table. It will still tell us to eat 300 grams of food to get 10 calories of energy.
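The translation-table arithmetic above can be sketched as a toy model in Python. The numbers (10 grams per calorie, 30 grams per calorie, a 10-calorie requirement) come from the example; the class and method names are purely illustrative, not physiology:

```python
# Toy model of System 1's energy-to-quantity translation table.
# The figures come from the example above; this is an illustration only.

class System1:
    def __init__(self, grams_per_calorie=10):
        # Learned rule: 1 calorie of energy ~ this many grams of sweet food
        self.grams_per_calorie = grams_per_calorie

    def grams_to_eat(self, calories_needed):
        # Translate an energy requirement into a "stop eating" threshold
        return calories_needed * self.grams_per_calorie

    def relearn(self, grams_per_calorie):
        # A diet heavy in artificially sweetened food rewrites the table
        self.grams_per_calorie = grams_per_calorie


body = System1(grams_per_calorie=10)
print(body.grams_to_eat(10))        # -> 100 grams of naturally sweet food

body.relearn(grams_per_calorie=30)  # after lots of artificial sweeteners
print(body.grams_to_eat(10))        # -> 300 grams, even if we switch back
```

Note that `relearn` overwrites the single table rather than keeping two, which is exactly the point of the example: System 1 has only one translation rule at a time.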
We would never know that our brain makes energy/quantity assumptions if not for studies like this one. It’s not intuitively obvious that we need to invoke System 2 to examine the relationship between artificial sweeteners and food intake. But like crime rates or cars or shampoos, we often answer different questions than we think we’re answering. To think more clearly, we need to examine our proxies more carefully.
*It’s actually a kilocalorie of energy but we Americans refer to it as a calorie.
We’re all more or less familiar with the syllogism. The idea is that we can state premises – with certain rules – and draw conclusions that are logically valid. So we might say:
Major premise: All humans are mortal.
Minor premise: Travis is a human.
Conclusion: Therefore, Travis is mortal.
In this case, the syllogism is deemed valid because the conclusion flows logically from the premises. It’s also considered sound since both premises are demonstrably true. Since the syllogism is both valid and sound, the conclusion is irrefutable.
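One way to see why the syllogism is valid is to model the premises as set membership: “all humans are mortal” says the set of humans is contained in the set of mortals, so anything in the first set must be in the second. A minimal sketch (the sets here are illustrative):

```python
# "All humans are mortal" = the set of humans is a subset of the set of mortals.
humans = {"Travis", "Socrates"}
mortals = humans | {"Fido"}          # some mortals (a dog, say) aren't human

# Major premise: every human is mortal
assert humans <= mortals

# Minor premise: Travis is a human
assert "Travis" in humans

# Conclusion: Travis is mortal -- guaranteed by subset plus membership
assert "Travis" in mortals
```

The assertions can’t fail in that order: if `humans` is a subset of `mortals` and `"Travis"` is in `humans`, then `"Travis"` must be in `mortals`. That is what “the conclusion flows logically from the premises” means.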
We often think in syllogisms though we typically don’t realize it. Here’s one that I go through each morning:
Major premise: People get up when the sun rises.
Minor premise: The sun is rising.
Minor premise: I’m a person.
Conclusion: Therefore, I need to get up.
I don’t usually think, “Oh good for me … another syllogism solved”. Rather, I just get out of bed.
We often associate syllogisms with logic but we can also use them for persuasion. Indeed, Aristotle identified a form of syllogism that he believed was more persuasive than any other form of logic.
Aristotle called it an enthymeme – it’s simply a syllogism with an unstated major premise. Since the major premise is assumed rather than stated, we don’t consider it consciously. We don’t ask ourselves, Is it valid? Is it sound? We just assume that everything is correct and get on with life.
Though they don’t use the terminology, advertisers long ago discovered that enthymemes are powerful persuaders. People who receive the message don’t consciously examine the premise. That’s exactly what advertisers want.
As an example, let’s dissect one of my favorite ads: the 2012 Volkswagen Passat ad featuring the kid in the Darth Vader costume. The kid wanders around the house trying to use the Force to turn on the TV, cook lunch, and so on. Of course, it never works. Then Dad comes home, parks his new Passat in the driveway, and turns it off. The kid uses the Force to turn it back on. Dad recognizes what’s going on and uses his remote starter to start the car just as the kid hurls the Force in the right direction. The car starts, the kid is amazed, and we all love the commercial.
So what’s the premise? Here’s how the ad works:
Major (hidden) premise: Car companies that produce loveable ads also produce superior cars.
Minor premise: VW produced a loveable ad.
Conclusion: Therefore, VW produces superior cars.
When we think about the major premise, we realize that it’s illogical. The problem is that we don’t think about it. It enters our subconscious mind (System 1) rather than our conscious mind (System 2). We don’t examine it because we’re not aware of it.
Here’s another one. I’ve seen numerous ads in magazines that tout a product that’s also advertised on TV. The magazine ads often include the line: As Seen On TV. Here’s the enthymeme:
Major (hidden) premise: Products advertised on TV are superior to those that aren’t advertised on TV.
Minor premise: This product is advertised on TV.
Conclusion: Therefore, it’s a superior product.
When we consciously examine the premise, we realize that it’s ridiculous. The trick is to remind ourselves to examine the premise.
If you want to defend yourself against unscrupulous advertisers (or politicians), always be sure to ask yourself, What’s the hidden premise?
When I worked for business-to-business software vendors, I often met companies that were simply out of date. They hadn’t caught up with the latest trends and buzzwords. They used inefficient processes and outdated business practices.
Why were they so far behind? Because that’s the way their software worked. They had loaded an early version of a software system (perhaps from my company) and never upgraded it. The system became comfortable. It was the way they had always done it. If it ain’t broke, don’t fix it.
I’ve often wondered if we humans don’t do the same thing. Perhaps we load the software called Human 1.0 during childhood and then just forget about it. It works. It gets us through the day. It’s comfortable. Don’t mess with success.
Fixing the problem for companies was easy: just buy my new software. But how do we solve the problem (if it is a problem) for humans? How do we load Human 2.0? What patches do we need? What new processes do we need to learn? What new practices do we need to adopt?
As a teacher of critical thinking, I’d like to think that critical thinking is one element of such an upgrade. When we learn most skills – ice skating, piano playing, cooking, driving, etc. – we seek out a teacher to help us master the craft. We use a teacher – and perhaps a coach – to help us upgrade our skills to a new level.
But not so with thinking. We think we know how to think; we’ve been doing it all our lives. We don’t realize that thinking is a skill like any other. If we want to get better at basketball, we practice. If we want to get better at thinking … well, we don’t really want to get better at thinking, do we? We assume that we’re good enough. If the only thinking we know is the thinking that we do, then we don’t see the need to change our thinking.
So how do we help people realize that they can upgrade their thinking? Focusing on fallacies often works. I often start my classes by asking students to think through the way we make mistakes. For instance, we often use shortcuts – more formally known as heuristics – to reach decisions quickly. Most of the time they work – we make good decisions and save time in the process. But when they don’t work, we make very predictable errors. We invade the wrong country, marry the wrong person, or take the wrong job.
When we make big mistakes, we can draw one of two conclusions. On the one hand, we might conclude that we made a mistake and need to rethink our thinking. On the other hand, we might conclude that our thinking was just fine but that our political opponents undermined our noble efforts. If not for them, everything would be peachy. The second conclusion is lazy and popular. We’re not responsible for the mess – someone else is.
But let’s focus for a moment on the first conclusion – we realize that we need to upgrade our thinking. Then what? Well… I suppose that everyone could sign up for my critical thinking class. But what if that’s not enough? As people realize that there are better ways to think, they’ll ask for coaches, and teachers, and gurus.
If you’re an entrepreneur, there’s an opportunity here. I expect that many companies and non-profit organizations will emerge to promote the need and service the demand. The first one I’ve spotted is the Center for Applied Rationality (CFAR). CFAR is based in Berkeley (of course), and its motto is “Turning Cognitive Science Into Cognitive Practice”. I’ve browsed through their web site and read a very interesting article in the New York Times. CFAR seems to touch on many of the same concepts that I use in my critical thinking class – but they do it on a much grander scale.
If I’m right, CFAR is at the leading edge of an interesting new wave. I expect to see many more organizations pop up to promote rationality, cognitive enhancements, behavioral economics, or … to us traditional practitioners, critical thinking. Get ready. Critical thinking is about to be industrialized. Time to put your critical thinking cap on.