To answer the question, you’ll need to do a fair amount of research. You might dig through police reports, census data, city government publications, and so on. It’s a lot of work.
But our brains don’t like to work. As Daniel Kahneman writes, “Thinking is to humans as swimming is to cats. They can do it, but they prefer not to.”
So, instead of answering the original question, we substitute a simpler question: How much crime can I remember in my neighborhood?
If we can remember a lot of crime – if it’s top of mind – we’ll guess that our neighborhood has a high crime rate. If we can’t remember much crime, we’ll guess that we have a low crime rate. We use our memory as a proxy for reality. It’s simple and probably not wholly wrong. It’s good enough.
Let me ask you another simple question: How dangerous is coronavirus?
It’s a tough question. We can’t possibly know the “right” answer. Even the experts can’t figure it out. So, how does our mind work on a tough question like this?
First, we use our memory as a proxy for reality. How top of mind is coronavirus? How available is it to our memory? (This, as you might guess, is known as the availability bias). Our media is saturated with stories about coronavirus. We see it every day. It’s easy to recall from memory. Must be a big deal.
Second, the media will continue to focus on coronavirus for several more months (at least). In the beginning, the media focused on the disease itself. Now, the media is more likely to focus on secondary effects – travel restrictions, quarantines, etc. Soon, the media will focus on reactions to the virus. Protesters will march on Washington demanding immediate action to protect us. The media will cover it.
The media activity is known as an availability cascade. The story keeps cascading into new stories and new angles on the same old story. The cascade keeps the story top of mind. It remains easily available to us. When was the last time we had a huge availability cascade? Think back to 2014 and the Ebola crisis. Sound familiar?
Third, our minds will consider how vivid the information is. How scary is it? How creepy? We remember vicious or horrific crimes much better than we remember mundane crimes like Saturday night stickups. How vivid is coronavirus? We see pictures every day of workers in hazmat suits. It’s vivid.
Fourth, what are other people doing? When we don’t know how to act in a given situation, we look for cues from our fellow humans. What do we see today? Pictures of empty streets and convention centers. We read that Chinatown in New York is empty of tourists. People are afraid. If they’re afraid, we probably should be, too.
Fifth, how novel is the situation? We’re much more afraid of devils we don’t know than of devils that we do know. The coronavirus – like the Ebola virus before it – is new and, therefore, unknowable. Health experts can reassure us but, deep in our heart of hearts, we know that nobody knows. We can easily imagine that it’s the worst-case scenario. It could be the end of life as we know it.
Sixth, is it controllable? We want to think that we can control the world around us. We study history because we think that knowing the past will help us control the future. If something scary is out of our control, we will spare no expense to bring it back under control. Even a small scare – like the Three Mile Island incident – can produce a huge reaction. At times, it seems that the cure may be worse than the disease.
What to do? First, let’s apply some contextual thinking – both current and historical. You’re much more likely to succumb to plain old ordinary flu than you are to be infected by coronavirus. So, get a flu shot. Then do what the old British posters from World War II told us all to do: Keep calm and carry on.
When I worked for business-to-business software vendors, I often met companies that were simply out of date. They hadn’t caught up with the latest trends and buzzwords. They used inefficient processes and outdated business practices.
Why were they so far behind? Because that’s the way their software worked. They had loaded an early version of a software system (perhaps from my company) and never upgraded it. The system became comfortable. It was the way they had always done it. If it ain’t broke, don’t fix it.
I’ve often wondered if we humans don’t do the same thing. Perhaps we load the software called Human 1.0 during childhood and then just forget about it. It works. It gets us through the day. It’s comfortable. Don’t mess with success.
Fixing the problem for companies was easy: just buy my new software. But how do we solve the problem (if it is a problem) for humans? How do we load Human 2.0? What patches do we need? What new processes do we need to learn? What new practices do we need to adopt?
As a teacher of critical thinking, I’d like to think that critical thinking is one element of such an upgrade. When we learn most skills – ice skating, piano playing, cooking, driving, etc. – we seek out a teacher to help us master the craft. We use a teacher – and perhaps a coach – to help us upgrade our skills to a new level.
But not so with thinking. We think we know how to think; we’ve been doing it all our lives. We don’t realize that thinking is a skill like any other. If we want to get better at basketball, we practice. If we want to get better at thinking … well, we don’t really want to get better at thinking, do we? We assume that we’re good enough. If the only thinking we know is the thinking that we do, then we don’t see the need to change our thinking.
So how do we help people realize that they can upgrade their thinking? Focusing on fallacies often works. I often start my classes by asking students to think through the way we make mistakes. For instance, we often use shortcuts – more formally known as heuristics – to reach decisions quickly. Most of the time they work – we make good decisions and save time in the process. But when they don’t work, we make very predictable errors. We invade the wrong country, marry the wrong person, or take the wrong job.
When we make big mistakes, we can draw one of two conclusions. On the one hand, we might conclude that we made a mistake and need to rethink our thinking. On the other hand, we might conclude that our thinking was just fine but that our political opponents undermined our noble efforts. If not for them, everything would be peachy. The second conclusion is lazy and popular. We’re not responsible for the mess – someone else is.
But let’s focus for a moment on the first conclusion – we realize that we need to upgrade our thinking. Then what? Well… I suppose that everyone could sign up for my critical thinking class. But what if that’s not enough? As people realize that there are better ways to think, they’ll ask for coaches, and teachers, and gurus.
If you’re an entrepreneur, there’s an opportunity here. I expect that many companies and non-profit organizations will emerge to promote the need and service the demand. The first one I’ve spotted is the Center for Applied Rationality (CFAR). Based in Berkeley (of course), CFAR’s motto is “Turning Cognitive Science Into Cognitive Practice”. I’ve browsed through their website and read a very interesting article about them in the New York Times. CFAR seems to touch on many of the same concepts that I use in my critical thinking class – but they do it on a much grander scale.
If I’m right, CFAR is at the leading edge of an interesting new wave. I expect to see many more organizations pop up to promote rationality, cognitive enhancements, behavioral economics, or … to us traditional practitioners, critical thinking. Get ready. Critical thinking is about to be industrialized. Time to put your critical thinking cap on.
I speak Spanish reasonably well, but I find it very tiring … which, oddly enough, suggests that I probably think more clearly and ethically in Spanish than in English.
Like so many things, it’s all related to our two different modes of thinking: System 1 and System 2. System 1 is fast and efficient and operates below the level of consciousness. It makes the great majority of our decisions, typically without any input from our conscious selves. We literally make decisions without knowing that we’re making decisions.
System 2 is all about conscious thought. We bring information into System 2, think it through, and make reasoned decisions. System 2 uses a lot of calories; it’s hard work. As Daniel Kahneman says, “Thinking is to humans as swimming is to cats; they can do it but they’d prefer not to.”
English, of course, is my native language. (American English, that is). It’s second nature to me. It’s easy and fluid. I can think in English without thinking about it. In other words, English is the language of my System 1. At this point in my life, it’s the only language in my System 1 and will probably remain so.
To speak Spanish, on the other hand, I have to invoke System 2. I have to think about my word choice, pronunciation, phrasing, and so on. It’s hard work and wears me out. I can do it but I would have to live in Spain for a while for it to become easy and fluid. (That’s not such a bad idea, is it?)
You may remember that System 1 makes decisions using heuristics or simple rules of thumb. System 1 simplifies everything and makes snap judgments. Most of the time, those judgments are pretty good but, when they’re wrong, they’re wrong in consistent ways. System 1, in other words, is the source of biases that we all have.
To overcome these biases, we have to bring the decision into System 2 and consider it rationally. That takes time, effort, and energy and, oftentimes, we don’t do it. It’s easy to conclude that someone is a jerk. It’s more difficult to invoke System 2 to imagine what that person’s life is like.
So how does language affect all this? I can only speak Spanish in my rational, logical, conscious System 2. When I’m thinking in Spanish, all my rational neurons are firing. I tend to think more carefully, more thoughtfully, and more ethically. It’s tiring.
When I think in English, on the other hand, I could invoke my System 2 but I certainly don’t have to. I can easily use heuristics in English but not in Spanish. I can jump to conclusions in English but not in Spanish.
The seminal article on this topic was published in 2012 by three researchers at the University of Chicago. They write, “Would you make the same decisions in a foreign language as you would in your native tongue? It may be intuitive that people would make the same choices regardless of the language they are using…. We discovered, however, that the opposite is true: Using a foreign language reduces decision-making biases.”
So, it’s true: I’m a better person in Spanish.
In my critical thinking classes, students get a good dose of heuristics and biases and how they affect the quality of our decisions. Daniel Kahneman and Amos Tversky popularized the notion that we should look at how people actually make decisions as opposed to how they should make decisions if they were perfectly rational.
Most of our decision-making heuristics (or rules of thumb) work most of the time but when they go wrong, they do so in predictable and consistent ways. For instance, we’re not naturally good at judging risk. We tend to overestimate the risk of vividly scary events and underestimate the risk of humdrum, everyday problems. If we’re aware of these biases, we can account for them in our thinking and, perhaps, correct them.
The finding that our economic decisions are often irrational rather than rational has created a whole new field, generally known as behavioral economics. The field ties together concepts as diverse as the availability bias, the endowment effect, the confirmation bias, overconfidence, and hedonic adaptation to explain how people actually make decisions. Though it’s called economics, the basis is psychology.
So does this mean that traditional, rational, statistical, academic decision-making is dead? Well, not so fast. According to Justin Fox’s article in a recent issue of Harvard Business Review, there are at least three philosophies of decision-making and each has its place.
Fox acknowledges that, “The Kahneman-Tversky heuristics-and-biases approach has the upper hand right now, both in academia and in the public mind.” But that doesn’t mean that it’s the only game in town.
The traditional, rational, tree-structured logic of formal decision analysis hasn’t gone away. Fox argues that the classic approach – created by Ronald Howard, Howard Raiffa, and Ward Edwards – is best suited to making “Big decisions with long investment horizons and reliable data [as in] oil, gas, and pharma.” Fox notes that Chevron is a major practitioner of the art and that Nate Silver, famous for accurately predicting the elections of 2012, used a Bayesian variant of the basic approach.
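For readers curious about the Bayesian machinery behind that variant, here’s a minimal sketch of a single Bayesian update in Python. Every number is invented for illustration; the point is only the mechanics of revising a probability as new evidence arrives.

```python
# A toy Bayesian update: revise the probability that a candidate wins
# after seeing one favorable poll. All numbers are hypothetical.

prior = 0.60            # P(win) before the poll
p_poll_if_win = 0.80    # P(favorable poll | win)
p_poll_if_lose = 0.30   # P(favorable poll | lose)

# Bayes' rule: P(win | poll) = P(poll | win) * P(win) / P(poll)
p_poll = p_poll_if_win * prior + p_poll_if_lose * (1 - prior)
posterior = (p_poll_if_win * prior) / p_poll

print(f"Posterior probability of winning: {posterior:.2f}")  # 0.80
```

Feed in each new poll the same way – yesterday’s posterior becomes today’s prior – and the estimate keeps sharpening as evidence accumulates.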
And what about non-rational heuristics that actually do work well? Let’s say, for instance, that you want to rationally allocate your retirement savings across N different investment options. Investing evenly in each of the N funds is typically just as good as any other approach. Known as the 1/N approach, it’s a simple heuristic that leads to good results. Similarly, in choosing between two options, selecting the one you’re more familiar with usually creates results that are no worse than any other approach – and does so more quickly and at much lower cost.
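To make the 1/N heuristic concrete, here’s a minimal sketch in Python. The fund names and dollar amount are hypothetical, purely for illustration:

```python
# The 1/N heuristic: split a contribution evenly across N investment
# options. Fund names and the total are hypothetical.

def one_over_n_allocation(total, funds):
    """Allocate `total` evenly across the given funds."""
    share = total / len(funds)
    return {fund: share for fund in funds}

funds = ["US Stocks", "International Stocks", "Bonds", "Real Estate"]
allocation = one_over_n_allocation(10_000, funds)

for fund, amount in allocation.items():
    print(f"{fund}: ${amount:,.2f}")  # each fund gets $2,500.00
```

The arithmetic is trivial, and that’s the point: a rule this simple often performs about as well as far more elaborate optimization.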
Fox calls this the “effective heuristics” approach or, more simply, the gut-feel approach. Fox suggests that this is most effective, “In predictable situations with opportunities for learning, [such as] firefighting, flying, and sports.” When you have plenty of practice in a predictable situation, your intuition can serve you well. In fact, I’d suggest that the famous (or infamous) interception at the goal line in this year’s Super Bowl resulted from exactly this kind of thinking.
And where does the heuristics-and-biases model fit best? According to Fox, it helps us to “Design better institutions, warn ourselves away from dumb mistakes, and better understand the priorities of others.”
So, we have three philosophies of decision-making and each has its place in the sun. I like the heuristics-and-biases approach because I like to understand how people actually behave. Having read Fox, though, I’ll be sure to add more on the other two philosophies in upcoming classes.
We went to the airport the other day and realized that we were out of cash. I stopped at an ATM, pulled out a credit card, and froze. I rarely use that particular card at ATMs and I had completely forgotten the personal identification number. I stared blankly at the ATM screen for a few minutes and then slowly started to walk away.
A few seconds later, the number popped into my head: 2061. I’m used to having things pop into my head as I “give up” on a problem. When I focus on a problem, I block out information. As I start to unfocus, useful information pops back into my head. I find that I’m much less creative when I’m intently focused. (Recently, for instance, the comedian Steven Wright’s name popped into my head only after I’d given up trying to remember it.)
Pleased that my mind was working so effectively, I returned to the ATM, inserted my card, and entered the digits 2061. Wrong. Hmmm … perhaps I transposed some digits. I tried various combinations: 2601, 2106, 1206, and so on. Nothing worked.
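For the curious, the trial-and-error space I was working through is small enough to enumerate. A toy sketch, assuming the four digits really were 2, 0, 6, and 1:

```python
# Enumerate every distinct 4-digit PIN formed by rearranging 2061.
from itertools import permutations

candidates = sorted({"".join(p) for p in permutations("2061")})

print(len(candidates))   # 24 distinct orderings
print(candidates[:4])    # ['0126', '0162', '0216', '0261']
```

Twenty-four possibilities – far more than an ATM will let you try before it swallows your card.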
So again, I walked slowly away from the terminal. As I did, I noticed that I was standing next to an airport conference room. The number on the door: 2061. My System 1 had picked up the number subconsciously. It wasn’t a useful data point so System 1 didn’t register it with System 2. Then my System 2 broadcast a message: “We’re looking for a four-digit number.” At that point, System 1 produced the most recent four-digit number it was aware of: 2061.
Unfortunately, it was the wrong number. But I was convinced it was the right number. It popped into my head just the way I expected it to.
Was my mind playing tricks on me? Not really. In David Brooks’ phrase, my “millions of little scouts” were out surveying my environment. One scout sent back some information that might be useful: the number 2061. The little scout was trying to help. Unfortunately, he led me astray. System 1 is usually right. But when it’s wrong, it can get you into big trouble. Like getting your credit card canceled.