Ever since Richard Thaler and Cass Sunstein published Nudge in 2008, we’ve been debating the ethics and practicality of “nudging” people into making the “right” decisions.
Thaler and Sunstein mine the same intellectual vein worked by Daniel Kahneman, Amos Tversky, Dan Ariely, and Charles Duhigg. We may think we make rational decisions, but we have biases, habits, and quirks that inject a degree of irrationality into every decision we make. While the other researchers explain how our heuristics work, Thaler and Sunstein give practical advice for nudging people toward rational decisions that serve their best interests.
Thaler and Sunstein refer to their work as “libertarian paternalism.” It sounds like an oxymoron, but the basic idea is that you still have the right to screw up your life by making bad decisions. At the same time, we (whoever “we” is) will nudge you into making decisions that are good for you.
Most observers seem to have accepted that nudging by the government or by your employer is ethically acceptable. After all, it’s good for you, right? But there is a minority that objects to the paternalism inherent in the concept. For instance, Michael Beran writes that, “The authors of Nudge seem not to understand that the welfare of a people depends in part on their being free to choose badly. … Probably most people … can point to experiences where their mistakes proved fruitful. … Should we gradually foreclose the freedom to be stupid, we will almost certainly end up being less intelligent.”
So is nudging libertarian paternalism, as Thaler and Sunstein would have it, or false benevolence, as Beran would put it? It seems like a debate that’s worth having … but not before we answer a more basic question: does nudging work? Why bother to debate the ethics of a concept if it doesn’t actually work?
So, does nudging work? We have a lot of anecdotal evidence that making one choice easier than another can nudge people in the “right” direction. For instance, if we want to encourage organ donations, we can offer people a choice to donate or not when they get their driver’s license. The driver’s bureau can structure the default option in one of two ways: 1) You’re not a donor unless you opt in; 2) You are a donor unless you opt out. It seems likely that Option 2 would nudge people in the right direction.
As we know, however, anecdotal evidence is very weak. We tend to make up stories that fit our preconceived notions. And Frank Pasquale, writing in The Atlantic, argues that nudges are often too weak to overcome ingrained behaviors. So, is there any controlled, randomized research that would answer a simple question: does nudging work?
Somewhat surprisingly, the first such research was published just last month in Science magazine. John Bohannon, the article’s author, reports on 15 controlled trials that involved more than 100,000 people in the United States. The trials involved signing up for various government-supplied social services. In each case, some randomly selected participants were given the “standard application.” Other participants were given a “psychological nudge in which the information was presented slightly differently … for instance, … one choice was made easier than another.”
The results? “In 11 of the trials, the nudge modestly increased a person’s response rate or influenced them to make financially smarter decision[s].” Bohannon includes data on three of the trials, which moved decisions in the right direction by 2.9%, 3.6%, and 11%. As Bohannon notes, these are modest changes, but the costs were probably low as well (no data were given on costs). If so, the cost-benefit ratio may be positive as well.
We now have some solid evidence that nudges actually work. We don’t know the cost-benefit ratios but let’s assume for a moment that they’re positive. If so, the question becomes, should we encourage the government to use them? On the one hand, as Bohannon notes, businesses pay billions of dollars per year for their own nudges, known as advertising. Why shouldn’t the government participate on (closer to) equal footing? On the other hand, libertarians argue that it’s really another kind of nudge – toward the nanny state. I’ll write more about the debate in the future. In the meantime, what do you think?
When I worked for business-to-business software vendors, I often met companies that were simply out of date. They hadn’t caught up with the latest trends and buzzwords. They used inefficient processes and outdated business practices.
Why were they so far behind? Because that’s the way their software worked. They had loaded an early version of a software system (perhaps from my company) and never upgraded it. The system became comfortable. It was the way they had always done it. If it ain’t broke, don’t fix it.
I’ve often wondered if we humans don’t do the same thing. Perhaps we load the software called Human 1.0 during childhood and then just forget about it. It works. It gets us through the day. It’s comfortable. Don’t mess with success.
Fixing the problem for companies was easy: just buy my new software. But how do we solve the problem (if it is a problem) for humans? How do we load Human 2.0? What patches do we need? What new processes do we need to learn? What new practices do we need to adopt?
As a teacher of critical thinking, I’d like to think that critical thinking is one element of such an upgrade. When we learn most skills – ice skating, piano playing, cooking, driving, etc. – we seek out a teacher to help us master the craft. We use a teacher – and perhaps a coach – to help us upgrade our skills to a new level.
But not so with thinking. We think we know how to think; we’ve been doing it all our lives. We don’t realize that thinking is a skill like any other. If we want to get better at basketball, we practice. If we want to get better at thinking … well, we don’t really want to get better at thinking, do we? We assume that we’re good enough. If the only thinking we know is the thinking that we do, then we don’t see the need to change our thinking.
So how do we help people realize that they can upgrade their thinking? Focusing on fallacies often works. I often start my classes by asking students to think through the way we make mistakes. For instance, we often use shortcuts – more formally known as heuristics – to reach decisions quickly. Most of the time they work – we make good decisions and save time in the process. But when they don’t work, we make very predictable errors. We invade the wrong country, marry the wrong person, or take the wrong job.
When we make big mistakes, we can draw one of two conclusions. On the one hand, we might conclude that we made a mistake and need to rethink our thinking. On the other hand, we might conclude that our thinking was just fine but that our political opponents undermined our noble efforts. If not for them, everything would be peachy. The second conclusion is lazy and popular. We’re not responsible for the mess – someone else is.
But let’s focus for a moment on the first conclusion – we realize that we need to upgrade our thinking. Then what? Well … I suppose that everyone could sign up for my critical thinking class. But what if that’s not enough? As people realize that there are better ways to think, they’ll ask for coaches, teachers, and gurus.
If you’re an entrepreneur, there’s an opportunity here. I expect that many companies and non-profit organizations will emerge to promote the need and service the demand. The first one I’ve spotted is the Center for Applied Rationality (CFAR). Based in Berkeley (of course), CFAR’s motto is “Turning Cognitive Science Into Cognitive Practice.” I’ve browsed through their website and read a very interesting article in the New York Times. CFAR seems to touch on many of the same concepts that I use in my critical thinking class – but they do it on a much grander scale.
If I’m right, CFAR is at the leading edge of an interesting new wave. I expect to see many more organizations pop up to promote rationality, cognitive enhancements, behavioral economics, or … to us traditional practitioners, critical thinking. Get ready. Critical thinking is about to be industrialized. Time to put your critical thinking cap on.
Not long ago, I drove to my doctor’s office for a 10:00 AM appointment. To get there, I drove past the University of Denver and a local elementary school.
The university students were ambling off to their ten o’clock classes. They ambled randomly, crossing the street from different locations and at different angles. Rather than using the crosswalks, they often stepped out from behind parked cars. I couldn’t guess where or when they might emerge from hiding and step directly into the path of my car.
The elementary students were also on a break but they were formed up in neat lines. The younger ones held hands in well-organized two-by-two columns. Teachers were in control and the kids only moved when directed by adults. Then they moved only in predictable fashion in predictable directions.
I thought, “Huh … the school kids are much better behaved than the college kids. The college kids should behave like the school kids, not the other way round. The college kids may be learning advanced, abstract concepts but they need to get back to the basics.”
A few days later, I had another think and asked a different question: Which set of kids induced better, safer behavior in me? Clearly, it was the college kids.
Here’s how it works. When I drove past the elementary school, I was aware that school was in session. I drove slowly and paid close attention to my surroundings. At the same time, however, it was clear that the kids were well behaved and under control. I could predict their behavior and I predicted that they would behave safely. I was aware of the situation but not overly concerned.
With the college students, on the other hand, I had no idea what they would do. They were behaving irrationally. Anything could happen. By the elementary school, I was aware. By the university, I was hyper-aware. I drove even more cautiously by the university than by the elementary school.
The college kids influenced my behavior by acting irrationally. As it happens, that’s a key element of game theory – a field advanced by John Nash, the brilliant mathematician who was also haunted by mental illness (and who died recently in a traffic accident).
In game theory, if you don’t know what your opponent will do, you may circumscribe your own behavior. I didn’t know what the college students would do, so I drove extra carefully. I ruled out options that I might have considered if the college students were behaving more rationally and predictably.
In other words, acting irrationally is often a perfectly rational thing to do. I’m sure the college students didn’t consciously choose to act irrationally. But a crafty actor might well behave irrationally on purpose to limit her opponent’s options.
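The driving example can be sketched as a toy payoff calculation. The numbers below are invented purely for illustration – the point is only that a small chance of unpredictable behavior is enough to flip the driver’s rational choice:

```python
# Hypothetical payoffs (made-up numbers for illustration):
# driving fast saves time (+2) but risks a large loss (-100)
# if an unpredictable pedestrian steps out; driving carefully
# always costs a little time (-1) but avoids the accident risk.

def expected_payoff(action, p_step_out):
    """Driver's expected payoff given the probability that a
    pedestrian steps out unpredictably."""
    if action == "fast":
        return (1 - p_step_out) * 2 + p_step_out * (-100)
    return -1  # "careful": slow but safe either way

# Predictable pedestrians (the schoolchildren): fast is fine.
print(round(expected_payoff("fast", 0.0), 2))      # → 2.0

# Unpredictable pedestrians (the college students): even a 5%
# chance of a step-out makes "careful" the rational choice.
print(round(expected_payoff("fast", 0.05), 2))     # → -3.1
print(round(expected_payoff("careful", 0.05), 2))  # → -1
```

Note that the pedestrians never change the driver’s payoffs directly; their unpredictability alone is what rules out the “fast” option – which is the game-theoretic point.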
In fact, I think this school example perfectly explains the behavior of the finance ministers in the current Greek financial crisis. More on that tomorrow.