Over the past several years, I’ve written a number of articles about cognitive biases, and I hope I’ve alerted my readers to their causes and consequences. My general approach is simple: forewarned is forearmed.
I didn’t realize that I was participating in a more general trend known as debiasing. As Wikipedia notes, “Debiasing is the reduction of bias, particularly with respect to judgment and decision making.” The basic idea is that we can change things to help people and organizations make better decisions.
What can we change? According to A User’s Guide To Debiasing, we can do two things: modify the decision maker (Type 1) or modify the decision environment (Type 2).
I’ve been using a Type 1 approach. I’ve aimed at modifying the decision maker by providing information about the source of biases and describing how they skew our perception of reality. We often aren’t aware of the nature of our own perception and judgment. I liken my approach to making the fish aware of the water they’re swimming in. (To review some of my articles in this domain, click here, here, here, and here).
What does a Type 2 approach look like? How do we modify the environment? The general domain is called choice architecture. The idea is that we change the process by which the decision is made. The book Nudge by Richard Thaler and Cass Sunstein is often cited as an exemplar of this type of work. (My article on using a courtroom process to make corporate decisions fits in the same vein).
How important is debiasing in the corporate world? In 2013, McKinsey & Company surveyed 770 corporate board members to determine the characteristics of a high-performing board. The “biggest aspiration” of high-impact boards was “reducing decision biases”. As McKinsey notes, “At the highest level, boards look inward and aspire to more ‘meta’ practices—deliberating about their own processes, for example—to remove biases from decisions.”
More recently, McKinsey has written about the business opportunity in debiasing. They note, for instance, that businesses are least likely to question their core processes. Indeed, they may not even recognize that they are making decisions. In my terminology, they’re not aware of the water they’re swimming in. As a result, McKinsey concludes “…most of the potential bottom-line impact from debiasing remains unaddressed.”
What to do? Being a teacher, I would naturally recommend training and education programs as a first step. McKinsey agrees … but only up to a point. McKinsey notes that many decision biases are so deeply embedded that managers don’t recognize them. They swim blithely along without recognizing how the water shapes and distorts their perception. Or, perhaps more frequently, they conclude, “I’m OK. You’re Biased.”
Precisely because such biases frequently operate in System 1 as opposed to System 2, McKinsey suggests a program consisting of both training and structural changes. In other words, we need to modify both the decision maker and the decision environment. I’ll write more about structural changes in the coming weeks. In the meantime, if you’d like a training program, give me a call.
Ever since Richard Thaler and Cass Sunstein published Nudge in 2008, we’ve been debating the ethics and practicality of “nudging” people into making the “right” decisions.
Thaler and Sunstein mine the same intellectual vein worked by Daniel Kahneman, Amos Tversky, Dan Ariely, and Charles Duhigg. We may think we make rational decisions, but we have biases, habits, and quirks that inject a degree of irrationality into every decision we make. While the other researchers explain how our heuristics work, Thaler and Sunstein give practical advice for nudging people toward rational decisions that serve their best interests.
Thaler and Sunstein refer to their work as “libertarian paternalism”. It sounds like an oxymoron, but the basic idea is that you still have the right to screw up your life by making bad decisions. At the same time, we (whoever “we” are) will nudge you into making decisions that are good for you.
Most observers seem to have accepted that nudging by the government or by your employer is ethically acceptable. After all, it’s good for you, right? But there is a minority that objects to the paternalism inherent in the concept. For instance, Michael Beran writes that, “The authors of Nudge seem not to understand that the welfare of a people depends in part on their being free to choose badly. … Probably most people … can point to experiences where their mistakes proved fruitful. … Should we gradually foreclose the freedom to be stupid, we will almost certainly end up being less intelligent.”
So is nudging libertarian paternalism, as Thaler and Sunstein would have it, or false benevolence, as Beran would have it? It seems like a debate worth having … but not before we answer a more basic question: does nudging work? Why debate the ethics of a concept if it doesn’t actually work?
So, does nudging work? We have a lot of anecdotal evidence that making one choice easier than another can nudge people in the “right” direction. For instance, if we want to encourage organ donations, we can offer people a choice to donate or not when they get their driver’s license. The driver’s bureau can structure the default option in one of two ways: 1) You’re not a donor unless you opt in; 2) You are a donor unless you opt out. It seems likely that Option 2 would nudge people in the right direction.
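The logic of the two default structures above can be sketched in a few lines of Python. This is purely illustrative — the function name and signature are my own invention, not anything from the driver’s-bureau example:

```python
def donor_status(default_is_donor: bool, person_acted: bool) -> bool:
    """Return whether a person ends up registered as an organ donor.

    The default governs anyone who takes no action;
    acting (opting in or opting out) flips the default.
    """
    return default_is_donor != person_acted  # acting toggles the default

# Option 1: you're not a donor unless you opt in.
# A person who does nothing is not a donor.
assert donor_status(default_is_donor=False, person_acted=False) is False

# Option 2: you are a donor unless you opt out.
# The same inaction now leaves the person registered as a donor.
assert donor_status(default_is_donor=True, person_acted=False) is True
```

The point of the sketch is that the outcome for the passive majority is set entirely by the default — which is exactly why Option 2 would nudge more people toward donation.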
As we know, however, anecdotal evidence is very weak. We tend to make up stories that fit our preconceived notions. And Frank Pasquale, writing in The Atlantic, argues that nudges are often too weak to overcome ingrained behaviors. So, is there any controlled, randomized research that would answer a simple question: does nudging work?
Somewhat surprisingly, the first such research was published just last month in Science magazine. (Click here). John Bohannon, the article’s author, reports on 15 controlled trials that involved more than 100,000 people in the United States. The trials involved signing up for various government-supplied social services. In each case, some randomly selected participants were given the “standard application.” Other participants were given a “psychological nudge in which the information was presented slightly differently … for instance, … one choice was made easier than another.”
The results? “In 11 of the trials, the nudge modestly increased a person’s response rate or influenced them to make financially smarter decisions.” Bohannon includes data on three of the trials, which moved decisions in the right direction by 2.9%, 3.6%, and 11%. As Bohannon notes, these are modest changes, but the costs were probably low as well (no data were given on costs). If so, the cost-benefit ratio may be favorable, too.
We now have some solid evidence that nudges actually work. We don’t know the cost-benefit ratios but let’s assume for a moment that they’re positive. If so, the question becomes, should we encourage the government to use them? On the one hand, as Bohannon notes, businesses pay billions of dollars per year for their own nudges, known as advertising. Why shouldn’t the government participate on (closer to) equal footing? On the other hand, libertarians argue that it’s really another kind of nudge – toward the nanny state. I’ll write more about the debate in the future. In the meantime, what do you think?