In their book, Decisive, the Heath brothers write that there are four major villains of decision making.
Narrow framing – we miss alternatives and options because we frame the possibilities narrowly. We don’t see the big picture.
Confirmation bias – we collect and attend to self-serving information that reinforces what we already believe. Conversely, we tend to ignore (or never see) information that contradicts our preconceived notions.
Short-term emotion – we get wrapped up in the dynamics of the moment and make premature commitments.
Overconfidence – we think we have more control over the future than we really do.
A recent article in the McKinsey Quarterly notes that many “bad choices” in business result not just from bad luck but also from “cognitive and behavioral biases”. The authors argue that executives fall prey to their own biases and may not recognize when “debiasing” techniques need to be applied. In other words, executives (just like the rest of us) make faulty assumptions without realizing it.
Though the McKinsey researchers don’t reference the Heath brothers’ book, they focus on two of the four villains: the confirmation bias and overconfidence. They estimate that these two villains are involved in roughly 75 percent of corporate decisions.
The authors quickly summarize a few of the debiasing techniques – premortems, devil’s advocates, scenario planning, war games etc. – and suggest that these are quite appropriate for the big decisions of the corporate world. But what about everyday, bread-and-butter decisions? For these, the authors suggest a quick checklist approach is more appropriate.
The authors provide two checklists, one for each bias. The checklist for confirmation bias asks questions like (slightly modified here):
Have the decision-makers assembled a diverse team?
Have they discussed their proposal with someone who would certainly disagree with it?
Have they considered at least one plausible alternative?
The checklist for overconfidence includes questions like these:
What are the decision’s two most important side effects that might negatively affect its outcome? (This question is asked at three levels of abstraction: 1) inside the company; 2) inside the company’s industry; 3) in the macro-environment).
Answering these questions leads to a matrix that suggests the appropriate course of action. There are four possible outcomes:
Decide – “the process that led to [the] decision appears to have included safeguards against both confirmation bias and overconfidence.”
Reach out – the process has been tested for downside risk but may still be based on overly narrow assumptions. To use the Heath brothers’ terminology, the decision makers should widen their options with techniques like the vanishing option test.
Stress test – the decision process probably overcomes the confirmation bias but may depend on overconfident assumptions. Decision makers need to challenge these assumptions using techniques like premortems and devil’s advocates.
Reconsider – the decision process is open to both the confirmation bias and overconfidence. Time to reboot the process.
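The four outcomes above form a simple 2x2 matrix: each checklist either does or doesn't show a safeguard against its bias. Here's a minimal sketch of that mapping; the function name, parameter names, and boolean framing are my own illustrative assumptions, not from the McKinsey article.

```python
def recommend_action(confirmation_bias_safeguarded: bool,
                     overconfidence_safeguarded: bool) -> str:
    """Map the two checklist results onto the four possible outcomes.

    This is an illustrative sketch of the 2x2 matrix described in the
    article, not McKinsey's actual scoring method.
    """
    if confirmation_bias_safeguarded and overconfidence_safeguarded:
        return "Decide"       # safeguards against both biases are in place
    if overconfidence_safeguarded:
        return "Reach out"    # downside tested, but assumptions may be narrow
    if confirmation_bias_safeguarded:
        return "Stress test"  # diverse inputs, but possibly overconfident
    return "Reconsider"       # open to both biases: restart the process
```

In practice, each boolean would be the result of answering the corresponding checklist questions, but the point of the matrix is the same: the two biases are independent, so each needs its own test.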
The McKinsey article covers much of the same territory covered by the Heath brothers. Still, it provides a handy checklist for recognizing biases and assumptions that often go unnoticed. It helps us bring subconscious biases to conscious attention. In Daniel Kahneman’s terminology, it moves the decision from System 1 to System 2. Now let’s ask the McKinsey researchers to do the same for the two remaining villains: narrow framing and short-term emotion.
When I worked for Lawson in Sweden, we ran a leadership development program called the Greenhouse (or Växthuset in Swedish). Students participated in yearlong projects that stretched their skills and got them out of their comfort zones. They swapped roles, did stints in different departments, and spent a lot of time with customers, including some disgruntled ones.
The students were overwhelmingly positive about the program. But I always wondered whether it was effective. After all, it’s not so hard to get good evaluations from students.
Did we really develop better leaders? Or did we just select natural leaders and show them a good time for a year? And, if they were better leaders, what were they better at? Were they better, for instance, at acquiring and integrating other companies (which was part of our strategy)? Or did they improve their ability to promote innovation? Or did they improve their ability to implement financial controls during economic turbulence? Or what?
Could they really be better at all these things? Which ones were most important? I thought about all this as I read a recent McKinsey report, titled “Why Leadership Development Programs Fail”. According to McKinsey, they fail for four reasons. Let’s look at each.
1) Overlooking context – McKinsey calls it context but I call it strategy. Many leadership programs are decoupled from company strategies. McKinsey notes, for instance, that programs often “…rest on the assumption that one size fits all…” and consist of “…a long list of leadership standards…” and “…corporate values statements.”
McKinsey suggests that managers ask a simple question, “What, precisely, is this program for?” For instance, one of Lawson’s strategies was to target specific verticals (and ignore others). So the Greenhouse curriculum helped students understand how to identify and develop verticals. On the other hand, we probably didn’t give enough emphasis to acquisitions, another part of our strategy.
The key is to ask what skills are needed to execute the strategy successfully. There may be only two or three. Make sure the students work specifically on those behaviors.
2) Decoupling reflection from real work – retreats are nice but they don’t change behaviors. Indeed, McKinsey estimates that adults retain only 10% of what they learn in lectures. Instead of lectures, focus on workshops, collaborative projects, and exercises that require students to learn by doing.
We created fictional exercises for the Greenhouse. The students, for instance, had to “sell” a major contract to a “customer” who was using a competitor’s software. That’s not bad but, of course, real projects are better.
3) Underestimating mind-sets – to become good leaders, students will often need to learn new skills and change their behavior. Yet, changing behavior is one of the most difficult teaching objectives imaginable.
Let’s say, for instance, that your strategy is to decentralize authority and push decision-making outward and downward. Your leadership program should reflect this. But, if the student’s mind-set is I have to be in control at all times, you’ll need to change the mind-set before you can change the behavior.
How do you change mind-sets? Remember the broccoli tip. If you can convince a kid to eat broccoli, you can convince a manager to delegate effectively.
4) Failing to measure results – leadership development programs are often exciting (and even fun) and so they get high marks from the participants. But that’s not the point. More importantly, what happens to the participants after they leave the program? Do they rise in the ranks? Do they depart for other companies?
You’ll still have the selection versus value-added conundrum. Did the program actually add value or did you simply select natural leaders to include in the program? The only way to sort this out is to put a few randomly selected participants in the mix and see how they fare.
I have a lot of faith in leadership development programs. The bottom line: tailor the program to your strategy. If you don’t have a strategy, work on that first.
I recently designed a new pair of running shoes. I went to the NikeID website and configured approximately a dozen different components to create shoes in my custom colors. Nike assembled the shoes and delivered them to my doorstep in about a week. The price was roughly five percent higher than a standard shoe in standard colors at the mall.
Why did I design my own shoes? Well, partially because I could. Nike’s website has easy-to-use tools that let me mix and match colors until I got exactly what I wanted. I chose to go gaudy because that seems to be the trend today. The downside – Suellen is somewhat embarrassed to be seen with me. The upside – they’re great conversation starters, especially with women. Men, not so much.
Note that I only configured my new shoes. I didn’t custom build them. The shoe is an Air Pegasus, a standard, off-the-shelf model. I couldn’t design my own tread pattern or insole, for instance. Nor could I fool with the sizes. My feet are not the same size and I’d love to order shoes that fit each foot. Not yet. All I could do was take a standard template in a standard size and customize the colors.
I chose Nikes because I know exactly what size to order. I’m sure that all the athletic shoe companies have similar configurators but I’m leery about ordering the wrong size. I’ll probably stick with Nikes until better sizing systems come along.
According to McKinsey, in their latest report on mass customization, I probably won’t have to wait long. In fact, I should be able to do more customizing and less configuring in a number of product categories. Here are some examples drawn from the McKinsey report.
Sizing and virtual try-ons – several companies, including Styku and Constrvct, will scan your body (or parts of it) to create very precise 3D models. The system then projects the model on your computer and you “try on” clothes, shoes, etc. (Suellen used a simple version of this when ordering her Warby Parker glasses).
Create your own products – Starbucks allows you to create your own coffee drinks at frappuccino.com. At Adagio Teas, you can create your own tea that the company ships to you. You can keep it a secret (Travis Tea?) or offer it to the public. If other people order it, you get points toward future purchases. At Shoes of Prey, you can configure virtually any kind of shoe, not just running shoes. In all these examples, you get what you want but the vendor also gets more than just a sale. They also get valuable insights into market trends and what’s popular where.
Recommendation engines – I’m used to Amazon suggesting that if I like Book X, I’ll also enjoy Book Y. That’s nice. But how about a recommendation engine that will help me build a new product? Why couldn’t the NikeID system tell me, for instance, that red and orange don’t go well together? McKinsey highlights a company called Chocri, which helps you build custom chocolate bars. If you create a bar with strawberry bits, it will recommend complementary flavors like cinnamon, almonds, or edible gold flakes. Who knew?
So, we’ll have much more choice in the near future. With flexible software to manage inventories and programmable robots to assemble the goods, custom-made products probably won’t cost much more than today’s off-the-shelf goods. That should create clear benefits for both suppliers and customers. It won’t be long. In fact, we may see many of the solutions even before I wear out my gaudy new shoes.
When I joined Lawson Software in 2006, it was merging with another software company, Intentia. Both companies had roughly 2,000 employees. Lawson was headquartered in St. Paul, Minnesota and 95% of its revenues came from the United States. Intentia was based in Stockholm and 95% of its revenues came from Europe. We were merging two regional companies to create one global company.
I was responsible for merging the two marketing departments. I found employees scattered all over the globe and no shortage of ideas on how we should integrate and communicate the “new Lawson”. I met people as quickly as I could and asked for suggestions on how to build an internal infrastructure, integrate marketing operations, share the best ideas from around the world, and re-brand the combined company.
One idea I heard frequently was, “Let’s launch a wiki.” So we did. Marketing staff could share ideas and get acquainted through a global marketing wiki. We found some free software, organized it minimally, and launched it. I sent out an e-mail endorsing the idea and inviting everyone to participate.
Then I stood aside. I was familiar with the traditional wisdom that, “The surest way to stifle innovation is to let the boss speak first.” So, instead of participating actively, I let others take the lead. I assumed that was the best way to bring out creative ideas that we could mash up into effective innovations.
The wiki launched with a surge of enthusiasm. But it didn’t last long. Some people dominated the conversations; other people shied away. Conversational threads spun off into infinity. I thought the wiki would be an excellent place to share ideas and processes. As it evolved, mostly it was a place to share opinions – sometimes heatedly. Ultimately, it sank into irrelevance.
I could have saved a lot of time and effort, if only I had had McKinsey’s new report on social media as an internal integrator. McKinsey notes that many companies have invested in social media for external use but that, “…internal applications have barely begun to tap their full potential.” McKinsey also notes that companies could unlock up to $1.3 trillion in annual value through “products and services that enable social interactions in the digital realm.”
How to unlock that value? McKinsey suggests we follow four basic principles:
Add value, not complexity – the best social technologies “… become central to the organization and complement (or, ideally, substitute for) existing processes.” Our little wiki at Lawson was one more thing to do, not one less.
Provide essential organizational support – identify your objectives, select a technology, and then figure out what support and encouragement is needed to make it work. One of McKinsey’s clients used “Connections Geniuses” to encourage people to use the social technology. At Lawson, I thought the wiki was simple enough for everyone to use. It probably was, but I overlooked the need for ongoing encouragement.
Experiment and learn – as Tom Kelley points out in another context, treat everything as an experiment. Experiments only fail if you don’t learn anything from them. As our wiki sank into obscurity, we just let it go rather than learning from it and trying again.
Track impact and evolve metrics – when you’re experimenting, you’re not always sure what the best metrics are. Don’t jump the gun and commit prematurely – you may be measuring the wrong thing. Rather, let the experiment proceed and then decide how best to measure its impact.