

Intuition — Wicked and Kind

My intuition is good here.

When I was climbing mountains regularly, I thought I had pretty good intuition. Even if I didn’t know quite why I was making a decision, I generally made pretty good decisions. I usually made conservative as opposed to risky decisions. Intuitively, I could reasonably judge whether a decision was too conservative, too risky, or just right.

When I was an executive, on the other hand, my intuition for business decisions was not especially good. I didn’t have a “feel” for the situation. In the mountains, I could “fly by the seat of my pants.” In the executive suite I needed reams and reams of analysis. I couldn’t even tell whether a decision was conservative or risky – it depended on how you defined the terms. As a businessman, I often longed for the certainty and confidence I felt in the mountains.

What’s the difference between the two environments? The mountains were kind; the executive suite was wicked.

The concepts of “kind” and “wicked” come from Robin Hogarth’s book, Educating Intuition. Hogarth’s central idea is that we can teach ourselves to become more intuitive and more insightful. We have some control over the process, but the environment – whether kind or wicked — also plays a critical role.

Where does intuition come from? I wasn’t born with the ability to make good decisions in the mountains. I must have learned it from my experiences and from my teachers. I never set a goal to become more intuitive. My goal was simply to enjoy myself safely in wilderness environments. Creating an intuitive sense of the wilderness was merely a byproduct.

But why would I be better at wilderness intuition than at business intuition? According to Hogarth, it has to do with the nature, quality, and speed of the feedback.

In the mountains, I often got immediate feedback on my decisions. I could tell within a few minutes whether I had made a good decision or not. At most, I had to wait for a day or two. The feedback was also unambiguous. I knew whether I had gotten it right or not.

In a certain way, however, mountain decisions were difficult to evaluate. The act of making a decision meant that I couldn’t make comparisons. Let’s say I chose Trail A as opposed to Trail B. Let’s also assume that Trail A led directly to the summit with minimal obstacles. I might conclude that I had made a good decision. But did I? Trail B might have been even better.

So, in Hogarth’s terminology, mountain decision-making was kind in that it was clear, quick, and unambiguous. It was less kind in that making one decision eliminated the possibility of making useful comparisons. Compare this, for instance, to predicting that it will rain tomorrow. Making the prediction doesn’t, in any way, reduce the quality of the feedback.

Now compare the mountain environment to the business environment. The business world is truly wicked. I might not get feedback for months or years. In the meantime, I may have made many other decisions that might influence the outcome.

The feedback is ambiguous as well. Let’s say that we achieve good results. Was that because of the decision I made or because of some extraneous, or even random factors? And, like Trail A versus Trail B, choosing one course of action eliminates the possibility of making meaningful comparison.

It’s no wonder that I had better intuition in the mountains than in the executive suite. With the exception of the Trail A/Trail B issue, the mountains are a kind environment. The business world, on the other hand, offers thoroughly wicked feedback.

Could I ever develop solid intuition in the wicked world of business? Maybe. I’ll write more on how to train your intuition in the near future.

McKinsey and The Decision Villains

Just roll the dice.

In their book, Decisive, the Heath brothers write that there are four major villains of decision making.

Narrow framing – we miss alternatives and options because we frame the possibilities narrowly. We don’t see the big picture.

Confirmation bias – we collect and attend to self-serving information that reinforces what we already believe. Conversely, we tend to ignore (or never see) information that contradicts our preconceived notions.

Short-term emotion – we get wrapped up in the dynamics of the moment and make premature commitments.

Overconfidence – we think we have more control over the future than we really do.

A recent article in the McKinsey Quarterly notes that many “bad choices” in business result not just from bad luck but also from “cognitive and behavioral biases”. The authors argue that executives fall prey to their own biases and may not recognize when “debiasing” techniques need to be applied. In other words, executives (just like the rest of us) make faulty assumptions without realizing it.

Though the McKinsey researchers don’t reference the Heath brothers’ book, they focus on two of the four villains: the confirmation bias and overconfidence. They estimate that these two villains are involved in roughly 75 percent of corporate decisions.

The authors quickly summarize a few of the debiasing techniques – premortems, devil’s advocates, scenario planning, war games etc. – and suggest that these are quite appropriate for the big decisions of the corporate world. But what about everyday, bread-and-butter decisions? For these, the authors suggest a quick checklist approach is more appropriate.

The authors provide two checklists, one for each bias. The checklist for confirmation bias asks questions like (slightly modified here):

Have the decision-makers assembled a diverse team?

Have they discussed their proposal with someone who would certainly disagree with it?

Have they considered at least one plausible alternative?

The checklist for overconfidence includes questions like these:

What are the decision’s two most important side effects that might negatively affect its outcome? (This question is asked at three levels of abstraction: 1) inside the company; 2) inside the company’s industry; 3) in the macro-environment).

Answering these questions leads to a matrix that suggests the appropriate course of action. There are four possible outcomes:

Decide – “the process that led to [the] decision appears to have included safeguards against both confirmation bias and overconfidence.”

Reach out – the process has been tested for downside risk but may still be based on overly narrow assumptions. To use the Heath brothers’ terminology, the decision makers should widen their options with techniques like the vanishing option test.

Stress test – the decision process probably overcomes the confirmation bias but may depend on overconfident assumptions. Decision makers need to challenge these assumptions using techniques like premortems and devil’s advocates.

Reconsider – the decision process is open to both the confirmation bias and overconfidence. Time to re-boot the process.
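The two-checklist matrix boils down to a simple lookup: two yes/no answers map to one of four actions. Here's a minimal sketch; the function and parameter names are this sketch's own invention, not anything from the McKinsey article.

```python
# Illustrative sketch of the two-checklist decision matrix: the two
# boolean inputs represent whether each checklist was satisfied.
# Names below are hypothetical, not the article's terminology for code.

def recommended_action(confirmation_safeguarded: bool,
                       overconfidence_safeguarded: bool) -> str:
    """Map the two checklist outcomes to a recommended course of action."""
    if confirmation_safeguarded and overconfidence_safeguarded:
        return "Decide"        # process guards against both biases
    if overconfidence_safeguarded:
        return "Reach out"     # downside tested, but assumptions may be narrow
    if confirmation_safeguarded:
        return "Stress test"   # alternatives considered, assumptions untested
    return "Reconsider"        # open to both biases: re-boot the process

print(recommended_action(True, False))
```

The point of writing it out is the same as the checklist itself: it forces the two questions to be answered explicitly rather than assumed.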

The McKinsey article covers much of the same territory covered by the Heath brothers. Still, it provides a handy checklist for recognizing biases and assumptions that often go unnoticed. It helps us bring subconscious biases to conscious attention. In Daniel Kahneman’s terminology, it moves the decision from System 1 to System 2. Now let’s ask the McKinsey researchers to do the same for the two remaining villains: narrow framing and short-term emotion.

Brian Williams and Core Priorities

Values clarification.

In their book Decisive, Chip and Dan Heath write about the need to honor our core priorities when making decisions. They write that “An agonizing decision … is often a sign of a conflict among ‘core priorities’ … [T]hese are priorities that transcend the week or the quarter … [including] long-term goals and aspirations.”

To illustrate their point, the Heath brothers tell the story of Interplast*, the non-profit organization that recruits volunteer surgeons to repair cleft lips throughout the world. Interplast had some “thorny issues” that caused contentious arguments and internal turmoil.

One seemingly minor issue was whether surgeons could take their families with them as they traveled to remote locations. The argument in favor: the surgeons were volunteering their time and vacations, so it seemed only fair to let them bring their families. The argument against: families distract the surgeons from their work and make it more difficult to train local doctors.

The argument was intense and divisive. Finally, one board member said to another, “You know, the difference between you and me is you believe the customer is the volunteer surgeon and I believe the customer is the patient.”

That simple statement led Interplast to re-examine and clarify its core priorities. Ultimately, Interplast’s executives resolved that the patient is indeed the center of their universe. Once that was clarified, the decision was no longer agonizing – surgeons should not take their families along.

I thought of Interplast as I read the coverage of Brian Williams’ situation at NBC. In much the same way as Interplast, NBC had to clarify its core priorities. The basic question is whom does NBC serve? Is it more loyal to Brian Williams or to its viewing audience?

In normal times, NBC doesn’t have to answer this question. It can support and promote its anchor while also serving its audience. In a crisis, however, NBC is forced to choose. It’s the moment of truth. Does the company support the man in whom it has invested so much? Or does it protect its credibility with the audience?

Ultimately, NBC sought to protect its credibility. I was struck by what Lester Holt said on his first evening on the air: “Now if I may on a personal note say it is an enormously difficult story to report. Brian is a member of our family, but so are you, our viewers and we will work every night to be worthy of your trust.”

Holt’s statement suggests to me that NBC’s core priority is credibility with the audience. I certainly respect that. It also struck me as being very similar to the question Interplast asked itself.

Clarifying your core priorities is never a simple task. Indeed, it may take a crisis to force the issue. But once you complete the task, everything else is simpler. As my father used to say: Decisions are easy when values are clear.

Here are links to two articles from the New York Times that report on how NBC executives reached their decisions regarding Brian Williams. (Click here and here)

*Interplast has been renamed ReSurge International. Its website is here.

Good Decisions, Bad Outcomes

Bad Decision or Bad Outcome?

The New England Patriots won the Super Bowl because their opponents, the Seattle Seahawks, made a bad decision. That’s what millions of sports fans in the United States believe this week.

But was it a bad decision or merely a bad outcome? We often evaluate whether a decision was good or bad based on the result. But Ronald Howard, the father of decision analysis, says that you need to separate the two. You can only judge whether a decision was good or bad by studying how the decision was made.

Howard writes that the outcome has nothing to do with the quality of the decision process. A good decision can yield a good result or a bad result. Similarly, a bad decision can generate a good or bad outcome. Don’t assume that the decision causes the result. It’s not so simple. Something entirely random or unforeseen can turn a good decision into a bad result or vice versa.

But, as my boss used to say, we only care about results. So why bother to study the decision process? We should study only what counts – the result of the process, not the process itself.

Well, … not so fast. Let’s say we make a decision based entirely on emotion and gut feel. Let’s also assume that things turn out just great. Did we make a good decision? Maybe. Or maybe we just got lucky.

During the penny stock market boom in Denver, I decided to invest $500 in a wildcat oil company whose stock was selling for ten cents a share. In two weeks, the stock tripled to 30 cents per share. I had turned my $500 stake into $1,500. I was a genius! (There’s a touch of confirmation bias here).

What’s wrong with this story? I assumed that I had made a good decision because I got a good outcome. I must be smart. But, really, I was just lucky. And you probably know the rest of the story. Assuming that I was a smart stock picker, I re-invested the $1,500 and – over the next six months – lost it all.

Today, when I evaluate stocks with the aim of buying something, I repeat a little mantra: “I am not a genius. I am not a genius.” It creates a much better decision process.

I was lucky and got a good outcome from a bad decision process. The Seattle Seahawks, on the other hand, got the opposite. From what I’ve read, they had a good process and made a good decision. They got a horrendous result. Even though their fans will vilify the Seahawks’ coaches, I wouldn’t change their decision process.

And that’s the point here. If you want good decisions, study the decision process and ignore the outcome. You’ll get better processes and better decisions. In the long run, that will tilt the odds in your favor. Chance favors the thoughtful decision maker.
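A quick simulation makes the "tilt the odds" point concrete. The win rates below are made up purely for illustration: a sound process that wins 60% of the time can still lose any single bet, but over many decisions it reliably beats a gut-feel process that wins only 40% of the time.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def run_policy(win_prob, payoff=1, loss=-1, trials=10_000):
    """Total result of repeating the same decision many times."""
    total = 0
    for _ in range(trials):
        total += payoff if random.random() < win_prob else loss
    return total

# Hypothetical odds: a thoughtful process wins 60% of the time, a
# gut-feel process only 40%. Individual outcomes vary either way,
# but the aggregate result favors the better process.
good_process = run_policy(win_prob=0.60)
bad_process = run_policy(win_prob=0.40)

print(good_process, bad_process)
```

Any single trial can reward the bad process or punish the good one, which is exactly why judging by one outcome misleads; the aggregate tells the real story.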

What makes for a good decision process? I’ll write more about that in the coming weeks. In the meantime, you might like two video clips from Roch Parayre, a professor at the University of Pennsylvania, who explains why we separate decision processes from decision results. (Click here and here).

The Doctor Won’t See You Now

Shouldn’t you be at a meeting?

If you were to have a major heart problem – acute myocardial infarction, heart failure, or cardiac arrest — which of the following scenarios would you prefer?

Scenario A — the failure occurs during the heavily attended annual meeting of the American Heart Association, when thousands of cardiologists are away from their offices; or

Scenario B — the failure occurs during a time when there are no national cardiology meetings and fewer cardiologists are away from their offices.

If you’re like me, you’ll probably pick Scenario B. If I go into cardiac arrest, I’d like to know that the best cardiologists are available nearby. If they’re off gallivanting at some meeting, they’re useless to me.

But we might be wrong. According to a study published in JAMA Internal Medicine (December 22, 2014), outcomes are generally better under Scenario A.

The study, led by Anupam B. Jena, looked at some 208,000 heart incidents that required hospitalization from 2002 to 2011. Of these, slightly more than 29,000 patients were hospitalized during national meetings. Almost 179,000 patients were hospitalized during times when no national meetings were in session.

And how did they fare? The study asked two key questions: 1) how many of these patients died within 30 days of the incident, and 2) were there differences between the two groups? Here are the results:

  • Heart failure – statistically significant differences – 17.5% of heart failure patients in Scenario A died within 30 days versus 24.8% in Scenario B. The probability of this happening by chance is less than 0.1%.
  • Cardiac arrest — statistically significant differences – 59.1% of cardiac arrest patients in Scenario A died within 30 days versus 69.4% in Scenario B. The probability of this happening by chance is less than 1.0%.
  • Acute myocardial infarction – no statistically significant differences between the two groups. (There were differences but they may have been caused by chance).

The general conclusion: “High-risk patients with heart failure and cardiac arrest hospitalized in teaching hospitals had lower 30-day mortality when admitted during dates of national cardiology meetings.”

It’s an interesting study but how do we interpret it? Here are a few observations:

  • It’s not an experiment – we can only demonstrate cause-and-effect using an experimental method with random assignment. But that’s impossible in this case. The study certainly demonstrates a correlation but doesn’t tell us what caused what. We can make educated guesses, of course, but we have to remember that we’re guessing.
  • The differences are fairly small – we often misinterpret the meaning of “statistically significant”. It sounds like we found big differences between A and B; the differences, after all, are “significant”. But the term refers to probability not the degree of difference. In this case, we’re 99.9% sure that the differences in the heart failure groups were not caused by chance. Similarly, we’re 99% sure that the differences in the cardiac arrest groups were not caused by chance. But the differences themselves were fairly small.
  • The best guess is overtreatment – what causes these differences? The best guess seems to be that cardiologists – when they’re not off at some meeting – are “overly aggressive” in their treatments. The New York Times quotes Anupam Jena: “…we should not assume … that more is better. That may not be the case.” Remember, however, that this is just a guess. We haven’t proven that overtreatment is the culprit.
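The kind of calculation behind those significance claims can be sketched with a two-proportion z-test on the reported heart-failure rates. The subgroup sizes below are illustrative guesses, not the study's actual counts, so treat the result as a demonstration of the method rather than a reproduction of the paper.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions,
    using the pooled rate under the null hypothesis of equal mortality."""
    x1, x2 = p1 * n1, p2 * n2              # implied numbers of deaths
    p_pool = (x1 + x2) / (n1 + n2)         # pooled 30-day mortality rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Reported heart-failure mortality rates (17.5% vs 24.8%); the group
# sizes here are hypothetical, NOT the study's subgroup counts.
z = two_proportion_z(0.175, 4_000, 0.248, 25_000)
print(round(z, 2))
```

Note what the statistic does and doesn't say: a large |z| means the gap is very unlikely to be chance, but it says nothing about whether a 7-percentage-point gap is practically large. That's the significance-versus-effect-size distinction the bullet above describes.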

It’s a good study with interesting findings. But what should we do about them? Should cardiologists change their behavior based on this study? Translating a study’s findings into policies and protocols is a big jump. We’re moving from the scientific to the political. We need a heavy dose of critical thinking. What would you do?
