
Don’t Call Me Surely

Surely, this isn’t happening

I’ve written before about mental shortcuts called heuristics – rules of thumb that help us make a great majority of our decisions. Most of the time they work brilliantly. We recognize patterns subconsciously and make smart decisions automatically. Our conscious brain is free to deal with more difficult tasks. It’s like automatic pilot or highway hypnosis.

While heuristics help us get through the day, they can also lead us into serious mistakes. (For a catalog of mistakes, click here, here, here, and here). It’s a fairly long list, and surely most of us have made most of them. The more aware we are of our heuristics, the better we can teach ourselves to avoid the errors inherent in automatic thinking. We can learn not to trick ourselves.

But what about when someone else tries to trick us? Just like heuristics, the tools of disputation can lead us to believe things that just aren’t true (or disbelieve things that are true). You can always look to the evidence but you may be presented with evidence that’s biased or simply wrong. However, just as you can learn to recognize the ways that you deceive yourself, you can also learn to recognize the ways others deceive you.

Peter Facione identifies 17 ways in which our own heuristics can deceive us. Richard Paul and Linda Elder, on the other hand, identify 44 “foul ways to win an argument”. Surely the fact that there are so many more ways to deceive others (as opposed to ourselves) indicates that humans are innately devious. We’re not to be trusted.

Over time, I plan to catalog all 44 methods of deception so we can compare the ways we trick others to the ways we trick ourselves. I wonder which causes which. Do others deceive us or do they simply confirm our own self-deceptions?

So, where to start? Well, I’ve already slipped the first one by you. It’s what I call the handily hidden assumption, and it’s often preceded by the word “surely”. When a person says, “Surely you believe…” or “Surely we can agree…”, it’s time to suspect trickery. The person is trying to hide an assumption so you won’t question it. The conclusion may flow logically from the assumption, but the assumption itself may be fatally flawed. If you ignore the assumption, the argument may sound logical and convincing.

In logic, this is known as begging the question. In common parlance, we often use begging the question to mean raising the question. In fact, however, it means exactly the opposite: the question goes begging. Nobody asks whether the assumption is correct; nobody pays it the respect it deserves.

So be careful when you’re debating a point with a slippery sophist. Or, for that matter, when you’re reading my website. And don’t call me surely.


When You Have a Heart Attack, Don’t Do It in a Crowd

What emergency?

Some years ago, Suellen, Elliot, and I flew from Los Angeles to Sydney, Australia — a long, somewhat dreary, overnight flight with several hundred people on a jumbo jet. The flight was smooth and uneventful with one bizarre exception. About six hours into the flight — as we were all trying to sleep — the plane’s oxygen masks suddenly deployed and fell into our laps. Nothing seemed wrong. There was no noise or bumping or vibration or swerving. Just oxygen masks in our laps. We woke up, looked around at the other passengers, concluded that nothing was wrong … and ignored the masks.

It turned out that we were right. The pilot announced that someone had “pushed the wrong button” in the cockpit and released the masks. He advised us to ignore the masks, which we were already doing quite successfully. Later in the flight, I spoke with a flight attendant who told me she was shocked that none of the passengers had followed the “proper procedures” and donned the masks. I said that it seemed obvious that it wasn’t an emergency. She asked, “How did you know that?” I said, “By looking at the other passengers. Nobody was scared.”

I was reminded of this incident as I was re-reading (yet again) a chapter in Robert Cialdini’s book, Influence. Our little adventure on the airplane was a classic example of social proof. When we’re in an ambiguous situation and not sure what’s happening, one of the first things we do is to look at other people. If they’re panicking, then maybe we should too. If they’re calm, we can relax.

Cialdini points out that social proof affects us even when we’re aware of it. The example? Laugh tracks on TV. We all claim to dislike laugh tracks and also claim that they have no effect on us. But experimental research suggests otherwise. When people watch a TV show with a laugh track, they laugh longer and harder than other people watching the same show without the track. We realize that we’re being manipulated but we still succumb. According to Cialdini, the effect is more pronounced with bad jokes than with good ones. If so, Seth MacFarlane clearly needed a laugh track at this year’s Academy Awards.

Cialdini refers to one of the problems of social proof as pluralistic ignorance. I looked around at other people on the airplane and they seemed calm and unfazed. At the same time, they were looking at me and I seemed … well, calm and unfazed. As I looked at them, I thought, “No need to get excited”. As they looked at me, they thought the same. None of us knew what was really going on but we were influencing each other to ignore a potentially life-threatening emergency.

Cialdini argues that pluralistic ignorance makes “safety in numbers” meaningless. (See also my post on the risky shift). Cialdini cites research on staged emergencies — a person apparently has an epileptic seizure. The person is helped “…85 percent of the time when there was a single bystander present but only 31 percent of the time with five bystanders present.” A single bystander seems to assume “if in doubt, help out”. Multiple bystanders look at each other and conclude that there’s no emergency.
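Just how meaningless is “safety in numbers”? Here’s a back-of-envelope sketch (my own arithmetic, not Cialdini’s). If each bystander decided to help independently, a group of five should outperform a single bystander. Working backwards from the figures above, independence would imply that each of the five is willing to help only about 7 percent of the time, down from 85 percent for a lone bystander:

```python
# If each of n bystanders helped independently with probability p, the chance
# that at least one of them helps would be 1 - (1 - p)**n. Given a group's
# overall helping rate, we can solve for the per-bystander rate it implies.

def implied_individual_rate(group_rate: float, n: int) -> float:
    """Per-bystander helping probability implied by a group's overall rate."""
    return 1 - (1 - group_rate) ** (1 / n)

print(implied_individual_rate(0.85, 1))  # 0.85: a lone bystander helps 85% of the time
print(implied_individual_rate(0.31, 5))  # ~0.07: each of five helps only ~7% of the time
```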

So, what to do? If you have to have a heart attack, do it when only one other person is around.

Sinkholes, Icy Roads, and Chenesium

This is nothing. I’m much more worried about sinkholes.

Last week, a man in Florida was swallowed by a sinkhole while he slept. This week, I’m more worried about sinkholes in Florida than I am about driving on icy roads in Colorado. Is that logical?

It’s not logical but it’s very real. Sometimes a story is so vivid, so unexpected, so emotionally fraught, and so available that it dominates our thinking. Even though it’s extremely unlikely, it becomes possible, maybe even probable in our imaginations. As Daniel Kahneman points out, “The world in our heads is not a precise replica of reality.”

What makes a phenomenon more real in our heads than it is in reality? Several things. It’s vivid — it creates a very clear image in our mind. It’s creepy — the vivid image is unpleasant and scary. It’s a “bad” death as opposed to a “good” death. We read about deaths every day. When we read about a kindly old nonagenarian dying peacefully after a lifetime of doing good works, it seems natural and honorable. It’s a good death. When we read about someone killed in the prime of life in bizarre or malevolent circumstances, it’s a “bad” death. A bad death is much more vivid than a good death.

But what really makes an image dominate our minds is availability. How easy is it to bring an instance to mind? If the thought is readily available to us, we deem it to be likely. What’s readily available? Anything that’s in the popular media and the topic of discussion with friends and colleagues. If your colleagues around the water cooler say, “Hey, did you hear about the guy in the sinkhole?” you’ve already begun to blow it out of proportion.

Availability can also compound itself in what Kahneman calls an availability cascade. The story itself becomes the story. Suppose that a suspicious compound — let’s call it chenesium — is found in the food supply. Someone writes that chenesium causes cancer in rats when administered in huge doses. Plus, it’s a vividly scary form of cancer — it affects the eyeballs and makes you look like a zombie. People start writing letters to the editor about the grave danger. Now it’s in the media. People start marching on state capitols, demanding action. The media write about the marching. People read about the marching and assume that where there’s smoke, there’s fire. The Surgeon General issues a statement saying the danger is minimal. But the populace — now worked into a frenzy — denounces her as a lackey of chenesium producers. Note that the media are no longer writing about chenesium. Rather, they’re writing about the controversy surrounding chenesium. The story keeps growing because it’s a good story. It’s a perfect storm.

So, what to do? Unfortunately, facts don’t matter a whole lot by this point. As Kahneman notes (quoting Jonathan Haidt), “The emotional tail wags the rational dog.” The only thing to do is to let it play out … sooner or later, another controversy will arise to take chenesium’s place.

At the personal level, we can spare ourselves a lot of worry by pondering the availability bias and remembering that facts do matter. We can look up the probability of succumbing to a sinkhole. If we do, we’ll realize that the danger is vanishingly small. There’s nothing to worry about. Still, I’m not going to Florida anytime soon.
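If you do want to check, here’s the kind of back-of-envelope arithmetic I have in mind. The numbers below are purely illustrative assumptions (I haven’t looked up actual sinkhole statistics), but even a deliberately pessimistic guess puts the risk in “vanishingly small” territory:

```python
# Hypothetical, illustrative numbers only; not actual statistics.
hypothetical_deaths_per_year = 2   # assume sinkholes claimed two lives a year
population = 19_000_000            # roughly Florida's population

annual_risk = hypothetical_deaths_per_year / population
print(f"Annual risk per resident: about 1 in {round(1 / annual_risk):,}")
# -> about 1 in 9,500,000
```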

Critical Thinking. What’s That?

No, thanks. I don’t want my doc to misdiagnose me.

I visited my doctor the other day. Expecting a bit of a wait, I took along Critical Thinking, a textbook for one of my courses. When I walked into the doctor’s office, he read the title and said, “Hmmm … critical thinking. What’s that?” I thought, “My doc just asked me what critical thinking is. This can’t be a good sign.”

I quelled my qualms, however, and explained what I teach in critical thinking class. He brightened up immediately and said, “Oh, that’s just like How Doctors Think.” I pulled out my Kindle and downloaded the book immediately. Understanding how doctors think might actually help me get better medical care.

So how do they think? Well, first they use shortcuts. They generally have way too much information to deal with and use rules of thumb called heuristics. Sound familiar? I’ve written several articles about rules of thumb and how they can lead us astray. (Just look for “thumb” in this website’s Blog Search box). So, the first answer is that doctors think just like us. Is that a good thing? Here are some errors that doctors commonly make:

Representation error — the patient is a picture of health. It’s not likely that those chest pains are a cause for concern. With this error, the doctor identifies a prototype that represents a cluster of characteristics. If you fit the prototype, fine. If not, the doctor may be diagnosing the prototype rather than you.

Attribution error — this often happens with negative stereotypes. The patient is disheveled and smells of booze. Therefore, the tremors are likely caused by alcohol rather than a hereditary disease that causes copper accumulation in the liver. That may be right most of the time but when it’s wrong, it’s really wrong.

Framing errors — I’ve read the patient’s medical charts and I see that she suffers from XYZ. Therefore, we’ll treat her for XYZ. The medical record forms a frame around the patient. Sometimes, doctors forget to step outside the frame and ask about other conditions that might have popped up. Sometimes the best approach is simply to say, “Let me tell you my story.”

Confirmation bias — we see things that confirm our beliefs and don’t see (or ignore) things that don’t. We all do it.

Availability bias — if you’re the 7th patient I’ve seen today and the first six all had the flu, there’s a good chance that I’ll diagnose you with flu, too. It just comes to mind easily; it’s readily available.

Affective bias — the doctor’s emotions get in the way. Sometimes these are negative emotions. (Tip: if you think your doctor feels negatively about you, get a new doctor). But positive emotions can also be harmful. I like you and I don’t want to cause you pain. Therefore, I won’t order that painful, embarrassing test — the one that might just save your life.

Sickest patient syndrome — doctors like to succeed just like anyone else does. With very sick patients, they may subconsciously conclude that they can’t be successful … and do less than their best.

The list goes on … but my space doesn’t. When I started the book, I thought it was probably written for doctors. But the author, Jerome Groopman, says it’s really for us laypeople. By understanding how doctors think, we can communicate more effectively with our physicians and help them avoid mistakes. It’s a good thought and a fun read.

Thinking Outside the Frame

Make way!

An ambulance, racing to the hospital, siren blaring, approaches an intersection. At the same time, from a different direction, a fire truck, racing to a fire, approaches the same intersection. From a third direction, a police car screeches toward the same intersection, responding to a burglary-in-progress call. From a fourth direction, a U.S. Mail truck trundles along to the same intersection. All four vehicles arrive at the same time at the same intersection controlled by a four-way stop sign. Who has the right of way?

The way I just told this story sets a frame around it that may (or may not) guide your thinking. You can look at the story from inside the frame or outside it. If you look inside the frame, you’ll pursue the internal logic of the story. The three emergency vehicles are all racing to save people — from injury, from fire, or from burglary. Which one of those is the worst case? Which one deserves to go first? It’s a tough call.

On the other hand, you could look at the story from outside the frame. Instead of pursuing the internal logic, you look at the structure of the story. Rather than getting drawn into the story, you look at it from a distance. One of the first things you’ll notice is that three of the vehicles belong to the same category — emergency vehicles in full crisis mode. The fourth vehicle is different — it’s a mail truck. Could that be a clue? Indeed it is. The “correct” answer to this somewhat apocryphal story is that the mail truck has the right of way. Why? It’s a federal government vehicle and takes precedence over the other vehicles, which belong to local governments.

In How Doctors Think, Jerome Groopman describes how doctors think inside the frame. A young woman is diagnosed with anorexia and bulimia. Many years later, she’s doing poorly and losing weight steadily. Her medical file is six inches thick. Each time she visits a new doctor, the medical file precedes her. The new doctor reads it, discovers that she’s bulimic and anorexic, and treats her accordingly. Finally, a new doctor sets aside her record, pulls out a blank sheet of paper, looks at the woman, and says, “Tell me your story.” In telling her own story, the woman gives important clues that lead to a new diagnosis — she’s gluten-intolerant. The new doctor stepped outside the frame of the medical record and gained valuable insights.

According to Franco Moretti, similar frames exist in literature — they’re called books. Traditional literary analysis demands that you read books and study them very closely. Moretti, an Italian literary scholar, calls this close reading — it’s studying literature inside the frame set by the book. Moretti advocates a different approach that he calls distant reading: “understanding literature not by studying particular texts, but by aggregating and analyzing massive amounts of data.” Only by stepping back and reading outside the frame can we understand “…the true scope and nature of literature.”
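To make “distant reading” a bit more concrete, here’s a toy sketch. It’s my illustration, not Moretti’s actual method, and the two-text “corpus” is obviously hypothetical; the point is simply that you compute aggregate patterns across many texts instead of interpreting any single one closely:

```python
from collections import Counter

# Hypothetical corpus: in real distant reading this would be thousands of texts.
corpus = {
    "Novel A": "the sea the ship the storm the voyage",
    "Novel B": "the drawing room the letter the marriage the estate",
}

# Aggregate statistics across the corpus, rather than a close reading of one book.
for title, text in corpus.items():
    top_words = Counter(text.lower().split()).most_common(3)
    print(title, top_words)
```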

In each of these examples we have a frame. In the first story, I set the frame for you. It’s a riddle and I was trying to trick you. In the second story, the patient’s medical record creates the frame. In the third, the book sets the frame. In each case, we can enter the frame and study the problem closely or we can step back and observe the structure of the problem. It’s often a good idea to step inside the frame — after all, you usually do want your doctor to read your medical file. But it’s also useful to step outside the frame, where you can find clues that you would never find by studying the internal logic of the problem. In fact, I think this approach can help us understand “big” predictions like the cost of healthcare. More on that next Monday.

