How Do You Know If Something Is True?

True or False

I used to teach research methods. Now I teach critical thinking. Research is about creating knowledge. Critical thinking is about assessing knowledge. In research methods, the goal is to create well-designed studies that allow us to determine whether something is true or not. A well-designed study, even if it finds that something is not true, adds to our knowledge. A poorly designed study adds nothing. The emphasis is on design.

In critical thinking, the emphasis is on assessment. We seek to sort out what is true, not true, or not proven in our info-sphere. To succeed, we need to understand research design. We also need to understand the logic of critical thinking — a stepwise progression through which we can discover fallacies and biases and self-serving arguments. It takes time. In fact, the first rule I teach is “Slow down. Take your time. Ask questions. Don’t jump to conclusions.”

In both research and critical thinking, a key question is: how do we know if something is true? Further, how do we know if we’re being fair-minded and objective in making such an assessment? We discuss levels of evidence that are independent of our subjective experience. Over the years, thinkers have used a number of different schemes to categorize evidence and evaluate its quality. Today, the research world seems to be coalescing around a classification of evidence that has been evolving since the early 1990s as part of the movement toward evidence-based medicine (EBM).

The classification scheme typically has four levels, with 4 being the weakest and 1 being the strongest; the strongest level is often split into sublevels 1b and 1a. From weakest to strongest, here they are:

  • 4 — evidence from a panel of experts. There are certain rules about such panels, the most important of which is that a panel consists of more than one person. Level 4 may also contain what are known as observational studies without controls.
  • 3 — evidence from case studies, observed correlations, and comparative studies. (It’s interesting to me that many of our business schools build their curricula around case studies — fairly weak evidence. I wonder if you can’t find a case to prove almost any point.)
  • 2 — quasi-experiments — well-designed but non-randomized controlled trials. You manipulate the independent variable in at least two groups (control and experimental). That’s a good step forward. Since subjects are not randomly assigned, however, a hidden variable could be the cause of any differences found — rather than the independent variable.
  • 1b — experiments — controlled trials with randomly assigned subjects. Random assignment isolates the independent variable. Any effects found must be caused by the independent variable. This is the minimum proof of cause and effect.
  • 1a — meta-analysis of experiments. Meta-analysis is simply research on research. Let’s say that researchers in your field have conducted thousands of experiments on the effects of using electronic calculators to teach arithmetic to primary school students. Each experiment is a data point in a meta-analysis. You categorize all the studies and find that an overwhelming majority showed positive effects. This is the most powerful argument for cause and effect.
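To make the hierarchy concrete, here is a minimal sketch in Python. It is my illustration, not part of the EBM scheme itself: it ranks the evidence offered for a claim and does a crude vote count across studies, the simplest version of the meta-analysis described above. All names and numbers are hypothetical.

```python
from enum import IntEnum

# Higher value = stronger evidence (Level 1a strongest, Level 4 weakest).
class Evidence(IntEnum):
    EXPERT_PANEL = 1      # Level 4
    CASE_STUDY = 2        # Level 3
    QUASI_EXPERIMENT = 3  # Level 2
    RCT = 4               # Level 1b: randomized controlled trial
    META_ANALYSIS = 5     # Level 1a: meta-analysis of randomized trials

def strongest_evidence(levels):
    """Return the strongest level of evidence offered for a claim."""
    return max(levels)

def vote_count(study_results):
    """Crude meta-analysis: the share of studies that found a positive effect."""
    positive = sum(1 for found_effect in study_results if found_effect)
    return positive / len(study_results)

# Hypothetical claim: calculators help primary students learn arithmetic.
support = [Evidence.CASE_STUDY, Evidence.QUASI_EXPERIMENT, Evidence.RCT]
print(strongest_evidence(support).name)  # RCT

# 1,000 imaginary experiments, most finding a positive effect.
results = [True] * 870 + [False] * 130
print(f"{vote_count(results):.0%} of studies found a positive effect")
```

A real meta-analysis weights studies by size and quality rather than simply counting votes, but the vote count captures the basic idea of research on research.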

You might keep this guide in mind as you read your daily newspaper. Much of the “evidence” that’s presented in the media today doesn’t even reach the minimum standards of Level 4. It’s simply opinion. Stating opinions is fine, as long as we understand that they don’t qualify as credible evidence.

Sinkholes, Icy Roads, and Chenesium

This is nothing. I’m much more worried about sinkholes.

Last week, a man in Florida was swallowed by a sinkhole while he slept. This week, I’m more worried about sinkholes in Florida than I am about driving on icy roads in Colorado. Is that logical?

It’s not logical but it’s very real. Sometimes a story is so vivid, so unexpected, so emotionally fraught, and so available that it dominates our thinking. Even though it’s extremely unlikely, it becomes possible, maybe even probable in our imaginations. As Daniel Kahneman points out, “The world in our heads is not a precise replica of reality.”

What makes a phenomenon more real in our heads than it is in reality? Several things. It’s vivid — it creates a very clear image in our mind. It’s creepy — the vivid image is unpleasant and scary. It’s a “bad” death as opposed to a “good” death. We read about deaths every day. When we read about a kindly old nonagenarian dying peacefully after a lifetime of doing good works, it seems natural and honorable. It’s a good death. When we read about someone killed in the prime of life in bizarre or malevolent circumstances, it’s a “bad” death. A bad death is much more vivid than a good death.

But what really makes an image dominate our minds is availability. How easy is it to bring an instance to mind? If the thought is readily available to us, we deem it to be likely. What’s readily available? Anything that’s in the popular media and the topic of discussion with friends and colleagues. If your colleagues around the water cooler say, “Hey, did you hear about the guy in the sinkhole?” you’ve already begun to blow it out of proportion.

Availability can also compound itself in what Kahneman calls an availability cascade. The story itself becomes the story. Suppose that a suspicious compound — let’s call it chenesium — is found in the food supply. Someone writes that chenesium causes cancer in rats when administered in huge doses. Plus, it’s a vividly scary form of cancer — it affects the eyeballs and makes you look like a zombie. People start writing letters to the editor about the grave danger. Now it’s in the media. People start marching on state capitals, demanding action. The media write about the marching. People read about the marching and assume that where there’s smoke, there’s fire. The Surgeon General issues a statement saying the danger is minimal. But the populace — now worked into a frenzy — denounces her as a lackey of chenesium producers. Note that the media are no longer writing about chenesium. Rather, they’re writing about the controversy surrounding chenesium. The story keeps growing because it’s a good story. It’s a perfect storm.

So, what to do? Unfortunately, facts don’t matter a whole lot by this point. As Kahneman notes (quoting Jonathan Haidt), “The emotional tail wags the rational dog.” The only thing to do is to let it play out … sooner or later, another controversy will arise to take chenesium’s place.

At the personal level, we can spare ourselves a lot of worry by pondering the availability bias and remembering that facts do matter. We can look up the probability of succumbing to a sinkhole. If we do, we’ll realize that the danger is vanishingly small. There’s nothing to worry about. Still, I’m not going to Florida anytime soon.

Can You Frame Yourself?

No, please. Not again!

My mother was a great lady but not a great cook. TV dinners were a popular option at our house when I was a kid. If we weren’t eating TV dinners we might have to eat … frozen fish sticks. I can still smell the oily odor of limp fish sticks frying up in the little Sunbeam electric skillet. It permeated everything. I grew up in a clean-your-plate family so, ultimately, I had to choke down those mysterious fish parts. Then, without fail, I raced to the bathroom and threw up.

How does one think critically about such a situation? In our family, we quickly ruled out several non-causes. Everyone else in the family ate fish sticks and didn’t get sick. Therefore, it couldn’t be the fish sticks. Every serving of fish sticks made me sick, so we couldn’t blame it on just one box that had gone bad. Clearly, I must be allergic to fish.

So, from the age of about six to 23, I ate no fish at all. No trout or tuna or herring or salmon or swordfish. After college, I moved to Ecuador and, from time to time, took vacations to the beach. On one such vacation, I found that there was nothing to eat locally but fish. Finally, I sat in a restaurant, braced myself for the worst, and took a bite of fish. I thought, “Wow, this is really good!”  I wasn’t allergic to fish at all … just greasy, stinky frozen fish sticks.

I had been framing myself. I made an assumption about myself based on faulty evidence and stuck with it for almost 17 years. I never thought to re-check the original assumption or re-validate the evidence. I never tried to think outside the frame. Over the past several weeks, I’ve written about the issues of what might be called “external framing” in this blog. Here are some examples:

  • Police pick up a suspect in a crime. They’re pretty sure he did it so they’re especially attuned to any evidence that incriminates him. At the same time, they ignore evidence that might point to someone else. The suspect is framed even without malicious intent.
  • A doctor reads your medical records and discovers that you suffer from XYZ. So, he treats you for XYZ without listening to your current complaints. The medical records framed you and may have prevented the doctor from seeing the whole picture.

In these cases, one person is framing another. The same thing can happen to abstract issues. I’ve noticed that Republicans and Democrats frame the same issue in very different ways.

What we forget sometimes is that we can also frame ourselves. I used to teach a course in research methods that included a mild dose of inferential statistics. Many of my students were women in their 30s and 40s who were returning to school to re-start their careers. Many of them were very nervous about the statistics. They believed they weren’t good at math. As it turned out, they did just fine. They weren’t bad at math; they had just framed themselves into believing they were bad at math.

If you believe something about yourself, you might just want to poke at it a bit. Sometimes you may just be wrong. On other occasions, you may be right — I’m still terrible at anything having to do with music. Still, it’s worth asking the question. Otherwise, you may miss out on some very tasty fish.

Pascal’s Wager and the Mediterranean Diet

Wanna bet?

I like to think of Blaise Pascal (1623–1662), the French mathematician, as the western world’s first practitioner of Twitter. His Pensées were brief, enigmatic thoughts about mathematics, religion, and philosophy. Collected after his death, they read like the tweets of the 17th century (though they were intended to be a much more comprehensive defense of religion).

In the Pensées, Pascal made his famous wager. We all bet with our lives on whether God exists or not. We can live as if God exists and practice the traditional forms and virtues of religion. Or we can do the opposite and ignore our religious duties, assuming that God does not exist. If we live as if God exists and we’re right then the rewards are infinite. If we’re wrong, the loss is finite — indeed it’s quite small. Thus, Pascal argues, it’s only rational to live a pious life. The wager is heavily stacked to that side.
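At bottom, the wager is an expected-value calculation. Here is a minimal sketch of that arithmetic in Python; it is my illustration, not Pascal's, and the probability and the finite cost are made-up numbers. The point is that any nonzero chance of an infinite payoff swamps a small, finite cost.

```python
# A tiny expected-value sketch of Pascal's wager.
# The probability and the finite cost are hypothetical, chosen only to illustrate.
INFINITE_REWARD = float("inf")   # payoff of piety if God exists
FINITE_COST = -1.0               # small cost of piety if God does not exist
P_GOD = 1e-6                     # any nonzero probability you assign to God existing

def expected_value(payoff_if_god, payoff_if_not, p_god):
    """Probability-weighted payoff of a choice across the two possible worlds."""
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_not

live_piously = expected_value(INFINITE_REWARD, FINITE_COST, P_GOD)  # inf
ignore_it_all = expected_value(0.0, 0.0, P_GOD)                     # 0.0

print(live_piously, ignore_it_all)  # the bet is heavily stacked toward piety
```

The same payoff-table reasoning turns up in everyday bets, including the Mediterranean diet bet below: weigh a large potential gain against a small, fairly certain cost.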

In today’s world, we don’t spend much time wagering on God’s existence (perhaps we should) but we make lots of bets that are much like Pascal’s. The cumulative effects are enormous.

For instance, consider the Mediterranean diet. The diet — which features olive oil, nuts, vegetables, and fish but not much red meat — has been on our radar for a number of years now. Epidemiologists observed that people who live near the Mediterranean have a much lower rate of heart disease than would be expected. Maybe it’s the diet. Or maybe it’s something else, like religion, culture, family structure, heredity, etc.

So the evidence for the positive health effects of the diet was an observed correlation. We could see that the diet was correlated (inversely) with heart disease but we couldn’t be sure that it caused the lower rates. Maybe a hidden, third factor was in play. Still, we could make a version of Pascal’s wager: eat as if the Mediterranean diet does reduce the risk of heart disease. If we’re right, we live longer. If we’re wrong … well, we’ve missed out on a few tasty bacon cheeseburgers. Would you take the bet?

Last week, the level of evidence changed dramatically. A five-year, randomized Spanish study of nearly 7,500 people was published. People who followed the Mediterranean diet had 30% fewer heart attacks, strokes, and deaths from heart disease than the control group. Since the study used the experimental method, we can now talk about cause and effect, not just correlation. Now will you take the bet?

Of course, there’s still some doubt about the results. It’s only one study; it hasn’t been replicated yet. It was conducted in Spain. Maybe we wouldn’t get the same results in America. Maybe there were methodological or measurement errors. Still, the evidence seems pretty strong and it points toward a version of Pascal’s classic wager.

We all make versions of Pascal’s wager every day but we rarely think about them. Perhaps it’s time that we pay more attention. Perhaps it’s time to think about what levels of evidence we need before we take the bet. Is correlation enough or do we need to prove cause and effect? Life is uncertain but perhaps we can make it more comfortable by thinking — and betting — logically. While you’re pondering that, I’m going to drizzle some olive oil over a bowl of walnuts. Drop by if you’re hungry.

Critical Thinking. What’s That?

No, thanks. I don’t want my doc to mis-diagnose me.

I visited my doctor the other day. Expecting a bit of a wait, I took along Critical Thinking, a textbook for one of my courses. When I walked into the doctor’s office, he read the title and said, “Hmmm … critical thinking. What’s that?” I thought, “My doc just asked me what critical thinking is. This can’t be a good sign.”

I quelled my qualms, however, and explained what I teach in critical thinking class. He brightened up immediately and said, “Oh, that’s just like How Doctors Think.” I pulled out my Kindle and downloaded the book immediately. Understanding how doctors think might actually help me get better medical care.

So how do they think? Well, first they use shortcuts. They generally have way too much information to deal with and use rules of thumb called heuristics. Sound familiar? I’ve written several articles about rules of thumb and how they can lead us astray. (Just look for “thumb” in this website’s Blog Search box). So, the first answer is that doctors think just like us. Is that a good thing? Here are some errors that doctors commonly make:

Representation error — the patient is a picture of health. It’s not likely that those chest pains are a cause for concern. With this error, the doctor identifies a prototype that represents a cluster of characteristics. If you fit the prototype, fine. If not, the doctor may be diagnosing the prototype rather than you.

Attribution error — this often happens with negative stereotypes. The patient is disheveled and smells of booze. Therefore, the tremors are likely caused by alcohol rather than a hereditary disease that causes copper accumulation in the liver. That may be right most of the time but when it’s wrong, it’s really wrong.

Framing errors — I’ve read the patient’s medical charts and I see that she suffers from XYZ. Therefore, we’ll treat her for XYZ. The medical record forms a frame around the patient. Sometimes, doctors forget to step outside the frame and ask about other conditions that might have popped up. Sometimes the best approach is simply to say, “Let me tell you my story.”

Confirmation bias — we see things that confirm our beliefs and don’t see (or ignore) things that don’t. We all do it.

Availability bias — if you’re the seventh patient I’ve seen today and the first six all had the flu, there’s a good chance that I’ll diagnose you with the flu, too. It just comes to mind easily; it’s readily available.

Affective bias — the doctor’s emotions get in the way. Sometimes these are negative emotions. (Tip: if you think your doctor feels negatively about you, get a new doctor). But positive emotions can also be harmful. I like you and I don’t want to cause you pain. Therefore, I won’t order that painful, embarrassing test — the one that might just save your life.

Sickest patient syndrome — doctors like to succeed just like anyone else does. With very sick patients, they may subconsciously conclude that they can’t be successful … and do less than their best.

The list goes on … but my space doesn’t. When I started the book I thought it was probably written for doctors. But the author, Jerome Groopman, says it’s really for us laypeople. By understanding how doctors think, we can communicate more effectively with our physicians and help them avoid mistakes. It’s a good thought and a fun read.
