Business School And The Swimmer’s Body Fallacy

He’s tall because he plays basketball.

Michael Phelps is a swimmer. He has a great body. Ian Thorpe is a swimmer. He has a great body. Missy Franklin is a swimmer. She has a great body.

If you look at enough swimmers, you might conclude that swimming produces great bodies. If you want to develop a great body, you might decide to take up swimming. After all, great swimmers develop great bodies.

Swimming might help you tone up and trim down. But you would also be committing a logical fallacy. Known as the swimmer’s body fallacy, it confuses selection criteria with results.

We may think that swimming produces great bodies. But, in fact, it’s more likely that great bodies produce top swimmers. People with great bodies for swimming – like Ian Thorpe’s size 17 feet – are selected for competitive swimming programs. Once again, we’re confusing cause and effect.

Here’s another way to look at it. We all know that basketball players are tall. But would you accept the proposition that playing basketball makes you tall? Probably not. Height is not malleable. People grow to a given height because of genetics and diet, not because of the sports they play.

When we discuss height and basketball, the relationship is obvious. Tallness is a selection criterion for entering basketball. It’s not the result of playing basketball. But in other areas, it’s more difficult to disentangle selection factors from results. Take business school, for instance.

In fact, let’s take Harvard Business School or HBS. We know that graduates of HBS are often highly successful in the worlds of business, commerce, and politics. Is that success due to selection criteria or to the added value of HBS’s educational program?

HBS is well known for pioneering the case study method of business education. Students look at successful (and unsuccessful) businesses and try to ferret out the causes. Yet we know that, in evidence-based medicine, case studies are considered to be very weak evidence.

According to medical researchers, a case study is Level 3 evidence on a scale of 1 to 4, where 4 is the weakest. Why is it so weak? Partially because it’s a sample of one.

It’s also because of the survivorship bias. Let’s say that Company A has implemented processes X, Y, and Z and been wildly successful. We might infer that practices X, Y, and Z caused the success. Yet there are probably dozens of other companies that also implemented processes X, Y, and Z and weren’t so successful. Those companies, however, didn’t “survive” the process of being selected for a B-school case study. We don’t account for them in our reasoning.

(The survivorship bias is sometimes known as the LeBron James fallacy. Just because you train like LeBron James doesn’t mean that you’ll play like him).
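
A quick simulation makes the bias concrete. The sketch below is purely hypothetical: it assumes 1,000 companies that all adopt practices X, Y, and Z, lets success be driven by chance alone, and then “studies” only the winners, the way a case-study writer would.

```python
import random

# A purely hypothetical simulation of survivorship bias (invented numbers).
# Suppose 1,000 companies all adopt practices X, Y, and Z, and success is
# driven by luck alone -- the practices have no effect whatsoever.
random.seed(42)
companies = [{"adopted_xyz": True, "successful": random.random() < 0.05}
             for _ in range(1000)]

# A case-study writer samples only the survivors: the successful companies.
survivors = [c for c in companies if c["successful"]]

# Every surveyed "winner" adopted X, Y, and Z -- which tells us nothing,
# because every company that failed adopted them too.
share_winners = sum(c["adopted_xyz"] for c in survivors) / len(survivors)
share_all = sum(c["adopted_xyz"] for c in companies) / len(companies)
print(f"Successful companies selected for case studies: {len(survivors)}")
print(f"Adoption rate among the winners: {share_winners:.0%}")
print(f"Adoption rate among all companies: {share_all:.0%}")
```

The survivors all adopted the practices, but so did every company that failed; the missing denominator is the whole point.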

So we have some reasons to suspect the logical underpinnings of a case-based education method. Let’s revisit the question: Is the success of HBS graduates due to selection criteria or to the results of the HBS educational program? HBS is filled with brilliant professors who conduct great research and write insightful papers and books. They should have some impact on students, even if they use weak evidence in their curriculum. Shouldn’t they? Being a teacher, I certainly hope so. If so, then the success of HBS graduates is at least partially a result of the educational program, not just the selection criteria.

But I wonder …

How Do You Know If Something Is True?

True or False

I used to teach research methods. Now I teach critical thinking. Research is about creating knowledge. Critical thinking is about assessing knowledge. In research methods, the goal is to create well-designed studies that allow us to determine whether something is true or not. A well-designed study, even if it finds that something is not true, adds to our knowledge. A poorly designed study adds nothing. The emphasis is on design.

In critical thinking, the emphasis is on assessment. We seek to sort out what is true, not true, or not proven in our info-sphere. To succeed, we need to understand research design. We also need to understand the logic of critical thinking — a stepwise progression through which we can discover fallacies and biases and self-serving arguments. It takes time. In fact, the first rule I teach is “Slow down. Take your time. Ask questions. Don’t jump to conclusions.”

In both research and critical thinking, a key question is: how do we know if something is true? Further, how do we know if we’re being fair minded and objective in making such an assessment? We discuss levels of evidence that are independent of our subjective experience. Over the years, thinkers have used a number of different schemes to categorize evidence and evaluate its quality. Today, the research world seems to be coalescing around a classification of evidence that has been evolving since the early 1990s as part of the movement toward evidence-based medicine (EBM).

The classification scheme typically has four levels, with 4 being the weakest and 1 being the strongest; the top level is often split into 1a and 1b. From weakest to strongest, here they are:

  • 4 — evidence from a panel of experts. There are certain rules about such panels, the most important being that the panel must consist of more than one person. Level 4 may also include what are known as observational studies without controls.
  • 3 — evidence from case studies, observed correlations, and comparative studies. (It’s interesting to me that many of our business schools build their curricula around case studies — fairly weak evidence. I wonder if you can’t find a case to prove almost any point.)
  • 2 — quasi-experiments — well-designed but non-randomized controlled trials. You manipulate the independent variable in at least two groups (control and experimental). That’s a good step forward. Since subjects are not randomly assigned, however, a hidden variable could be the cause of any differences found — rather than the independent variable.
  • 1b — experiments — controlled trials with randomly assigned subjects. Random assignment isolates the independent variable. Any effects found must be caused by the independent variable. This is the minimum proof of cause and effect.
  • 1a — meta-analysis of experiments. Meta-analysis is simply research on research. Let’s say that researchers in your field have conducted thousands of experiments on the effects of using electronic calculators to teach arithmetic to primary school students. Each experiment is a data point in a meta-analysis. You categorize all the studies and find that an overwhelming majority showed positive effects. This is the most powerful argument for cause and effect. (A simplified sketch of this vote-counting idea follows below.)
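
To make the vote-counting idea concrete, here is a minimal sketch with invented study results (illustrations only, not real findings). Real meta-analyses go further and weight each study by its size and precision, but the basic tallying logic looks like this:

```python
# A minimal vote-counting sketch with invented study results (illustration only).
# Each entry is one experiment: its observed effect size and whether the
# result was statistically significant.
studies = [
    {"effect": +0.40, "significant": True},
    {"effect": +0.25, "significant": True},
    {"effect": +0.10, "significant": False},
    {"effect": -0.05, "significant": False},
    {"effect": +0.35, "significant": True},
]

positive = sum(1 for s in studies if s["effect"] > 0 and s["significant"])
negative = sum(1 for s in studies if s["effect"] < 0 and s["significant"])
inconclusive = len(studies) - positive - negative
unweighted_mean = sum(s["effect"] for s in studies) / len(studies)

print(f"{positive} positive, {negative} negative, {inconclusive} inconclusive")
print(f"Unweighted mean effect: {unweighted_mean:+.2f}")
# A real meta-analysis would weight each study by its precision (for example,
# inverse variance) instead of treating every study as one equal vote.
```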

You might keep this guide in mind as you read your daily newspaper. Much of the “evidence” that’s presented in the media today doesn’t even reach the minimum standards of Level 4. It’s simply opinion. Stating opinions is fine, as long as we understand that they don’t qualify as credible evidence.

Pascal’s Wager and the Mediterranean Diet

Wanna bet?

I like to think of Blaise Pascal (1623–1662), the French mathematician, as the Western world’s first practitioner of Twitter. His Pensées were brief, enigmatic thoughts about mathematics, religion, and philosophy. Collected after his death, they read like the tweets of the 17th century (though they were intended to be a much more comprehensive defense of religion).

In the Pensées, Pascal made his famous wager. We all bet with our lives on whether God exists or not. We can live as if God exists and practice the traditional forms and virtues of religion. Or we can do the opposite and ignore our religious duties, assuming that God does not exist. If we live as if God exists and we’re right, the rewards are infinite. If we’re wrong, the loss is finite and, indeed, quite small. Thus, Pascal argues, it’s only rational to live a pious life. The wager is heavily stacked to that side.
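
One common way to spell out the wager’s logic is as an expected-value comparison. The sketch below uses stylized numbers of my own choosing (a probability of 0.01 and unit-sized finite payoffs); the point is only that any nonzero probability multiplied by an infinite reward swamps any finite gain.

```python
# A stylized expected-value version of the wager (numbers are my own, chosen
# only for illustration; Pascal did not put it in these terms).
p = 0.01                        # any nonzero probability that God exists
infinite_reward = float("inf")  # the payoff of piety if God exists
cost_of_piety = 1.0             # a small, finite cost of living piously
finite_gain = 1.0               # the modest payoff of ignoring religious duties

ev_pious = p * infinite_reward + (1 - p) * (-cost_of_piety)
ev_impious = p * 0 + (1 - p) * finite_gain  # some readings treat the 0 here
                                            # as an infinite loss instead

print(f"Expected value of the pious life:   {ev_pious}")    # inf
print(f"Expected value of the impious life: {ev_impious}")  # 0.99
```

Change p to any positive value you like; the pious side still wins, which is exactly what makes the wager feel so heavily stacked.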

In today’s world, we don’t spend much time wagering on God’s existence (perhaps we should) but we make lots of bets that are much like Pascal’s. The cumulative effects are enormous.

For instance, consider the Mediterranean diet. The diet — which features olive oil, nuts, vegetables, and fish but not much red meat — has been on our radar for a number of years now. Epidemiologists observed that people who live near the Mediterranean have a much lower rate of heart disease than would be expected. Maybe it’s the diet. Or maybe it’s something else, like religion, culture, family structure, heredity, etc.

So the evidence for the positive health effects of the diet was an observed correlation. We could see that the diet was correlated (inversely) to heart disease but we couldn’t be sure that it caused the lower rates. Maybe a hidden, third factor was in play. Still, we could make a version of Pascal’s wager: eat as if the Mediterranean diet does reduce the risk of heart disease. If we’re right, we live longer. If we’re wrong … well, we’ve missed out on a few tasty bacon cheeseburgers. Would you take the bet?

Last week, the level of evidence changed dramatically. A five-year, randomized Spanish study of nearly 7,500 people was published. People who followed the Mediterranean diet had 30% fewer heart attacks, strokes, and deaths from heart disease than the control group. Since the study used the experimental method, we can now talk about cause and effect, not just correlation. Now will you take the bet?
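
To see what “30% fewer” means arithmetically, here is a back-of-the-envelope sketch with invented event counts (not the study’s actual data). A relative risk of 0.70 is the same thing as a 30% relative risk reduction.

```python
# Back-of-the-envelope arithmetic with invented event counts (not the study's
# actual data), just to show what a 30% relative reduction looks like.
diet_group = {"n": 2500, "events": 70}      # hypothetical Mediterranean-diet arm
control_group = {"n": 2500, "events": 100}  # hypothetical control arm

risk_diet = diet_group["events"] / diet_group["n"]            # 0.028
risk_control = control_group["events"] / control_group["n"]   # 0.040

relative_risk = risk_diet / risk_control       # 0.70
relative_risk_reduction = 1 - relative_risk    # 0.30

print(f"Risk in the diet group:     {risk_diet:.3f}")
print(f"Risk in the control group:  {risk_control:.3f}")
print(f"Relative risk reduction:    {relative_risk_reduction:.0%}")
```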

Of course, there’s still some doubt about the results. It’s only one study; it hasn’t been replicated yet. It was conducted in Spain. Maybe we wouldn’t get the same results in America. Maybe there were methodological or measurement errors. Still, the evidence seems pretty strong and it points toward a version of Pascal’s classic wager.

We all make versions of Pascal’s wager every day but we rarely think about them. Perhaps it’s time that we pay more attention. Perhaps it’s time to think about what levels of evidence we need before we take the bet. Is correlation enough or do we need to prove cause and effect? Life is uncertain but perhaps we can make it more comfortable by thinking — and betting — logically. While you’re pondering that, I’m going to drizzle some olive oil over a bowl of walnuts. Drop by if you’re hungry.
