Let’s do an experiment. We’ll randomly select 50,000 people from around the United States. Then we’ll assign them — also randomly — to two groups of 25,000 each. Let’s call them Groups A and B. We’ll require every member of Group A to smoke at least three packs of cigarettes per day. Members of Group B, on the other hand, must never smoke or even be exposed to secondhand smoke. We’ll follow the two groups for 25 years and monitor their health. We’ll then announce the results and advise people on the health implications of smoking.
I’ve just described a pretty good experiment. We manipulate the independent variable — in this case, smoking — to identify how it affects the dependent variable — in this case, personal health. We randomize the assignment so that no hidden variable can differ systematically between the two groups. If we find that X (smoking) influences Y (health), we can be sure that it’s cause and effect. It can’t be that Y causes X. Nor can it be that some third variable, Z, causes both X and Y. It has to be that X causes Y. There’s no other explanation.
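For readers who like to see the logic spelled out, here’s a minimal sketch in Python — with entirely made-up numbers — of why random assignment does that work: whatever hidden trait people bring with them ends up split almost evenly between the two groups, so it can’t account for a difference in outcomes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Z: some hidden trait we never measured (genes, diet, temperament...)
hidden_trait = rng.normal(0, 1, n)

# Randomly assign half the people to Group A, half to Group B
group_a = rng.permutation(n) < n // 2

# The hidden trait averages out to (nearly) the same value in both groups,
# so it cannot explain any difference we later observe between them.
print(hidden_trait[group_a].mean())   # ~0.0
print(hidden_trait[~group_a].mean())  # ~0.0
```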
The experimental method is the gold standard of causation. It’s the only way to prove cause and effect beyond a shadow of a doubt. Yet it’s also a very difficult standard to implement. Especially in cases involving humans, the ethical questions often prevent a true experiment. Could we really run the experiment I described above? Would you be willing to be assigned to Group A?
The absence of good experiments can muddle our public policy. For many years, the tobacco industry claimed that no one had ever conclusively proven that smoking caused cancer in humans. That’s because we couldn’t ethically run experiments on humans. We could show a correlation between smoking and cancer. But the tobacco industry claimed that correlation is not causation. There could be some hidden variable, Z, that caused people to take up smoking and also caused cancer. Smoking is voluntary; smokers are a self-selected group. It could be — the industry argued — that whatever caused you to choose to smoke also caused your cancer.
While we couldn’t run experiments on humans, we did run experiments on animals. We did essentially what I described above but substituted animals for humans. We proved — beyond a shadow of a doubt — that smoking caused cancer in animals. The tobacco industry replied that animals are different from humans and, therefore, we had proven nothing about human health.
Technically, the tobacco industry was right. Correlation doesn’t prove causation. Animal studies don’t prove that the same effects will occur in humans. For years, the tobacco industry gave smokers an excuse: nobody has ever proven that smoking causes cancer.
Yet, in my humble opinion, the evidence is overwhelming that smoking causes cancer in humans. Given the massive settlements of the past 20 years, the courts apparently agree with me. That raises an intriguing question: what are the rules of evidence when we can’t run an experiment? When we can’t run an experiment to show that X causes Y, how do we gather data — and how much data do we need — to decide policy and business issues? We may not be able to prove something beyond a shadow of a doubt, but are there common sense rules that allow us to make common sense decisions? I can’t answer all of these questions today, but, for me, they are the essence of critical thinking. I’ll be writing about them a lot in the coming months.
Did you know that the sale of ice cream is strongly correlated to the number of muggings in a given locale? Could it be that consuming ice cream leads us to attack our fellow citizens? Or perhaps miscreants in our midst mug strangers to get the money to buy ice cream? We have two variables, X and Y. Which one causes which? In this case, there’s a third variable, Z, that causes both X and Y. It’s the temperature. As the temperature rises, we buy more ice cream. At the same time, more people are wandering about out of doors, even after dark, making them convenient targets for muggers.
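Here’s a toy simulation of that pattern — Python, with invented numbers — in which temperature drives both ice cream sales and muggings. The two look strongly correlated, yet the correlation all but disappears once we compare only days with roughly the same temperature.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Z: daily temperature (degrees F)
temperature = rng.normal(70, 15, n)

# X and Y are each driven by Z plus their own noise; neither drives the other
ice_cream_sales = 2.0 * temperature + rng.normal(0, 20, n)
muggings = 0.5 * temperature + rng.normal(0, 10, n)

# The raw correlation between X and Y looks impressive...
print(np.corrcoef(ice_cream_sales, muggings)[0, 1])   # roughly 0.5

# ...but it nearly vanishes when we hold temperature (Z) constant
same_temp_days = (temperature > 68) & (temperature < 72)
print(np.corrcoef(ice_cream_sales[same_temp_days],
                  muggings[same_temp_days])[0, 1])     # close to 0
```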
What causes what? It’s the most basic question in science. It’s also an important question for business planning. Lowering our prices will cause sales to rise, right? Maybe. Similarly, government policies are typically based on notions of cause and effect. Lowering taxes will cause the economy to boom, right? Well… it’s complicated. Let’s look at some examples where cause and effect are murky at best.
Homeowners commit proportionally far fewer crimes than people who don’t own homes. Apparently, owning a home makes you a better citizen. Doesn’t it follow that the government should promote home ownership? Doing so should result in a safer, saner society, no? Well… maybe not. Again, we have two variables, X and Y. Which one causes which? Could it be that people who don’t commit crimes are in a better position to buy homes? That not committing crimes is the cause and home ownership is the effect? The data are completely tangled up, so it’s hard to prove conclusively one way or the other. But it seems at least possible that good citizenship leads to home ownership rather than vice versa. Or maybe, like ice cream and muggings, there’s a hidden variable, Z, that causes both.
The crime rate in the United States began to fall dramatically in the early 1990s. I’ve heard four different reasons for this. Which one do you think is the real cause?
Which of the four variables actually caused the declining crime rate in America? A lot is riding on the answer. Unfortunately, the data are so tangled up that it’s difficult to tell what causes what. But here are some rules for thinking about correlation and causation:
Actually, the only way to prove cause and effect beyond a shadow of a doubt is the experimental method. Which leads us to our question for tomorrow: does smoking cause cancer in humans?