Strategy. Innovation. Brand.


Effect and Cause

Is it clean yet?


I worry about cause and effect. If you get them backwards, you wind up chasing your tail. While you’re at it, you can create all kinds of havoc.

Take MS (please). We have long thought of multiple sclerosis as an autoimmune disease. The immune system interprets myelin – the fatty sheath around our nerves – as a threat and attacks it. As it eats away the myelin, it also impairs our ability to send signals from our brain to our limbs. The end result is often spasticity or even paralysis.

We don’t know the cause but the effect is clearly the malfunctioning immune system. Or maybe not. Some recent research suggests that a bacterium may be involved. It may be that the immune system is reacting appropriately to an infection. The myelin is simply an innocent bystander, collateral damage in the antibacterial attack.

The bacterium involved is a weird little thing. It’s difficult to spot. But it’s especially difficult to spot if you’re not looking for it. We may have gotten cause and effect reversed and been looking for a cure in all the wrong places. If so, it’s a failure of imagination as much as a failure of research. (Note that the bacterial findings are very preliminary, so let’s continue to keep our imaginations open).

Here’s another example: obsessive compulsive disorder. In a recent article, Claire Gillan argues that we may have gotten cause and effect reversed. She summarizes her thesis in two simple sentences: “Everybody knows that thoughts cause actions which cause habits. What if this is the wrong way round?”

As Gillan notes, we’ve always assumed that OCD behaviors were the effect. It seemed obvious that the cause was irrational thinking and, especially, fear. We’re afraid of germs and, therefore, we wash our hands obsessively. We’re afraid of breaking our mother’s back and, therefore, we avoid cracks in the sidewalk. Sometimes our fears are rooted in reality. At other times, they’re completely delusional. Whether real or delusional, however, we’ve always assumed that our fears caused our behavior, not the other way round.

In her research on OCD behavior, Gillan has made some surprising discoveries. When she induced new habits in volunteers, she found that people with OCD change their beliefs to explain the new habit. In other words, behavior is the cause and belief is the effect.

Traditional therapies for OCD have sought to address the fear. They aimed to change the way people with OCD think. But perhaps traditional therapists need to change their own thinking. Perhaps by changing the behaviors of people with OCD, their thinking would (fairly naturally) change on its own.

This is, of course, quite similar to the idea of confabulation. With confabulation, we make up stories to explain the world around us. It gives us a sense of control. With OCD – if Gillan is right – we make up stories to explain our own behavior. This, too, gives us a sense of control.

Now, if we could just get cause and effect straight, perhaps we really would have some control.

My iPod Is Conscious

Speak, wise one.


Apparently my iPod is a sentient being. It senses its surroundings, understands context, and makes intelligent decisions.

Here’s the latest example. Yesterday, we received this week’s edition of The New Yorker. The cover features a couple kissing on the 59th Street Bridge. This morning, at the gym, my iPod randomly selected (from more than 4,000 choices) the 59th Street Bridge Song, the goopy old standard by Simon & Garfunkel. Even more eerily, the lyrics told me to “Slow down, you’re moving too fast…” which was exactly what I needed to do on the exercise machine I was using.

Clearly, my iPod knew about the magazine (the print edition!) and also knew that I was over-exerting myself. It selected the perfectly appropriate song from thousands of possibilities. Thank you, Steve Jobs.

But wait … really? Clearly the magazine’s cover art primed me to think about the 59th Street Bridge. When I heard the song, I made the connection. That’s the effect of priming. As for the advice on slowing down … well, I wouldn’t have noticed it if I weren’t overdoing it. In other words, I was being primed (or conditioned) in two different ways. I noticed things that I wouldn’t otherwise have noticed. I assumed there was a connection, but it was really just a coincidence.

Coincidences can screw up our thinking in myriad ways. Let’s look at four ways to consider coincidences and causes:

A) It’s a coincidence and we recognize it as such – most people would conclude that my iPod is not conscious … Apple’s not that good. We correctly conclude that it’s not a cause-and-effect situation.

B) It’s a coincidence but we think it’s a cause – this is where we can get into big trouble and deep debates. This is a problem in any discipline — like economics, sociology or climate science — where it’s difficult to run experiments. It’s hard to pin down whether X causes Y or Y causes X or … well, maybe it’s just a coincidence. (Maybe it was the rats).

C) It’s a cause and we recognize it as such – we know that certain germs cause certain diseases. So we take appropriate precautions.

D) It’s a cause but we think it’s a coincidence – before the late 19th century, we didn’t recognize that germs caused diseases. We thought it was just a coincidence that people died in filthy places.

I suspect that many conspiracy theories stem from Category B. We note a coincidence and mistakenly assume that it’s a cause. The Dust Bowl in the United States coincided with over-farming and also with the rise of communism in Europe. A small but noisy group of people concluded that the Dust Bowl was not caused by farming techniques and drought but was actually a communist conspiracy.

We can also suffer from Category D problems. I read recently of a man who had a chronic infection in his right ear. Doctors couldn’t figure it out. Finally, the man took some earwax from his left (healthy) ear and stuck it in his right ear. The infection went away. It seemed coincidental that his left ear was healthy while his right ear was not but it actually pointed to a cause. His left ear had healthy bacteria (a healthy microbiome) while his right ear did not. The man suspected that the difference between his left and right ears was not coincidental. He was right and solved a Category D problem.

In a weird way, this all ties back to innovation. If we want to stimulate innovation, we can usefully ask questions like this: “I note that A and B vary coincidentally. Is that really a coincidence or does it point to some deeper cause that we can capitalize on?” While Category B can generate endless debates, Category D could generate novel solutions.


How Do You Know If Something Is True?

True or False

I used to teach research methods. Now I teach critical thinking. Research is about creating knowledge. Critical thinking is about assessing knowledge. In research methods, the goal is to create well-designed studies that allow us to determine whether something is true or not. A well-designed study, even if it finds that something is not true, adds to our knowledge. A poorly designed study adds nothing. The emphasis is on design.

In critical thinking, the emphasis is on assessment. We seek to sort out what is true, not true, or not proven in our info-sphere. To succeed, we need to understand research design. We also need to understand the logic of critical thinking — a stepwise progression through which we can discover fallacies and biases and self-serving arguments. It takes time. In fact, the first rule I teach is “Slow down. Take your time. Ask questions. Don’t jump to conclusions.”

In both research and critical thinking, a key question is: how do we know if something is true? Further, how do we know if we’re being fair minded and objective in making such an assessment? We discuss levels of evidence that are independent of our subjective experience. Over the years, thinkers have used a number of different schemes to categorize evidence and evaluate its quality. Today, the research world seems to be coalescing around a classification of evidence that has been evolving since the early 1990s as part of the movement toward evidence-based medicine (EBM).

The classification scheme typically has four levels, with 4 being the weakest and 1 being the strongest (Level 1 is often split into two sub-levels, 1a and 1b). From weakest to strongest, here they are:

  • 4 — evidence from a panel of experts. There are certain rules about such panels, the most important of which is that the panel consist of more than one person. Level 4 may also include what are known as observational studies without controls.
  • 3 — evidence from case studies, observed correlations, and comparative studies. (It’s interesting to me that many of our business schools build their curricula around case studies — fairly weak evidence. I wonder if you can’t find a case to prove almost any point.)
  • 2 — quasi-experiments — well-designed but non-randomized controlled trials. You manipulate the independent variable in at least two groups (control and experimental). That’s a good step forward. Since subjects are not randomly assigned, however, a hidden variable could be the cause of any differences found — rather than the independent variable.
  • 1b — experiments — controlled trials with randomly assigned subjects. Random assignment isolates the independent variable. Any effects found must be caused by the independent variable. This is the minimum proof of cause and effect.
  • 1a — meta-analysis of experiments. Meta-analysis is simply research on research. Let’s say that researchers in your field have conducted thousands of experiments on the effects of using electronic calculators to teach arithmetic to primary school students. Each experiment is a data point in a meta-analysis. You categorize all the studies and find that an overwhelming majority showed positive effects. This is the most powerful argument for cause-and-effect.
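
To make the meta-analysis idea concrete, here is a minimal sketch of the standard inverse-variance (fixed-effect) way of pooling study results. The effect sizes and standard errors below are invented purely for illustration; real meta-analyses also check study quality and heterogeneity before pooling.

```python
# Minimal inverse-variance fixed-effect meta-analysis sketch.
# The per-study numbers below are invented for illustration only.
effects = [0.30, 0.45, 0.25, 0.50, 0.10]   # effect size from each study
ses     = [0.10, 0.15, 0.08, 0.20, 0.12]   # standard error of each study

# More precise studies (smaller standard error) get more weight.
weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect = {pooled:.3f} ± {pooled_se:.3f}")
```

Note that the pooled standard error comes out smaller than any single study’s: combining many experiments narrows the uncertainty, which is why Level 1a sits at the top of the hierarchy.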

You might keep this guide in mind as you read your daily newspaper. Much of the “evidence” that’s presented in the media today doesn’t even reach the minimum standards of Level 4. It’s simply opinion. Stating opinions is fine, as long as we understand that they don’t qualify as credible evidence.

Does Smoking Cause Cancer in Humans?

You can’t prove nothing.

Let’s do an experiment. We’ll randomly select 50,000 people from around the United States. Then we’ll assign them — also randomly — to two groups of 25,000 each. Let’s call them Groups X and Y. We’ll require that every member of Group X must smoke at least three packs of cigarettes per day. Members of Group Y, on the other hand, must never smoke or even be exposed to secondhand smoke. We’ll follow the two groups for 25 years and monitor their health. We’ll then announce the results and advise people on the health implications of smoking.

I’ve just described a pretty good experiment. We manipulate the independent variable — in this case, smoking — to identify how it affects the dependent variable — in this case, personal health. We randomize the two groups so we’re sure that there’s no hidden variable. If we find that X influences Y, we can be sure that it’s cause and effect. It can’t be that Y causes X. Nor can it be that Z causes both X and Y. It has to be that X causes Y. There’s no other explanation.
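
Here is a small sketch of why randomization rules out the hidden variable Z. The setup is hypothetical: each subject carries an unobservable trait (say, a genetic risk factor), and we show that random assignment balances that trait across the two groups, so it can’t explain any later difference between them.

```python
import random

random.seed(0)

# Hypothetical sketch: each subject carries a hidden trait Z (say, a
# genetic risk factor) that we cannot observe or measure directly.
subjects = [random.gauss(0, 1) for _ in range(50_000)]  # each value is a Z

# Random assignment: shuffle, then split into two groups of 25,000.
random.shuffle(subjects)
group_x, group_y = subjects[:25_000], subjects[25_000:]

mean = lambda g: sum(g) / len(g)
# Because assignment ignores Z entirely, the hidden variable averages
# out between the groups and cannot be the cause of any difference
# we later observe.
print(abs(mean(group_x) - mean(group_y)))  # small, near zero
```

Self-selection (letting people choose to smoke) is exactly what this shuffle prevents: a self-selected split could leave Z concentrated in one group.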

The experimental method is the gold standard of causation. It’s the only way to prove cause and effect beyond a shadow of a doubt. Yet, it’s also a very difficult standard to implement. Especially in cases involving humans, the ethical questions often prevent a true experimental method. Could we really do the experiment I described above? Would you be willing to be assigned to Group X?

The absence of good experiments can confuse our public policy. For many years, the tobacco industry claimed that no one had ever conclusively proven that smoking caused cancer in humans. That’s because we couldn’t ethically run experiments on humans. We could show a correlation between smoking and cancer. But the tobacco industry claimed that correlation is not causation. There could be some hidden variable, Z, that caused people to take up smoking and also caused cancer. Smoking is voluntary; it’s a self-selected group. It could be — the industry argued — that whatever caused you to choose to smoke also caused your cancer.

While we couldn’t run experiments on humans, we did run experiments on animals. We did essentially what I described above but substituted animals for humans. We proved — beyond a shadow of a doubt — that smoking caused cancer in animals. The tobacco industry replied that animals are different from humans and, therefore, we had proven nothing about human health.

Technically, the tobacco industry was right. Correlation doesn’t prove causation. Animal studies don’t prove that the same effects will occur in humans. For years, the tobacco industry gave smokers an excuse: nobody has ever proven that smoking causes cancer.

Yet, in my humble opinion, the evidence is overwhelming that smoking causes cancer in humans. Given the massive settlements of the past 20 years, apparently the courts agree with me. That raises an intriguing question: what are the rules of evidence when we can’t run an experiment? When we can’t run an experiment to show that X causes Y, how do we gather data — and how much data do we need to gather — to decide policy and business issues? We may not be able to prove something beyond a shadow of a doubt, but are there common sense rules that allow us to make common sense decisions? I can’t answer all these questions today but, for me, these questions are the essence of critical thinking. I’ll be writing about them a lot in the coming months.

Ice Cream and Muggings

Feel like mugging someone?

Did you know that the sale of ice cream is strongly correlated to the number of muggings in a given locale? Could it be that consuming ice cream leads us to attack our fellow citizens? Or perhaps miscreants in our midst mug strangers to get the money to buy ice cream? We have two variables, X and Y. Which one causes which? In this case, there’s a third variable, Z, that causes both X and Y. It’s the temperature. As the temperature rises, we buy more ice cream. At the same time, more people are wandering about out of doors, even after dark, making them convenient targets for muggers.
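
The ice-cream-and-muggings pattern is easy to reproduce. The sketch below uses made-up numbers: a hidden variable Z (temperature) drives both X (ice cream sales) and Y (muggings), and the two outcomes end up strongly correlated even though neither touches the other.

```python
import random

random.seed(1)

# Hypothetical sketch: temperature (Z) drives both ice cream sales (X)
# and muggings (Y); X and Y never influence each other directly.
days = 365
temp = [random.uniform(0, 35) for _ in range(days)]               # Z
ice_cream = [10 + 3.0 * t + random.gauss(0, 5) for t in temp]     # X <- Z
muggings  = [2 + 0.5 * t + random.gauss(0, 2) for t in temp]      # Y <- Z

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strong correlation between X and Y, despite no direct causal link.
print(round(pearson(ice_cream, muggings), 2))
```

Conditioning on the hidden variable (looking only at days with similar temperatures) would make most of that correlation disappear, which is one practical way to unmask a Z.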

What causes what? It’s the most basic question in science. It’s also an important question for business planning. Lowering our prices will cause sales to rise, right? Maybe. Similarly, government policies are typically based on notions of cause and effect. Lowering taxes will cause the economy to boom, right? Well… it’s complicated. Let’s look at some examples where cause and effect are murky at best.

Homeowners commit proportionally far fewer crimes than people who don’t own homes. Apparently, owning a home makes you a better citizen. Doesn’t it follow that the government should promote home ownership? Doing so should result in a safer, saner society, no? Well… maybe not. Again, we have two variables, X and Y. Which one causes which? Could it be that people who don’t commit crimes are in a better position to buy homes? That not committing crimes is the cause and home ownership is the effect? The data are completely tangled up, so it’s hard to prove conclusively one way or the other. But it seems at least possible that good citizenship leads to home ownership rather than vice versa. Or maybe, like ice cream and muggings, there’s a hidden variable, Z, that causes both.

The crime rate in the United States began to fall dramatically in the early 1990s. I’ve heard four different reasons for this. Which one do you think is the real cause?

  1. Legalized abortion — in 1973, the Supreme Court effectively legalized abortion in the United States. Eighteen years later, the crime rate began to fall precipitously. Coincidence?
  2. The “broken windows” theory of policing — police traditionally focused on serious crime while ignoring petty crimes. In the 1980s, sociologists began to argue that ignoring petty crime sent a signal to would-be criminals that citizens would tolerate crime in a given area. Even minor offenses like broken windows could send the wrong message. Police adopted the idea and started cracking down on petty crimes. The message to would-be criminals? If even minor crimes aren’t tolerated, imagine the consequences for major ones.
  3. The aging population — we’re getting older. Young people commit a disproportionate number of crimes, especially violent crimes. As our nation ages, we become more sedate.
  4. The “get tough” sentencing movement — politicians in the 1980s began to sponsor legislation to “get tough” on crime by imposing longer, mandatory sentences. One result has been a dramatic rise in our prison population. (In fact, I read recently that the U.S. has 700 people incarcerated for every 100,000 citizens. In Sweden, the equivalent rate is 70 prisoners. Could it be that we’re ten times more criminal than Swedes? Swedes are blonde and they don’t commit crimes. Cause and effect, right? Perhaps we should all dye our hair blonde.)

Which of the four factors actually caused the declining crime rate in America? A lot is riding on the answer. Unfortunately, the data are so tangled up that it’s difficult to tell what causes what. But here are some rules for thinking about correlation and causation:

  • If you think X causes Y, always ask the reverse question: is it possible that Y causes X?
  • Always look for a hidden variable, Z, that could cause both X and Y.

Actually, the only way to prove cause and effect beyond a shadow of a doubt is the experimental method. Which leads us to our question for tomorrow: does smoking cause cancer in humans?
