

How Do You Know If Something Is True?

I used to teach research methods. Now I teach critical thinking. Research is about creating knowledge; critical thinking is about assessing it. In research methods, the goal is to create well-designed studies that allow us to determine whether something is true or not. A well-designed study adds to our knowledge even if it finds that something is not true. A poorly designed study adds nothing. The emphasis is on design.

In critical thinking, the emphasis is on assessment. We seek to sort out what is true, not true, or not proven in our info-sphere. To succeed, we need to understand research design. We also need to understand the logic of critical thinking: a stepwise progression through which we can uncover fallacies, biases, and self-serving arguments. It takes time. In fact, the first rule I teach is “Slow down. Take your time. Ask questions. Don’t jump to conclusions.”

In both research and critical thinking, a key question is: how do we know if something is true? Further, how do we know if we’re being fair-minded and objective in making such an assessment? We need levels of evidence that are independent of our subjective experience. Over the years, thinkers have used a number of different schemes to categorize evidence and evaluate its quality. Today, the research world seems to be coalescing around a classification of evidence that has been evolving since the early 1990s as part of the movement toward evidence-based medicine (EBM).

The classification scheme typically has four levels, with 4 being the weakest and 1 being the strongest; Level 1 is often split into 1b and 1a. From weakest to strongest, here they are:

  • 4 — evidence from a panel of experts. There are certain rules about such panels, the most important of which is that the panel must consist of more than one person. Level 4 may also include what are known as observational studies without controls.
  • 3 — evidence from case studies, observed correlations, and comparative studies. (It’s interesting to me that many of our business schools build their curricula around case studies, which are fairly weak evidence. I suspect you can find a case to prove almost any point.)
  • 2 — quasi-experiments: well-designed but non-randomized controlled trials. You manipulate the independent variable in at least two groups (control and experimental). That’s a good step forward. Since subjects are not randomly assigned, however, a hidden variable, rather than the independent variable, could be the cause of any differences found.
  • 1b — experiments: controlled trials with randomly assigned subjects. Random assignment isolates the independent variable, so any effects found can be attributed to it rather than to hidden variables (see the first sketch after this list). This is the minimum standard of proof for cause and effect.
  • 1a — meta-analysis of experiments. Meta-analysis is simply research on research. Let’s say that researchers in your field have conducted thousands of experiments on the effects of using electronic calculators to teach arithmetic to primary school students. Each experiment is a data point in a meta-analysis. You categorize all the studies and find that an overwhelming majority showed positive effects (see the second sketch after this list). This is the most powerful argument for cause and effect.
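
Here is a minimal sketch of the random assignment that separates Level 2 from Level 1b, written in Python (my choice of language; the function name and the even two-group split are illustrative assumptions, not part of any standard):

```python
import random

def randomly_assign(subjects, seed=None):
    """Split subjects into control and experimental groups at random.

    Shuffling before splitting is what isolates the independent
    variable: hidden variables (age, income, prior skill) end up,
    on average, spread evenly across both groups.
    """
    rng = random.Random(seed)  # seed is only for reproducible demos
    pool = list(subjects)
    rng.shuffle(pool)
    midpoint = len(pool) // 2
    return pool[:midpoint], pool[midpoint:]  # (control, experimental)

control, experimental = randomly_assign(["s1", "s2", "s3", "s4", "s5", "s6"], seed=42)
print(control, experimental)
```

If subjects chose their own group, motivation itself would become a hidden variable; the shuffle is what rules that out.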
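
The tallying described under 1a is sometimes called vote counting, the simplest form of meta-analysis; formal meta-analyses also weight studies by size and pool effect estimates. A minimal sketch, with invented counts standing in for the calculator studies:

```python
from collections import Counter

def vote_count(findings):
    """Tally the direction of effect across many studies.

    Each entry is one study's overall finding: "positive",
    "negative", or "none". Each study becomes a single data
    point, exactly as described in the list above.
    """
    tally = Counter(findings)
    share_positive = tally["positive"] / sum(tally.values())
    return tally, share_positive

# Hypothetical results; the counts are invented for illustration.
findings = ["positive"] * 870 + ["none"] * 90 + ["negative"] * 40
tally, share = vote_count(findings)
print(tally, f"-> {share:.0%} positive")
```

When 87% of a thousand hypothetical studies point the same way, the weight of evidence is hard to dismiss; no single study could make that case.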

You might keep this guide in mind as you read your daily newspaper. Much of the “evidence” that’s presented in the media today doesn’t even reach the minimum standards of Level 4. It’s simply opinion. Stating opinions is fine, as long as we understand that they don’t qualify as credible evidence.
