Building Your Baloney Detector

Baloney or bologna?

I like to tell wildly improbable stories with a very straight face. I don’t want to embarrass anyone, but it’s fun to see how persuasive I can be. Friends know to look to Suellen for verification. With a subtle shift of her head, she lets them know what’s real and what’s not.

My little charades have taught me that many people will believe very unlikely stories. That includes me, even though I think I have a pretty good baloney detector. So how do you tell what’s true and what’s not? Here are some clues.

Provenance – one of the first steps is to assess the information’s source. Did it come from a reliable source? Is the source disinterested – that is, does he or she have no stake in how the issue is resolved? Was the information derived from a study, or is it hearsay? Does the source have a hidden agenda?

Assess the information – next, examine the information itself. What assumptions underlie it? Are there any logical fallacies? What inferences can you draw? And always ask about what’s left out. What are you not seeing? What’s not being presented?

Assess the facts – as you assess the information, be sure to investigate the facts themselves. Are they really facts? How do you know? Sometimes “facts” turn out not to be factual at all.

Definition – as you assess the information, you also need to think about definitions. Definitions are fundamental – if they’re wrong, everything else is called into question. A good definition is objective, observable, and reliable. Here’s a not-so-good definition: “He looked drunk.” Here’s a better one: “His blood alcohol reading was 0.09.” The best definitions are operational – you perform a consistent, observable operation to create the definition.
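
To make “operational” concrete, here’s a minimal sketch in Python. The threshold and the function name are mine, purely for illustration (legal limits vary by jurisdiction):

# An operational definition: a consistent, observable procedure.
# "He looked drunk" is subjective; a breathalyzer reading is not.

LEGAL_LIMIT_BAC = 0.08  # a common per-se limit; varies by jurisdiction

def is_over_limit(bac_reading: float) -> bool:
    """Operational definition: measure BAC, compare to a fixed threshold.
    Anyone running the same procedure on the same reading gets the same answer."""
    return bac_reading >= LEGAL_LIMIT_BAC

print(is_over_limit(0.09))  # True -- matches the example above
print(is_over_limit(0.05))  # False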

Interpretation – we now know something about the information – where it comes from, how it’s defined and so on. How much can we interpret from that? Are we building an inductive argument – from specific cases to general conclusions? Or is it a deductive argument – from general principles to specific conclusions?

Causality – causality is part of interpretation, and it’s very slippery. If variables A and B move together, we may conclude that A causes B. But it could be a random coincidence. Or perhaps a third variable, C, causes both A and B. Or maybe we’ve got it backwards and B causes A. The only way to prove cause and effect is through the experimental method. If someone tells you that A causes B but hasn’t run an experiment, you should be suspicious.
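
Here’s a quick way to see the third-variable problem in action. This is a toy simulation – the variables and noise levels are invented – in which A and B each depend only on a hidden C, yet they end up strongly correlated:

import random

# A sketch of a confounder: C drives both A and B, so A and B correlate
# even though neither causes the other.
random.seed(42)

n = 10_000
c = [random.gauss(0, 1) for _ in range(n)]
a = [ci + random.gauss(0, 0.5) for ci in c]  # A depends only on C
b = [ci + random.gauss(0, 0.5) for ci in c]  # B depends only on C

def correlation(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

print(round(correlation(a, b), 2))  # roughly 0.8: strong correlation, zero causation

Looking at the correlation alone, you can’t tell which of the causal stories is true – that’s exactly what the experiment is for.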

Replicability – if a study is done once and points to a particular conclusion, that’s good (assuming the definitions are solid, the methodology is sound, and so on). If it’s done multiple times, by multiple people, in multiple locations, that’s far better.
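
One way to see why replication matters is to simulate it. In this sketch – the effect size, noise, and sample sizes are all invented – a single small study can land well away from the truth, while the average of twenty replications is far more stable:

import random

# Sketch: why replication beats a single study. We simulate a true effect
# of 0.3 and compare one small study against the average of 20 replications.
random.seed(7)

TRUE_EFFECT = 0.3

def run_study(n=30):
    """One noisy study: the average of n observations around the true effect."""
    return sum(random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)) / n

single = run_study()
replicated = sum(run_study() for _ in range(20)) / 20

print(f"true effect:      {TRUE_EFFECT:.2f}")
print(f"one study:        {single:.2f}")      # can miss badly
print(f"20 replications:  {replicated:.2f}")  # much closer, on average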

Statistics and probability – you don’t need to be a stats wizard to think clearly. But you should understand what statistical significance means. When we study something, we have various ways to compute the probability that the result arose by chance. For instance, we might say that we found a difference between treatments A and B, and there’s only a 1% probability that a difference that large would occur by chance. That’s not bad. But notice that we’re not saying the difference between A and B is big. Indeed, it might be quite small. The difference is “significant” not in terms of size but in terms of probability.
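
Here’s a minimal sketch of that idea in plain Python, using a permutation test (the treatment data are fabricated for illustration). The difference between groups is tiny relative to the scale of the measurements, yet it is very unlikely to be chance:

import random

# Sketch of what "significant" means: a small difference can still be
# very unlikely to arise by chance. The data below are invented.
random.seed(1)

treatment_a = [random.gauss(100.0, 2.0) for _ in range(500)]
treatment_b = [random.gauss(100.5, 2.0) for _ in range(500)]  # only ~0.5 higher

observed = abs(sum(treatment_b) / 500 - sum(treatment_a) / 500)

# Permutation test: shuffle the group labels many times and count how often
# a difference at least as large as the observed one shows up by chance alone.
pooled = treatment_a + treatment_b
trials = 5_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = abs(sum(pooled[:500]) / 500 - sum(pooled[500:]) / 500)
    if diff >= observed:
        extreme += 1

print(f"observed difference: {observed:.2f}")  # small in size...
print(f"p-value: {extreme / trials:.4f}")      # ...but well under 1% probability

A difference of about 0.5 on values near 100 is small, but almost none of the label-shuffled datasets reproduce it: small in size, significant in probability.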

When you’re trying to discover whether something is true, keep these steps and processes in mind. It’s good to be skeptical. If something sounds too good to be true … well, that’s a good reason to be doubtful. But don’t be cynical. Sometimes miracles do happen.
