Questions, Proxies, and Health

When faced with a difficult question, we often substitute a simpler question and answer that instead. Here are three examples:

  • What’s the crime rate in your neighborhood? – We probably don’t keep close tabs on the crime rate. We’re not naturally good at statistics either, so it’s difficult to develop a reasonable estimate. So we substitute a different question: How easy is it for me to remember a crime committed in my neighborhood? If it’s easy to remember, we guess that the neighborhood has a high crime rate. If it’s hard to remember, we guess a lower rate. (I’ve adapted this example from Daniel Kahneman’s book, Thinking, Fast and Slow.)
  • How’s your car running? – It’s hard to know how well a car is running in this age of sophisticated electronics. So we answer a different question: How does the car sound? If it’s not making strange noises – knocks and pings – it must be running well and performing optimally.
  • How effective is your shampoo? – I suppose we could study our hair’s health every day but most of us don’t. So we answer a simpler question: How much lather does your shampoo produce? If we get a lot of lather, the shampoo must be effective.
[Image caption: How many do I need to get 10 calories of energy?]

In each case, we substitute a proxy for the original question. We assume that the proxy measures the same thing that the original question aimed to measure. Sometimes we’re right; sometimes we’re wrong. Most often, we don’t think about the fact that we’re using a proxy. System 1 does the thinking for us. But we can, in fact, bring the proxy to System 2 and evaluate whether it’s effective or not. If we think about it, we can use System 2 to spot errors in System 1. But we have to think about it.

As it happens, System 1 uses proxies in some situations that we might never think about. Here’s an example: How much food should you eat?

(The following is based on a study from the University of Sydney. The research article is here. Less technical summaries are here and here).

We tend to think of food in terms of quantity. System 1 also considers food as a source of energy. System 1 is trying to answer two questions: 1) How much energy does my body need? 2) How much food does that translate to?

Our bodies have learned that sweet food delivers more energy than non-sweet food and can use this to translate from energy needs to food requirements. Let’s say that the equation looks something like this:

1 calorie* of energy is generated by 10 grams of sweet food

Let’s also assume that our body has determined that we need 10 calories of energy. A simple calculation indicates that we need to eat 100 grams of sweet food. Once we’ve eaten 100 grams, System 1 can issue a directive to stop eating.

Now let’s change the scenario by introducing artificial sweeteners that add sweetness without adding many calories. The new translation table might look like this:

1 calorie of energy is generated by 30 grams of artificially sweetened food

If we still need 10 calories of energy, we will need to eat 300 grams of artificially sweetened food. System 1 issues a directive to stop only after we’ve eaten the requisite amount.

System 1 can’t tell the difference between artificially and naturally sweetened foods. It has only one translation table. If we eat a lot of artificially sweetened food, System 1 will learn the new translation table. If we then switch back to naturally sweetened foods, System 1 will still use the new translation table. It will still tell us to eat 300 grams of food to get 10 calories of energy.
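
To make the arithmetic concrete, here’s a minimal sketch (in Python, with invented names and numbers – it’s a toy model of the idea, not anything from the study) of how a single translation table produces exactly this overshoot:

```python
# A toy model of System 1's "translation table" for converting
# energy needs into food quantity. Names and numbers are illustrative.

class System1:
    def __init__(self, grams_per_calorie: float):
        # The learned exchange rate: grams of sweet food per calorie.
        self.grams_per_calorie = grams_per_calorie

    def food_target(self, calories_needed: float) -> float:
        """Translate an energy requirement into a quantity of food."""
        return calories_needed * self.grams_per_calorie

    def relearn(self, grams_eaten: float, calories_delivered: float) -> None:
        """Update the table after experience with a new food."""
        self.grams_per_calorie = grams_eaten / calories_delivered


system1 = System1(grams_per_calorie=10)  # naturally sweet food
print(system1.food_target(10))           # 100 grams to get 10 calories

# A diet of artificially sweetened food teaches a new exchange rate:
# 300 grams delivered only 10 calories.
system1.relearn(grams_eaten=300, calories_delivered=10)

# Back on naturally sweetened food, the stale table still says 300 grams.
print(system1.food_target(10))           # 300 grams
```

The point of the sketch is the last line: once the table has been relearned, the same 10-calorie need translates to 300 grams, regardless of what we’re actually eating.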

We would never know that our brain makes energy/quantity assumptions if not for studies like this one. It’s not intuitively obvious that we need to invoke System 2 to examine the relationship between artificial sweeteners and food intake. But like crime rates or cars or shampoos, we often answer different questions than we think we’re answering. To think more clearly, we need to examine our proxies more carefully.

*It’s actually a kilocalorie of energy but we Americans refer to it as a calorie.

Want A Good Ad? Conceal The Premise.

We’re all more or less familiar with the syllogism. The idea is that we can state premises – with certain rules – and draw conclusions that are logically valid. So we might say:

[Image caption: Cute. Must be a great car.]

Major premise:  All humans are mortal.

Minor premise:  Travis is a human.

Conclusion: Therefore, Travis is mortal.

In this case, the syllogism is deemed valid because the conclusion flows logically from the premises. It’s also considered sound since both premises are demonstrably true. Since the syllogism is both valid and sound, the conclusion is irrefutable.
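
Validity, by the way, is a purely structural property – a proof checker can verify it without knowing anything about the real world. Here’s the same syllogism rendered in Lean, as a sketch: the checker confirms that the conclusion follows from the premises, but it cannot tell us whether the premises are true, which is the soundness question.

```lean
-- The classic syllogism as a machine-checked derivation. Lean verifies
-- validity (the conclusion follows from the premises); it says nothing
-- about soundness (whether the premises are true of the real world).

variable (Person : Type)
variable (Human Mortal : Person → Prop)

example (travis : Person)
    (major : ∀ x, Human x → Mortal x)  -- All humans are mortal.
    (minor : Human travis)             -- Travis is a human.
    : Mortal travis :=                 -- Therefore, Travis is mortal.
  major travis minor
```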

We often think in syllogisms though we typically don’t realize it. Here’s one that I go through each morning:

Major premise:  People get up when the sun rises.

Minor premise:  The sun is rising.

Minor premise:   I’m a person.

Conclusion:       Therefore, I need to get up.

I don’t usually think, “Oh good for me … another syllogism solved”. Rather, I just get out of bed.

We often associate syllogisms with logic but we can also use them for persuasion. Indeed, Aristotle identified a form of syllogism that he believed was more persuasive than any other form of logic.

Aristotle called it an enthymeme – it’s simply a syllogism with an unstated major premise. Since the major premise is assumed rather than stated, we don’t consider it consciously. We don’t ask ourselves, Is it valid? Is it sound? We just assume that everything is correct and get on with life.

Though they don’t use the terminology, advertisers long ago discovered that enthymemes are powerful persuaders. People who receive the message don’t consciously examine the premise. That’s exactly what advertisers want.

As an example, let’s dissect one of my favorite ads: the 2012 Volkswagen Passat ad featuring the kid in the Darth Vader costume. The kid wanders around the house trying to use “The Force” to turn on the TV, cook lunch, and so on. Of course, it never works. Then Dad comes home, parks his new Passat in the driveway, and turns it off. The kid uses the Force to turn it back on. Dad recognizes what’s going on and uses his remote starter to start the car just as the kid hurls the Force in the right direction. The car starts, the kid is amazed, and we all love the commercial.

So what’s the premise? Here’s how the ad works:

Major (hidden) premise:  Car companies that produce loveable ads also produce superior cars.

Minor premise:  VW produced a loveable ad.

Conclusion:  Therefore, VW produces superior cars.

When we think about the major premise, we realize that it’s illogical. The problem is that we don’t think about it. It enters our subconscious mind (System 1) rather than our conscious mind (System 2). We don’t examine it because we’re not aware of it.

Here’s another one. I’ve seen numerous ads in magazines that tout a product that’s also advertised on TV. The magazine ads often include the line: As Seen On TV. Here’s the enthymeme:

Major (hidden) premise:  Products advertised on TV are superior to those that aren’t advertised on TV.

Minor premise:  This product is advertised on TV.

Conclusion:  Therefore, it’s a superior product.

When we consciously examine the premise, we realize that it’s ridiculous. The trick is to remind ourselves to examine the premise.
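
One way to make that reminder mechanical is to reconstruct the missing premise from what’s stated. Here’s a minimal sketch (Python, with an invented helper function – it’s just the syllogism run backwards, not a real tool):

```python
# A toy "enthymeme detector": given the stated minor premise and the
# conclusion an ad invites, reconstruct the unstated major premise so
# System 2 can inspect it. Purely illustrative.

def hidden_major_premise(stated_property: str, implied_property: str) -> str:
    """For 'X <stated>, therefore X <implied>' to be a valid syllogism,
    the missing major premise must bridge the two properties."""
    return f"Anyone who {stated_property} also {implied_property}."


# The VW ad, reduced to its skeleton:
print("Minor premise:", "VW produces loveable ads.")
print("Hidden major: ", hidden_major_premise("produces loveable ads",
                                             "produces superior cars"))
print("Conclusion:   ", "Therefore, VW produces superior cars.")
# -> Hidden major:  Anyone who produces loveable ads also produces superior cars.
# Said out loud, the premise is easy to reject.
```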

If you want to defend yourself against unscrupulous advertisers (or politicians), always be sure to ask yourself, What’s the hidden premise?

Human 2.0

When I worked for business-to-business software vendors, I often met companies that were simply out of date. They hadn’t caught up with the latest trends and buzzwords. They used inefficient processes and outdated business practices.

Why were they so far behind? Because that’s the way their software worked. They had loaded an early version of a software system (perhaps from my company) and never upgraded it. The system became comfortable. It was the way they had always done it. If it ain’t broke, don’t fix it.

I’ve often wondered if we humans don’t do the same thing. Perhaps we load the software called Human 1.0 during childhood and then just forget about it. It works. It gets us through the day. It’s comfortable. Don’t mess with success.

Fixing the problem for companies was easy: just buy my new software. But how do we solve the problem (if it is a problem) for humans? How do we load Human 2.0? What patches do we need? What new processes do we need to learn? What new practices do we need to adopt?

As a teacher of critical thinking, I’d like to think that critical thinking is one element of such an upgrade. When we learn most skills – ice skating, piano playing, cooking, driving, etc. – we seek out a teacher to help us master the craft. We use a teacher – and perhaps a coach – to help us upgrade our skills to a new level.

But not so with thinking. We think we know how to think; we’ve been doing it all our lives. We don’t realize that thinking is a skill like any other. If we want to get better at basketball, we practice. If we want to get better at thinking … well, we don’t really want to get better at thinking, do we? We assume that we’re good enough. If the only thinking we know is the thinking that we do, then we don’t see the need to change our thinking.

So how do we help people realize that they can upgrade their thinking? Focusing on fallacies often works. I often start my classes by asking students to think through the way we make mistakes. For instance, we often use short cuts – more formally known as heuristics – to reach decisions quickly. Most of the time they work – we make good decisions and save time in the process. But when they don’t work, we make very predictable errors. We invade the wrong country, marry the wrong person, or take the wrong job.

When we make big mistakes, we can draw one of two conclusions. On the one hand, we might conclude that we made a mistake and need to rethink our thinking. On the other hand, we might conclude that our thinking was just fine but that our political opponents undermined our noble efforts. If not for them, everything would be peachy. The second conclusion is lazy and popular. We’re not responsible for the mess – someone else is.

But let’s focus for a moment on the first conclusion – we realize that we need to upgrade our thinking. Then what? Well… I suppose that everyone could sign up for my critical thinking class. But what if that’s not enough? As people realize that there are better ways to think, they’ll ask for coaches, and teachers, and gurus.

If you’re an entrepreneur, there’s an opportunity here. I expect that many companies and non-profit organizations will emerge to promote the need and service the demand. The first one I’ve spotted is the Center for Applied Rationality (CFAR). Based in Berkeley (of course), CFAR’s motto is “Turning Cognitive Science Into Cognitive Practice”. I’ve browsed through their web site and read a very interesting article in the New York Times (click here). CFAR seems to touch on many of the same concepts that I use in my critical thinking class – but they do it on a much grander scale.

If I’m right, CFAR is at the leading edge of an interesting new wave. I expect to see many more organizations pop up to promote rationality, cognitive enhancements, behavioral economics, or … to us traditional practitioners, critical thinking. Get ready. Critical thinking is about to be industrialized. Time to put your critical thinking cap on.

The Mother Of All Fallacies

[Image caption: An old script, it is.]

How are Fox News and Michael Moore alike?

They both use the same script.

Michael Moore comes at issues from the left. Fox News comes from the right. Though they come from different points on the political spectrum, they tell the same story.

In rhetoric, it’s called the Good versus Evil narrative. It’s very simple. On one side we have good people. On the other side, we have evil people. There’s nothing in between. The evil people are cheating or robbing or killing or screwing the good people. The world would be a better place if we could only eliminate or neuter or negate or kill the evil people.

We’ve been using the Good versus Evil narrative since we began telling stories. Egyptian and Mayan hieroglyphics follow the script. So do many of the stories in the Bible. So do Republicans. So do Democrats. So do I, for that matter. It’s the mother of all fallacies.

The narrative inflames the passions and dulls the senses. It makes us angry. It makes us feel that outrage is righteous and proper. The narrative clouds our thinking. Indeed, it aims to stop us from thinking altogether. How can we think when evil is abroad? We need to act. We can think later.

I became sensitized to the Good versus Evil narrative when I lived in Latin America. I met more than a few people who were convinced that America is the embodiment of evil. They saw it as a country filled with greedy, immoral thieves and murderers sucking the blood of the innocent and good people of Latin America. I had a difficult time squaring this with my own experiences. Perhaps the narrative is wrong.

Rhetoric teaches us to be suspicious when we hear Good versus Evil stories. The world is a messy, chaotic, and random place. Actions are nuanced and ambiguous. People do good things for bad reasons and bad things for good reasons. A simple narrative can’t possibly capture all the nuances and uncertainties of the real world. Indeed, the Good versus Evil narrative doesn’t even try. It aims to tell us what to think and to ensure that we never, ever think for ourselves.

When Jimmy Carter was elected president, John Wayne attended his inaugural even though he had supported Carter’s opponent. Wayne gave a gracious speech. “Mr. President”, he said, “you know that I’m a member of the loyal opposition. But let’s remember that the accent is on ‘loyal’”. How I would love to hear anyone say that today. It’s the antithesis of Good versus Evil.

Voltaire wrote that, “Anyone who has the power to make you believe absurdities has the power to make you commit injustices.” The Good versus Evil narrative is absurd. It doesn’t explain the world; it inflames the world. Ultimately, it can make injustices seem acceptable.

The next time you hear a Good versus Evil story, grab your thinking cap. You’re going to need it.

(By the way, Tyler Cowen has a terrific TED talk on this topic that helped crystallize my thinking. You can find it here.)

Fallacy of Fallacies

[Image caption: It’s true!]

Let’s talk about logic for a moment. When you hear the word argument, you may think of a heated exchange of opinions. It’s emotional and angry. A logician would call this a quarrel rather than an argument. In the world of logic, an argument means that you offer reasons to support a conclusion.

An argument can be valid or invalid and sound or unsound. Here’s an example of an argument in a classic form:

Premise 1:      All women have freckles.

Premise 2:      Suellen is a woman.

Conclusion:     Suellen has freckles.

We have two reasons that lead us to a conclusion. In other words, it’s an argument. Is it a good argument? Well, that’s a different question.

Let’s look first at validity. An argument is valid if the conclusion flows logically from the premises. In this case, we have a major premise and a minor premise and – if they are true – the conclusion is inescapable. Suellen must have freckles. The conclusion flows logically from the premises. The argument is valid.

But is the argument sound? An argument is sound if it is valid and its premises are actually true. The second premise is verifiably true – Suellen is indeed a woman. But the first premise is not. All we have to do is look around. We’ll quickly realize that the first premise is false – not all women have freckles.

So, the argument is valid but unsound. One of the premises that leads to the conclusion is false. Can we safely assume, then, that the conclusion is also false? Not so fast, bub.

This is what’s known as the fallacy of fallacies. We often assume that, if there’s a fallacy in an argument, then the conclusion must necessarily be false. Not so. It means the conclusion is not proven. The fact that something is not proven doesn’t necessarily mean that it’s false. (Indeed, in technical terms, we’ve never proven that smoking causes cancer in humans).

Our example demonstrates the fallacy of fallacies. We agree that the argument is valid but not sound. One of the premises is false. Yet, if you know Suellen, you know that the conclusion is true. She does indeed have freckles. So even an unsound (or invalid) argument can result in a conclusion that’s true.
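
A few lines of code make the separation concrete. Here’s a minimal sketch (Python, with an invented roster of women – the data is illustrative, not real) that keeps validity, soundness, and the truth of the conclusion apart:

```python
# A toy world for the freckles argument. The roster is invented.

women = {
    "Suellen": {"freckles": True},
    "Alice":   {"freckles": False},
    "Beth":    {"freckles": True},
}

premise1 = all(p["freckles"] for p in women.values())  # All women have freckles.
premise2 = "Suellen" in women                          # Suellen is a woman.
conclusion = women["Suellen"]["freckles"]              # Suellen has freckles.

# The form is valid (if both premises held, the conclusion would have to),
# but soundness also requires the premises to be true in this world.
sound = premise1 and premise2

print(f"Premise 1 true?  {premise1}")    # False -- Alice has no freckles
print(f"Argument sound?  {sound}")       # False
print(f"Conclusion true? {conclusion}")  # True anyway
```

An unsound argument proves nothing about its conclusion; in this little world, the conclusion happens to be true all the same.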

What’s the moral here? There’s a big difference between not proven and not true. Something that’s not proven may well be true. That’s when you want to consider Pascal’s Wager.
