Pascal’s Wager and the Mediterranean Diet

Wanna bet?

I like to think of Blaise Pascal (1623–1662), the French mathematician, as the western world’s first practitioner of Twitter. His Pensées were brief, enigmatic thoughts about mathematics, religion, and philosophy. Collected and published after his death, they read like the tweets of the 17th century (though they were intended as a much more comprehensive defense of religion).

In the Pensées, Pascal made his famous wager. We all bet with our lives on whether God exists or not. We can live as if God exists and practice the traditional forms and virtues of religion. Or we can do the opposite and ignore our religious duties, assuming that God does not exist. If we live as if God exists and we’re right then the rewards are infinite. If we’re wrong, the loss is finite — indeed it’s quite small. Thus, Pascal argues, it’s only rational to live a pious life. The wager is heavily stacked to that side.
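For readers who like to see the arithmetic, here is a minimal expected-value sketch of the argument, in my own notation rather than Pascal’s: let p be whatever probability you assign to God’s existence, c the finite cost of a pious life, and g the finite pleasure you keep by skipping it.

\[
E[\text{live piously}] \;=\; p \cdot \infty \;-\; (1 - p)\,c \;=\; \infty \qquad \text{for any } p > 0
\]
\[
E[\text{ignore religion}] \;\le\; (1 - p)\,g \qquad \text{(finite at best)}
\]

However small p is, as long as it is greater than zero the infinite reward swamps every finite term. That is the sense in which the wager is heavily stacked.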

In today’s world, we don’t spend much time wagering on God’s existence (perhaps we should) but we make lots of bets that are much like Pascal’s. The cumulative effects are enormous.

For instance, consider the Mediterranean diet. The diet — which features olive oil, nuts, vegetables, and fish but not much red meat — has been on our radar for a number of years now. Epidemiologists observed that people who live near the Mediterranean have a much lower rate of heart disease than would be expected. Maybe it’s the diet. Or maybe it’s something else, like religion, culture, family structure, heredity, etc.

So the evidence for the positive health effects of the diet was an observed correlation. We could see that the diet was correlated (inversely) with heart disease but we couldn’t be sure that it caused the lower rates. Maybe a hidden, third factor was in play. Still, we could make a version of Pascal’s wager: eat as if the Mediterranean diet does reduce the risk of heart disease. If we’re right, we live longer. If we’re wrong … well, we’ve missed out on a few tasty bacon cheeseburgers. Would you take the bet?

Last week, the level of evidence changed dramatically. A five-year, randomized Spanish study of nearly 7,500 people was published. People who followed the Mediterranean diet had 30% fewer heart attacks, strokes, and deaths from heart disease than the control group. Since the study used the experimental method, we can now talk about cause and effect, not just correlation. Now will you take the bet?
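One caution about what “30% fewer” means, since relative and absolute risk are easy to conflate: the figure is a relative risk reduction, comparing event rates in the two groups.

\[
\text{RRR} \;=\; 1 - \frac{\text{event rate, diet group}}{\text{event rate, control group}} \;\approx\; 0.30
\]

So if, purely for illustration, 4 out of every 100 people in the control group had a major cardiac event over the five years, roughly 2.8 out of every 100 in the diet group did. The relative drop is impressive; the absolute drop is more modest, and both matter when you size up the bet.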

Of course, there’s still some doubt about the results. It’s only one study; it hasn’t been replicated yet. It was conducted in Spain. Maybe we wouldn’t get the same results in America. Maybe there were methodological or measurement errors. Still, the evidence seems pretty strong and it points toward a version of Pascal’s classic wager.

We all make versions of Pascal’s wager every day but we rarely think about them. Perhaps it’s time that we pay more attention. Perhaps it’s time to think about what levels of evidence we need before we take the bet. Is correlation enough or do we need to prove cause and effect? Life is uncertain but perhaps we can make it more comfortable by thinking — and betting — logically. While you’re pondering that, I’m going to drizzle some olive oil over a bowl of walnuts. Drop by if you’re hungry.

Culture — Would You Laugh At Your Boss?

We must be in a high PDI zone.

Would you make fun of your boss… to her face? If you’re from Denmark, you might. If you’re from Slovakia, probably not.

That’s one of the conclusions you might draw from the Power Distance Index (PDI), a measure of one of the “dimensions” of culture. Since the early 20th century, social scientists have worked to classify human cultures and measure how they differ from each other. The dimensions deal with fundamental concepts, like how we conceive of ourselves as individuals, the relationship between the individual and the group, how men and women relate to each other, and how we handle conflict. From these foundations, different observers have developed different numbers of cultural dimensions, ranging from a low of four to a high of nine.

Lately, I’ve been reading the work of Geert and Gert Jan Hofstede, a father-son team of cultural researchers from the Netherlands. In their book, Cultures and Organizations: Software of the Mind, they suggest that there are five cultural dimensions: 1) power distance; 2) individualism versus collectivism; 3) femininity versus masculinity; 4) uncertainty avoidance; 5) long-term versus short-term orientation. I’d like to look at all five of these — and their interrelationships — over the coming weeks. Today, let’s look at power distance. (By the way, one of the reasons I like the Hofstedes is that they relate their findings to the workplace. The last several chapters of their book offer practical advice on managing in a multicultural world.)

The Hofstedes define power distance as “the extent to which the less powerful … [citizens] … of a country expect and accept that power is distributed unequally.” They developed an index to measure power distance within a country and applied it to 74 different countries. The five countries with the highest PDI scores are Malaysia (PDI = 104), Slovakia (104), Guatemala (95), Panama (95), and the Philippines (94). The five with the lowest scores are German-speaking Switzerland (26), New Zealand (22), Denmark (18), Israel (13), and Austria (11). (The United States has a PDI of 40.)

Power distance affects cultures in myriad ways. Countries with low PDIs generally believe that “inequalities among people should be minimized.” On the other hand, those with high PDIs generally believe that “inequalities among people are expected and desired.” In low-PDI countries, “parents treat children as equals”; in high-PDI countries, “parents teach children obedience.”

In the workplace, differences between low- and high-PDI countries can be pronounced. In low-PDI countries, the workplace is generally decentralized with few supervisors. The ideal boss is a resourceful democrat and status symbols are generally frowned upon. In high-PDI countries, the situation is essentially reversed: the workplace is centralized with a large number of supervisors. The ideal boss is a benevolent autocrat and status symbols are normal and popular.

Here are two findings that struck me as very odd. In Europe and the Americas, countries with Romance languages had — on average — higher PDI scores than those with Germanic languages. In the popular imagination, Germanic countries are often perceived to be very hierarchical. The Hofstedes’ research suggests that the opposite is true. Austria (PDI = 11) has the lowest PDI of any of the 74 countries. German-speaking Switzerland has a PDI of 26, compared with 70 for French-speaking Switzerland. Germany itself has a score of 35.

The other oddity is that PDI scores tend to drop the farther north (or the farther south) you move from the equator. Tropical lands tend to have higher power distances than temperate lands. Here are scores for some of the more northern (and southern) countries: Denmark (18), New Zealand (22), Ireland (28), Sweden (31), Norway (31), Finland (33), Australia (36), Canada (39). The Hofstedes suggest some possible reasons for this distribution but their ideas seem a bit too pat to me. While I’m scratching my head as to the cause, it may be that we’re simply measuring language differences again. None of the countries I’ve just listed speaks a Romance language.
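If you wanted to untangle latitude from language, one crude first step would be to correlate PDI with distance from the equator and then rerun the numbers within each language family. Here is a rough sketch of that check in Python, using only the scores quoted in this post; the latitudes are approximate capital-city values I’ve added for illustration, and a real test would use the Hofstedes’ full 74-country table.

from math import sqrt

# PDI scores quoted in this post; latitudes are approximate capital-city values
# in degrees (negative = southern hemisphere), added here for illustration only.
data = {
    "Malaysia":      (104,   3),
    "Slovakia":      (104,  48),   # temperate latitude, high PDI: pulls against the trend
    "Guatemala":     ( 95,  15),
    "Panama":        ( 95,   9),
    "Philippines":   ( 94,  15),
    "United States": ( 40,  39),
    "Canada":        ( 39,  45),
    "Australia":     ( 36, -35),
    "Finland":       ( 33,  60),
    "Norway":        ( 31,  60),
    "Sweden":        ( 31,  59),
    "Ireland":       ( 28,  53),
    "New Zealand":   ( 22, -41),
    "Denmark":       ( 18,  56),
    "Austria":       ( 11,  48),
}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries needed."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

pdi_scores = [pdi for pdi, _ in data.values()]
distance_from_equator = [abs(lat) for _, lat in data.values()]

print(f"r(distance from equator, PDI) = {pearson(distance_from_equator, pdi_scores):.2f}")

A clearly negative number would support the pattern the Hofstedes describe; if it fades once you split the sample by language family, the language explanation looks stronger.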

So, what does it all mean? I’m still working on that. I do find it very interesting that the Nordic countries are clustered at the very low end of the PDI table while they’re at the very high end of the World Happiness Report and the Global Innovation Index. There’s something interesting in the state of Denmark. Think about that for a while. In the meantime, don’t laugh at your boss.

Critical Thinking. What’s That?

No, thanks. I don’t want my doc to mis-diagnose me.

I visited my doctor the other day. Expecting a bit of a wait, I took along Critical Thinking, a textbook for one of my courses. When I walked into the doctor’s office, he read the title and said, “Hmmm … critical thinking. What’s that?” I thought, “My doc just asked me what critical thinking is. This can’t be a good sign.”

I quelled my qualms, however, and explained what I teach in critical thinking class. He brightened up immediately and said, “Oh, that’s just like How Doctors Think.” I pulled out my Kindle and downloaded the book immediately. Understanding how doctors think might actually help me get better medical care.

So how do they think? Well, first they use shortcuts. They generally have way too much information to deal with and use rules of thumb called heuristics. Sound familiar? I’ve written several articles about rules of thumb and how they can lead us astray. (Just look for “thumb” in this website’s Blog Search box). So, the first answer is that doctors think just like us. Is that a good thing? Here are some errors that doctors commonly make:

Representation error — the patient is a picture of health. It’s not likely that those chest pains are a cause for concern. With this error, the doctor identifies a prototype that represents a cluster of characteristics. If you fit the prototype, fine. If not, the doctor may be diagnosing the prototype rather than you.

Attribution error — this often happens with negative stereotypes. The patient is disheveled and smells of booze. Therefore, the tremors are likely caused by alcohol rather than a hereditary disease that causes copper accumulation in the liver. That may be right most of the time but when it’s wrong, it’s really wrong.

Framing errors — I’ve read the patient’s medical charts and I see that she suffers from XYZ. Therefore, we’ll treat her for XYZ. The medical record forms a frame around the patient. Sometimes, doctors forget to step outside the frame and ask about other conditions that might have popped up. Sometimes the best approach is simply to say, “Let me tell you my story.”

Confirmation bias — we see things that confirm our beliefs and don’t see (or ignore) things that don’t. We all do it.

Availability bias — if you’re the 7th patient I’ve seen today and the first six all had the flu, there’s a good chance that I’ll diagnose you with flu, too. It just comes to mind easily; it’s readily available.

Affective bias — the doctor’s emotions get in the way. Sometimes these are negative emotions. (Tip: if you think your doctor feels negatively about you, get a new doctor). But positive emotions can also be harmful. I like you and I don’t want to cause you pain. Therefore, I won’t order that painful, embarrassing test — the one that might just save your life.

Sickest patient syndrome — doctors like to succeed just like anyone else does. With very sick patients, they may subconsciously conclude that they can’t be successful … and do less than their best.

The list goes on … but my space doesn’t. When I started the book, I thought it was probably written for doctors. But the author, Jerome Groopman, says it’s really for us laypeople. By understanding how doctors think, we can communicate more effectively with our physicians and help them avoid mistakes. It’s a good thought and a fun read.

The Structure of Predictions

The cost of my services may go down.

I like to think about the future. So, in the past, I’ve written about scenario planning, prediction markets, resilience, and expert predictors. What have I learned in all this? Mainly, that experts regularly get it wrong. Also, that experts move in herds — one expert influences another and they begin to mutually reinforce each other. In the worst cases, we get manias, whether it’s tulip mania in 17th century Holland or mortgage mania in 21st century America. Paying ten times your annual income for a tulip bulb in 1637 is really not that different from Bank of America paying $4 billion for Countrywide.

I’ve also learned that you can (sometimes) make a lot of money by betting against the experts. The clearest description of “shorting” the experts is probably The Big Short by Michael Lewis.

I’m also forming the opinion that we call people “experts” because they study problems closely. They’re analysts; they study the details. Like college professors, they know a lot about a little. That may make them interesting dinner partners (or not) but does it make them better predictors of the future?

I’m thinking that the experts’ “close read” makes them worse predictors of the future, not better. Why? Because they go inside the frame of the problems. They pursue the internal logic of the story. Studying the internal logic of a situation can be useful but, as I pointed out in a recent article, it can also lead you astray. In addition to the internal logic, you need to step outside the frame and study the structure of the problem. If you stay inside the frame, you may well understand the internal dynamics of the issue. But, in many cases, the external dynamics are more important.

The case that I’ve been following is the cost of healthcare in the United States. The experts all seem to be pointing in the same direction: healthcare costs will continue to skyrocket and ultimately bankrupt the country. The experts are pointing in one direction so, as in the past, I think it’s useful to look in the other direction and predict that healthcare costs won’t climb as rapidly as in the past or may even go down.

Here are two interesting pieces of evidence that suggest that the experts may be wrong. The first is a report from the Altarum Institute which notes that 2012 represented the “…fourth consecutive year of record-low growth [in healthcare spending] compared to all previous years in the 50-plus years of official health spending data.” Granted, there’s a debate as to whether the slowing growth is caused by the recession or by structural changes but the experts (yikes!) suggest that at least some of the shift is structural.

The second piece of evidence is a report by Matthew Yglesias in Slate that documents the dramatic decline in spending for healthcare construction. Spending to construct new hospitals dropped precipitously in 2008 and has stayed low, even during the recovery. As Yglesias points out, construction spending is “the closest thing we have to a real-time forecast of what the future is going to look like.”

So, are the experts wrong? As Chou En-lai liked to say, it’s too soon to tell. But let’s keep an eye on them. Otherwise, we could be framed.

 

Thinking Outside the Frame

Make way!

An ambulance, racing to the hospital, siren blaring, approaches an intersection. At the same time, from a different direction, a fire truck, racing to a fire, approaches the same intersection. From a third direction, a police car screeches toward the same intersection, responding to a burglary-in-progress call. From a fourth direction, a U.S. Mail truck trundles along to the same intersection. All four vehicles arrive at the same time at the same intersection controlled by a four-way stop sign. Who has the right of way?

The way I just told this story sets a frame around it that may (or may not) guide your thinking. You can look at the story from inside the frame or outside it. If you look inside the frame, you’ll pursue the internal logic of the story. The three emergency vehicles are all racing to save people — from injury, from fire, or from burglary. Which one of those is the worst case? Which one deserves to go first? It’s a tough call.

On the other hand, you could look at the story outside the frame. Instead of pursuing the internal logic, you look at the structure of the story. Rather than getting drawn into the story, you look at it from a distance. One of the first things you’ll notice is that three of the vehicles belong to the same category — emergency vehicles in full crisis mode. The fourth vehicle is different — it’s a mail truck. Could that be a clue? Indeed it is. The “correct” answer to this somewhat apocryphal story is that the mail truck has the right of way. Why? It’s a federal government vehicle and takes precedence over the other, local government vehicles.

In How Doctors Think, Jerome Groopman describes how doctors think inside the frame. A young woman is diagnosed with anorexia and bulimia. Many years later, she’s doing poorly and losing weight steadily. Her medical file is six inches thick. Each time she visits a new doctor, the medical file precedes her. The new doctor reads it, discovers that she’s bulimic and anorexic, and treats her accordingly. Finally, a new doctor sets aside her record, pulls out a blank sheet of paper, looks at the woman, and says, “Tell me your story.” In telling her own story, the woman gives important clues that lead to a new diagnosis — she’s gluten-intolerant. The new doctor stepped outside the frame of the medical record and gained valuable insights.

According to Franco Moretti, similar frames exist in literature — they’re called books. Traditional literary analysis demands that you read books and study them very closely. Moretti, an Italian literary scholar, calls this close reading — it’s studying literature inside the frame set by the book. Moretti advocates a different approach that he calls distant reading: “understanding literature not by studying particular texts, but by aggregating and analyzing massive amounts of data.” Only by stepping back and reading outside the frame can we understand “…the true scope and nature of literature.”

In each of these examples we have a frame. In the first story, I set the frame for you. It’s a riddle and I was trying to trick you. In the second story, the patient’s medical record creates the frame. In the third, the book sets the frame. In each case, we can enter the frame and study the problem closely or we can step back and observe the structure of the problem. It’s often a good idea to step inside the frame — after all, you usually do want your doctor to read your medical file. But it’s also useful to step outside the frame, where you can find clues that you would never find by studying the internal logic of the problem. In fact, I think this approach can help us understand “big” predictions like the cost of healthcare. More on that next Monday.

 

 
