Linearly Nonlinear

Take two!

If taking one vitamin pill per day is good for you, then taking two pills per day should be better, shouldn’t it? If two is better than one, then three should be better than two.

If we continue to follow this logic, we’ll soon be taking a whole bottle of vitamins every day. Why don’t we? Because of two limiting factors:

  • Diminishing returns – each additional pill gives less benefit than the previous pill;
  • Negative returns – beyond a certain point, each additional pill makes things worse.

It’s easy to figure this out for simple items like vitamin pills. But, in more complex decisions, we tend to have a linear bias. If there’s a linear relationship between two variables, we assume that the line continues forever.

Let’s take schools, for instance. In the United States, we’re obsessed with measuring student performance in schools and tracking it over time. We create performance tables to identify the schools that provide the best education and those that provide the worst.

You may notice that small schools are often at the top of the charts. You might conclude that school size and performance are linearly related: the smaller the school, the better the performance. It might seem wise, then, to build more small schools and fewer large schools. Unfortunately, you’re suffering from linear bias.

To find the error, just look at the bottom of the performance charts. You’ll probably find many small schools there as well. Small schools dominate the top and bottom of the chart; large schools tend to fall into the middle range.

What’s going on? It’s the variability of small samples. If you flip a coin ten times, you might get eight tails. If you flip the same coin a thousand times, it’s very unlikely that you’ll get 800 tails. With larger samples, things tend to even out.
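If you want to put numbers on that intuition, here’s a minimal Python sketch (my own illustration; the function name, trial count, and threshold are arbitrary choices, not from the original post) that estimates how often small and large samples produce extreme results:

```python
import random

random.seed(1)  # for reproducibility

def extreme_rate(n_flips, n_trials=10_000, threshold=0.8):
    """Estimate how often tails comes up at least `threshold`
    of the time when flipping a fair coin `n_flips` times."""
    extreme = 0
    for _ in range(n_trials):
        tails = sum(random.random() < 0.5 for _ in range(n_flips))
        if tails / n_flips >= threshold:
            extreme += 1
    return extreme / n_trials

print(f"10 flips:   {extreme_rate(10):.4f}")    # roughly 0.05 -- 8+ tails is unremarkable
print(f"1000 flips: {extreme_rate(1000):.4f}")  # 0.0000 -- 800+ tails essentially never happens
```

Small samples land in the extremes all the time; large samples almost never do.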

The same happens in schools. Larger schools have larger samples of students and their performance tends to even out. Performance in small schools is much more variable, both upward and downward. The relationship between school size and performance is a curve, not a straight line.

For the same reason, I was briefly (but gloriously) the most accurate shooter on my high school basketball team. After three games, I had taken only one shot, but I made it! In other words, I made 100% of my shots – the top of the performance chart. But what if I had missed that one shot? My accuracy would have fallen to 0%, the very bottom of the chart. With one flip of my wrist, I could have gone from best to worst. That’s the volatile variability of small samples.

A straight line is the simplest relationship one can find between two variables. I generally believe that simpler is better. But many relationships simply aren’t simple. They change in nonlinear ways. By trying to make them linear, we over-simplify and run the risk of significant mistakes. Here are two:

  • If an hour of exercise is good for you, then two hours must be better. The assumption is that more exercise equals better health. It’s a linear relationship. But is it really? I have friends who exercise for hours a day. I worry for their health (and sanity).
  • If cutting taxes by 10% is good for the economy, then cutting taxes by 20% must be better. We assume that lower taxes stimulate the economy in a linear fashion. But, at some point, we must get negative returns.

What’s the bottom line here? If someone argues that the relationship between two variables is a straight line, take it with a grain of salt. And if one grain of salt is good for you, then two grains should be better. And if two grains are better, then three grains … well, you get the picture.

(I adapted the school example from Jordan Ellenberg’s excellent new book, How Not to Be Wrong: The Power of Mathematical Thinking.)

Survivorship Bias

Protect the engines.

Are humans fundamentally biased in our thinking? Sure, we are. In fact, I’ve written about the 17 biases that consistently crop up in our thinking. (See my earlier posts.) We’re biased because we follow rules of thumb (known as heuristics) that are right most of the time. But when they’re wrong, they’re wrong in consistent ways. It helps to be aware of our biases so we can correct for them.

I thought my list of 17 provided a complete accounting of our biases. But I was wrong. In fact, I was biased. I wanted a complete list, so I jumped to the conclusion that my list was complete. I made a subtle mistake: I assumed that I didn’t need to search any further. But, in fact, I should have continued my search.

The latest example I’ve discovered is called the survivorship bias. Though it’s new to me, it’s old hat to mathematicians. In fact, the example I’ll use is drawn from a nifty new book, How Not to Be Wrong: The Power of Mathematical Thinking by Jordan Ellenberg.

Ellenberg describes the problem of protecting military aircraft during World War II. If you add too much armor to a plane, it becomes a heavy, slow target. If you don’t add enough armor, even a minor scrape can destroy it. So what’s the right balance?

American military officers gathered data from aircraft as they returned from their missions. They wanted to know where the bullet holes were. They reasoned that they should place more armor in those areas where bullets were most likely to strike.

The officers measured bullet holes per square foot. Here’s what they found:

  Section of plane     Bullet holes per square foot
  Engine               1.11
  Fuel system          1.55
  Fuselage             1.73
  Rest of plane        1.80

Based on these data, it seems obvious that the fuselage is the weak point that needs to be reinforced. Fortunately, they took the data to the Statistical Research Group, a stellar collection of mathematicians organized in Manhattan specifically to study problems like these.

The SRG’s recommendation was simple: put more armor on the engines. Their recommendation was counter-intuitive to say the least. But here’s the general thrust of how they got there:

  • In the confusion of air combat, bullets should strike almost randomly. Bullet holes should be more-or-less evenly distributed. The data show that the bullet holes are not evenly distributed. This is suspicious.
  • The data were collected from aircraft that returned from their missions – the survivors. What if we included the non-survivors as well?
  • There are fewer bullet holes on engines than one would expect. There are two possible explanations: 1) bullets don’t strike engines for some unexplained reason; or 2) bullets that strike engines tend to destroy the airplane – those planes don’t return and are not included in the sample.

Clearly, the second explanation is more plausible. Conclusion: the engine is the weak point and needs more protection. The Army followed this recommendation and probably saved thousands of airmen’s lives.
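The inversion is easy to state mechanically. Here’s a tiny Python sketch of the two readings of the table above (the dictionary just restates the table; the “recommendation” logic is my own simplification of the SRG’s reasoning):

```python
# Bullet holes per square foot on *returning* aircraft (the table above)
holes_per_sqft = {
    "engine": 1.11,
    "fuel system": 1.55,
    "fuselage": 1.73,
    "rest of plane": 1.80,
}

# Naive reading: armor the areas where the survivors took the most hits.
naive = max(holes_per_sqft, key=holes_per_sqft.get)

# Survivorship-corrected reading: the missing bullet holes are on the
# planes that never came back, so armor the area with the *fewest* holes.
corrected = min(holes_per_sqft, key=holes_per_sqft.get)

print("Naive recommendation:    ", naive)      # rest of plane
print("Corrected recommendation:", corrected)  # engine
```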

It’s a colorful example but may seem distant from our everyday experiences. So, here’s another example from Ellenberg’s book. Let’s say we want to study the ten-year performance of a class of mutual funds. We select data on all the mutual funds in the category from 2004 as the starting point, then collect similar data from 2014 as the end point. We calculate the percentage growth and reach some conclusions. Perhaps we conclude that this is a good investment category.

What’s the error in our logic? We’ve left out the non-survivors – funds that existed in 2004 but shut down before 2014. If we include them, overall performance scores may decline significantly. Perhaps it’s not such a good investment after all.
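A quick simulation makes the effect concrete. The numbers below are invented purely for illustration; the point is only the direction of the bias:

```python
import random

random.seed(42)

# Hypothetical ten-year growth multiples for 100 funds (1.0 = flat).
funds = [random.gauss(mu=1.5, sigma=0.6) for _ in range(100)]

# Suppose funds that lost more than ~10% were shut down before 2014.
survivors = [r for r in funds if r > 0.9]

def avg(xs):
    return sum(xs) / len(xs)

print(f"All funds (incl. closed): {avg(funds):.2f}x")      # the true picture
print(f"Survivors only:           {avg(survivors):.2f}x")  # looks rosier
```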

What’s the lesson here? Don’t jump to conclusions. If you want to survive, remember to include the non-survivors.

Memo To Self

Me or not-me?

Let’s say you’re an army general and you want to move 1,000 troops from Point A to Point B. You’ll probably send out two types of orders. First, you’ll send direct orders to your officers, telling them how, when, and where to move.

Second, you’ll also send advisories to other units who need to be aware of your movements, including commissary, quartermaster, and transportation units. Though they don’t report directly to you, they need to know what your troops are doing. Otherwise, supplies, food, and ammunition will be in the wrong place at the wrong time. Chaos ensues.

According to Patricia Churchland in her brief-but-insightful book, Touching a Nerve, our brains essentially behave the same way. Let’s say your brain tells your eyes to move to the right. That’s pretty simple. But you also need to let the rest of your brain know what’s happening.

When your eyes move right, your brain could interpret it in at least two ways:

1) My eyes just moved to the right; or

2) The whole world just moved to the left.

The second interpretation is scary. The world moves in an unpredictable manner. You didn’t cause the movement. So, what did? Is someone playing a trick on you? Are malevolent spirits up to no good?

You can get an inkling of how this feels just by sitting in a car. If the car next to you rolls forward, you may feel that you’re rolling backwards. It’s a startling and unsettling experience until you realize what’s actually happening. Now imagine that all of your actions feel the same way. Your arm moves but not because of you. If you didn’t cause it, who or what did? Is it really your arm or an impostor?

We normally solve this problem by sending a memo to ourselves known as the efference copy. In essence, it’s a copy of the direct order sent to the muscle(s) in question. It lets the relevant portions of your brain know that you’re causing the action. It explains what’s going on. The world is not acting on you. You’re acting on the world.

Churchland speculates that problems with the efference copy could be at the root of many mental disorders. (Churchland is not arguing that this is proven, only that it’s fertile ground for research.) A simple example is that we (normally) can’t tickle ourselves. We know – via the efference copy – that we’re the ones taking the action. We’re making something happen. When other people tickle us, there is no efference copy. Something is happening to us. On the other hand, people with efference copy problems can indeed tickle themselves. It’s as if something is happening to them.

Similarly, we all hear voices in our heads. But most of us realize that the voice is our own. What if you didn’t? Whose voice would it be? A dead relative? God?

Ultimately, this is a question of me versus not-me. Most of us have a pretty clear idea of what me consists of. Even very young infants have a pretty clear idea of what their boundaries are. We learn to send memos to ourselves very early on. For some people, however, the memo never arrives. Chaos ensues.

Are Your Kith Couth?

Get some new kith.

Are you uncouth?

If so, it’s probably because your kith are not doing their job properly. It’s not your fault. You’re just running with the wrong crowd.

As Alex Pentland has pointed out, the words kith and couth are very much related. One is the input; the other is the output. Your kith – as in kith and kin – are close friends and neighbors that form a fairly cohesive group. Your kith – along with your kin – are the people who teach you how to behave couthly. If you’re uncouth, you might want to choose some new kith.

Pentland is generally regarded as the founder of social physics, which (in my opinion) is an old idea that has been re-energized with big data and mathematical modeling.

The old idea is simply that the people around us influence our behavior. My mother clearly understood this when she told me, “Don’t run with that crowd. They’ll get you in trouble.” It’s also why you shouldn’t have a heart attack in a crowd. It’s also why you’re better off alone when shot down behind enemy lines.

But how much do the people around us influence our behavior? How much do we decide as individuals and how much is decided for us as members of a group? Are we individuals first and social animals second? Or vice-versa?

This is where Pentland and the social physicists come in. Using mathematical models and tracing communications via mobile phones, they have started to quantify the issue.

For instance, Pentland and his colleagues studied income distribution in 300 cities in the United States and Europe. They concluded that, “variations in the pattern of communication accounted for almost all of the differences in average earnings – much more important than variations in education or class structure.” The more you share ideas, the more rapidly your income grows. Yet another advantage of living in cities.

Pentland also experiments with incentives. Let’s say you want to incent me to lose weight. You could pay me a bonus for each pound I lose. Or you could pay the bonus to a close friend of mine, while paying me nothing. Which works better? According to the social physicists, paying my friend works four times better than paying me.

The social physicists demonstrate over and over again that it’s the sharing of ideas that counts. Creativity in isolation generates little to no benefit. It’s only by putting creativity in circulation that we gain. It even works for financial traders. Pentland studied 10 million trades by 1.6 million users of a financial trading site. “He found that traders who were isolated from others or over-connected did worse than those who struck a balance. The former group was deprived of information and the latter became stuck in an echo chamber.”

What’s it all mean? First and foremost, choose your friends wisely. Pentland concludes that, “The largest single factor driving adoption of new behaviours was the behaviour of peers. Put another way, the effects of this implicit social learning were roughly the same size as the influence of your genes on your behaviour, or your IQ on your academic performance.”

The Brighter Side of Spite

In the dumpster?!

In October 2013, a Boulder, Colorado man took some half a million dollars out of savings, converted it to gold and silver bars, and threw them in a dumpster. What would account for such behavior? Spite. After a bitter divorce, the man didn’t want his ex-wife to get any of the money.

Spite has a long history. As Natalie Angier points out, spite is the driving force behind the Iliad. Achilles wants revenge on Agamemnon, even though it will be very painful to Achilles as well.

Spite is similar to altruism but with a different purpose. An altruistic person pays a personal price to do something helpful to another person. A spiteful person pays a personal price to do something hurtful to another person.

Spitefulness sometimes feels good. You’re getting even; you’re teaching the other person a lesson. But it rarely does any good. Does the other person really learn a lesson – other than to despise you? With spite, both parties lose. So, why does spitefulness stick around?

It could be a form of altruistic punishment. Altruism isn’t always positive for everyone concerned. You might punish somebody — and pay a price to do so — in order to bring a greater good to a larger community. In this sense, altruistic punishment is simply spite for the greater good.

A 2008 study by Karla Hoff used a “trust game” to probe this phenomenon. In the game, trusting players can earn more money by giving away money. But a “free rider” (also known as an opportunist) can take advantage of the trusting player, hoard the money, and come out ahead. The game includes an “enforcer” who can choose among multiple options, including various punishments for the free rider.

Punishing the opportunist costs the enforcer. Still, in many cases enforcers decided to do just that. By spiting the free rider, the enforcer adds a cost to anti-social behavior. As opportunism becomes more costly, it also becomes less pervasive. Ultimately, the enforcer’s spite encourages cooperation. It’s good for the community even though it hurts the enforcer. (This was a complex study, and altruistic punishment varied by culture and by the social status of the various players.)
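The payoff logic behind altruistic punishment is easy to sketch. The toy Python model below uses payoff numbers I’ve invented for illustration (they are not from Hoff’s study): free riding beats cooperation until punishment becomes likely enough, at which point cooperation wins even though the enforcer pays a price.

```python
# Illustrative per-round payoffs (my own assumptions, not from the study)
COOPERATE_PAYOFF = 3   # what a trusting player earns in a cooperative round
FREE_RIDE_PAYOFF = 5   # what a free rider earns by hoarding, if unpunished
PUNISHMENT = 4         # fine imposed on a punished free rider
ENFORCER_COST = 1      # what the enforcer pays to impose the fine

def expected_payoffs(punish_prob):
    """Expected payoff per strategy, given the probability
    that a free rider gets punished by an enforcer."""
    cooperator = COOPERATE_PAYOFF
    free_rider = FREE_RIDE_PAYOFF - punish_prob * PUNISHMENT
    enforcer = COOPERATE_PAYOFF - punish_prob * ENFORCER_COST
    return cooperator, free_rider, enforcer

for p in (0.0, 0.5, 1.0):
    c, f, e = expected_payoffs(p)
    print(f"punish_prob={p:.1f}: cooperator={c}, free_rider={f:.1f}, enforcer={e:.1f}")

# With no punishment, free riding wins (5 > 3). With reliable punishment,
# it drops to 1 and cooperation wins -- the enforcer still pays a price,
# but the community comes out ahead: spite for the greater good.
```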

More recently, Rory Smead and Patrick Forber used an “ultimatum game” to study spite and fairness. In some versions of the game, “gamesmen” emerge who make only unfair offers. Other players will spite the gamesman. Even though they pay a cost in the short run, fair players who spite the gamesman can benefit in the long run. Indeed, “Fairness actually becomes a strategy for survival in this land of spite.”

How do you measure spitefulness? David Marcus and his colleagues have developed a 17-item Spitefulness Scale “…to assess individual differences in spitefulness.” They then applied it across a large random sample of college students and adults. They found (among many other things) that men are generally more spiteful than women and young people are more spiteful than older people. Spitefulness is positively correlated with aggression and narcissism but negatively correlated with self-esteem. The researchers now plan to use the scale to predict how different players will act in trust and ultimatum games.

I’ve previously written about seemingly “good things” that produce bad outcomes. Spite is a good example of a “bad thing” that can produce good outcomes. Not always and not in all situations, but more often than we might guess. It’s useful to keep in mind that, if something exists, it often does so for a good reason.
