To answer the question, you’ll need to do a fair amount of research. You might dig through police reports, census data, city government publications, and so on. It’s a lot of work.
But our brains don’t like to work. As Daniel Kahneman writes, “Thinking is to humans as swimming is to cats. They can do it, but they prefer not to.”
So, instead of answering the original question, we substitute a simpler question: How much crime can I remember in my neighborhood?
If we can remember a lot of crime – if it’s top of mind — we’ll guess that our neighborhood has a high crime rate. If we can’t remember much crime, we’ll guess that we have a low crime rate. We use our memory as a proxy for reality. It’s simple and probably not wholly wrong. It’s good enough.
Let me ask you another simple question: How dangerous is coronavirus?
It’s a tough question. We can’t possibly know the “right” answer. Even the experts can’t figure it out. So, how does our mind work on a tough question like this?
First, we use our memory as a proxy for reality. How top of mind is coronavirus? How available is it to our memory? (This, as you might guess, is known as the availability bias). Our media is saturated with stories about coronavirus. We see it every day. It’s easy to recall from memory. Must be a big deal.
Second, the media will continue to focus on coronavirus for several more months (at least). In the beginning, the media focused on the disease itself. Now, the media is more likely to focus on secondary effects – travel restrictions, quarantines, etc. Soon, the media will focus on reactions to the virus. Protesters will march on Washington demanding immediate action to protect us. The media will cover it.
The media activity is known as an availability cascade. The story keeps cascading into new stories and new angles on the same old story. The cascade keeps the story top of mind. It remains easily available to us. When was the last time we had a huge availability cascade? Think back to 2014 and the Ebola crisis. Sound familiar?
Third, our minds will consider how vivid the information is. How scary is it? How creepy? We remember vicious or horrific crimes much better than we remember mundane crimes like Saturday night stickups. How vivid is coronavirus? We see pictures everyday of workers in hazmat suits. It’s vivid.
Fourth, what are other people doing? When we don’t know how to act in a given situation, we look for cues from our fellow humans. What do we see today? Pictures of empty streets and convention centers. We read that Chinatown in New York is empty of tourists. People are afraid. If they’re afraid, we probably should be, too.
Fifth, how novel is the situation? We’re much more afraid of devils we don’t know than of devils that we do know. The coronavirus – like the Ebola virus before it – is new and, therefore, unknowable. Health experts can reassure us but, deep in our heart of hearts, we know that nobody knows. We can easily imagine that it’s the worst-case scenario. It could be the end of life as we know it.
Sixth, is it controllable? We want to think that we can control the world around us. We study history because we think that knowing the past will help us control the future. If something scary is out of our control, we will spare no expense to bring it back under control. Even a small scare – like the Three Mile Island incident – can produce a huge reaction. At times, it seems that the cure may be worse than the disease.
So, what to do? First, apply some contextual thinking – both current and historical. Remember that you're much more likely to succumb to plain old ordinary flu than to be infected by coronavirus. So, get a flu shot. Then do what the old British posters from World War II told us all to do: Keep calm and carry on.
I recently saw an ad for Progressive Insurance that says, “Drivers who save with Progressive, save $796 on average.”
Now I like Progressive. And I love Flo. So, I’m sure that the statement is true. I’m sure it’s based on fact.
But it also entails a logical fallacy. If you don’t spot the fallacy, you may easily assume that the average savings for all drivers who switch to Progressive is $796. That would be a mistake.
This is a good example of the survivorship fallacy. We only examine cases that “survive” a certain threshold. In this case, the threshold is drivers who save. What about drivers who didn’t save?
Let’s say that we have 1,000 drivers who saved money. In fact, they saved a total of $796,000. On average, they saved $796 each.
Now let’s say that another 1,000 drivers saved nothing. Now we have 2,000 drivers who saved a total of $796,000. On average, they saved $398 each.
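The arithmetic above is easy to sketch in a few lines of code. This is a minimal illustration using the hypothetical figures from the example (1,000 drivers saving $796 each, and 1,000 saving nothing):

```python
# Hypothetical figures from the example above.
savers = [796] * 1000       # 1,000 drivers who saved, $796 each
non_savers = [0] * 1000     # 1,000 drivers who saved nothing

# The advertised number: the average among "survivors" only.
avg_among_savers = sum(savers) / len(savers)          # 796.0

# The honest number: the average across all drivers.
all_drivers = savers + non_savers
avg_among_all = sum(all_drivers) / len(all_drivers)   # 398.0

print(avg_among_savers, avg_among_all)
```

The only difference between the two averages is the denominator: who counts as a "driver." Choosing the denominator after filtering on the outcome is the survivorship fallacy in one line.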
When we consider those people (or cases) that didn’t survive the threshold, the numbers change dramatically. You might hear an investment company say, “Investors who have stayed with us for ten years, made an average of 7.3% per year.” The threshold is stayed with us for ten years. Your question should be, “Well, what about those who didn’t stay for ten years?”
The survivorship fallacy doesn’t just affect numbers; it also affects qualities. Let’s say a prominent management journal publishes an article that proclaims, “The Ten Most Innovative Companies In The World Do These Three Things.” The threshold for selection is the ten most innovative companies (however that is measured). It’s quite possible that many other companies do the same three things but aren’t nearly as innovative. Since they didn’t survive the selection criterion, however, we don’t consider them.
What’s the moral? When you see an ad, put your critical thinking cap on. You’re going to need it.
That’s the upshot of a study recently published in American Psychologist. The authors, Shigehiro Oishi, Minkyung Koo, and Nicholas Buttrick, correlated neighborhood walkability with intergenerational upward social mobility. The basic finding: kids who grow up in walkable neighborhoods are more likely to move upward – as compared to their parents – than kids who don’t live in walkable neighborhoods.
The authors begin by noting that “Although upward mobility is generally in decline in the United States … it is easier to get ahead in some parts of the United States than in others.” Kids growing up in Pittsburgh, for instance, are much more likely to rise from the bottom 20% (as kids) to the top 20% (as adults) than are kids growing up in Charlotte, North Carolina.
Why would that be? Previous research had identified five factors associated with socioeconomic fluidity for a given area. These are “1) less residential segregation, 2) less income inequality, 3) better primary schools, 4) greater social capital, and 5) greater family stability.” (Social capital is a measure of community participation, including the proportion of people in a given area who vote, volunteer, and otherwise engage in community activities).
Oishi, Koo, and Buttrick accept the “Five Factors” and ask an additional question: does the walkability of a neighborhood also contribute to social fluidity? The authors conducted four different studies to answer this question. Here are some of the top-level findings.
This study raises bigger, broader questions as well. Numerous commentators have noted that upward mobility in the United States has declined precipitously over the past 50 to 75 years. Baby boomers may be the last generation to do broadly better than their parents.
This time frame corresponds to the growth of suburbs and our increasing dependence on cars. We can surmise that more people today live in non-walkable areas than they did, say, in 1950. Perhaps this migration explains why upward mobility is declining. As we spread out horizontally, we grow isolated and have less sense of belonging. Though the automobile is a vehicle for geographic mobility, it may well be an obstacle to social mobility.
In the course of writing this article, I discovered a great website: walkscore.com. The site provides walkability ratings on a scale of 0 to 100. For instance, our home in Denver gets a walkability rating of 51. We can walk to some restaurants and are close to some pretty good public transportation. By contrast, our little apartment in Brooklyn gets a walkability score of 99. We could easily live there without a car. Check it out. It may change the way you view your neighborhood.
All humans want to connect the dots. We want to explain why something happened – or will happen — by linking events through time. A caused B caused C and that will cause D. When we do this in the past, we call it history. When we project it into the future, we call it politics.
We want to connect the dots because we deeply desire a sense of control. If we can explain why something happened in the past, we believe that we can control it in the future. If X causes Y and we don’t want Y to happen again in the future, then we can work very hard to eliminate X. We can control the future because we can explain the past in mechanistic terms.
The past, however, is a very rich source of causes and effects. It may be that X causes Y in some circumstances. In other cases, perhaps P causes Y. In still other cases, the combination of X, P, and Z causes Y – but only if X, P, and Z occur in a certain order. If we look hard enough, we can use history to prove anything we want. Liberal historians find liberal causes. Conservative historians find conservative causes. To paraphrase Ernest Rutherford, this isn’t physics, it’s stamp collecting. One side collects red stamps; the other side collects blue stamps.
We often underestimate the role of chance in the shaping of events. As Hans Zinsser pointed out in Rats, Lice, and History, a lot of stuff happens by accident and stupidity. The right person is in the right place at the right time and we have a victory. The right person is not in the right place at the right time and we have a tragedy. Stuff happens for no apparent reason.
Yet we still have a need for control. So we make stuff up. The fancy term for this is confabulation. Wikipedia defines confabulation as a “memory error defined as the production of fabricated, distorted, or misinterpreted memories about oneself or the world, without the conscious intention to deceive.” It’s not a lie; it’s an illusion.
We used to think that confabulation was a sign of mental illness. Today, we believe that every human does it. It’s a simple way to deal with a reality that we can’t explain or control.
The French philosopher, Henri Bergson, developed the related idea of “retrospective illusion”. As we “…consider our actions in the past, we have the illusion that they could not have developed in any other way. At the moment, however, our actions seem indeterminate.” We see the past as an eternal chain of causes and effects that could not have happened in any other way. According to Bergson, it’s an illusion. We see the path of history clearly. What we don’t see is how it might have lurched in a different direction because of some random event. (Bergson’s concept is also known as retrospective determinism).
Even the best histories by the best thinkers must necessarily omit most of reality. As Mark Twain wrote, “In the real world, the right thing never happens in the right place and the right time. It is the job of journalists and historians to make it appear that it has.” We simplify the real world so we can comprehend it and control it. As we simplify, we also make mistakes. Perhaps Hegel was right: “History teaches us nothing except that it teaches us nothing.”
Let’s say we have an election and 20 precincts report their results. Here’s the total number of votes cast in each precinct:
3271 2987 2769 3389
2587 3266 4022 4231
3779 3378 4388 5327
2964 2864 2676 3653
3453 4156 3668 4218
Why would you suspect fraud?
Before you answer that, let me ask you another question. Would you please write down a random number between one and 20?
Asking you to write down a random number seems like an innocent request. But the word “random” invokes some unusual behavior. It turns out that we all have in our minds a definition of “random” that’s not quite … well, random. Does the number 17 seem random to you? Most people would say, “Sure. That’s pretty random.” Do the numbers 10 and 15 seem random to you? Most people would say, “No. Those aren’t random numbers.”
Why do we have a bias against 10 and 15? Why do we say they aren’t random? Probably because we often round our numbers so that they end in zeros or fives. We say, “I’ll see you in five minutes (or 10 minutes or 15 minutes)”. We rarely say, “I’ll see you in 17 minutes”. In casual conversation, we use numbers that end in zeros or fives far more often than we use numbers that end in other digits. Because we use them frequently, they seem familiar, not random.
So, if we want numbers to look random – as we might in a fraud – we’ll create numbers that fit our assumptions of what random numbers look like. We’ll under-represent numbers that end in fives and zeros and over-represent numbers that end in sevens or threes or nines. But if the numbers are truly random, then all the digits zero through nine should be equally represented.
Now look again at the reported numbers from the precincts. What’s odd is what’s missing: none of the twenty numbers ends in five or zero. If the numbers were truly random, we would expect – in a list of 20 – about two numbers to end in zero and another two to end in five, on average. The precinct numbers are suspicious. Somebody was trying to make the numbers look random but tripped over their own assumptions about what random numbers look like.
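The check described above is easy to automate. Here is a minimal sketch that tallies the last digit of each of the twenty precinct totals listed earlier; with truly random last digits, each digit 0–9 should appear about 20 / 10 = 2 times:

```python
# The 20 reported precinct totals from the example.
totals = [3271, 2987, 2769, 3389,
          2587, 3266, 4022, 4231,
          3779, 3378, 4388, 5327,
          2964, 2864, 2676, 3653,
          3453, 4156, 3668, 4218]

# Tally how often each last digit (0-9) appears.
counts = {d: 0 for d in range(10)}
for n in totals:
    counts[n % 10] += 1

# Zeros and fives are conspicuously absent from this list.
for digit in range(10):
    print(digit, counts[digit])
```

For a formal test you would compare the observed counts against the uniform expectation (e.g., with a chi-square goodness-of-fit test), but with twenty numbers the eyeball version already tells the story: zero occurrences each of 0 and 5, where roughly two of each are expected.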
Moral of the story? If you’re going to cheat, check your assumptions at the door.
By the way, I ask my students to write down a random number between one and 20. The most frequent number is 17, followed by 3, 13, 7, and 9. There is a strong bias towards odd numbers and whole numbers. No one has ever written down a number with a fraction.