In my critical thinking class, we begin by studying 17 cognitive biases that are drawn from Peter Facione’s excellent textbook, Think Critically. (I’ve also summarized these here, here, here, and here). I like the way Facione organizes and describes the major biases. His work is very teachable. And 17 is a manageable number of biases to teach and discuss.
While the 17 biases provide a good introduction to the topic, there are more biases that we need to be aware of. For instance, there’s the survivorship bias. Then there’s the swimmer’s body fallacy. And the IKEA effect. And the self-herding bias. And don’t forget the fallacy fallacy. How many biases are there in total? Well, it depends on who’s counting and how many hairs we’d like to split. One author says there are 25. Another suggests that there are 53. Whatever the precise number, there are enough cognitive biases that leading consulting firms like McKinsey now have “debiasing” practices to help their clients make better decisions.
The ultimate list of cognitive biases probably comes from Wikipedia, which identifies 104 biases. (Click here and here). Frankly, I think Wikipedia is splitting hairs. But I do like the way Wikipedia organizes the various biases into four major categories. The categorization helps us think about how biases arise and, therefore, how we might overcome them. The four categories are:
1) Biases that arise from too much information – examples include: We notice things already primed in memory. We notice (and remember) vivid or bizarre events. We notice (and attend to) details that confirm our beliefs.
2) Not enough meaning – examples include: We fill in blanks from stereotypes and prior experience. We conclude that things that we’re familiar with are better in some regard than things we’re not familiar with. We calculate risk based on what we remember (and we remember vivid or bizarre events).
3) How we remember – examples include: We reduce events (and memories of events) to their key elements. We edit memories after the fact. We conflate memories of events that happened at similar times but in different places, or in the same place but at different times, or with the same people, and so on.
4) The need to act fast – examples include: We favor simple options with more complete information over more complex options with less complete information. Inertia – if we’ve started something, we continue to pursue it rather than changing to a different option.
It’s hard to keep 17 things in mind, much less 104. But we can keep four things in mind. I find that these four categories are useful because, as I make decisions, I can ask myself simple questions, like: “Hmmm, am I suffering from too much information or not enough meaning?” I can remember these categories and carry them with me. The result is often a better decision.
What made Leonardo da Vinci a genius? I’ve just finished Walter Isaacson’s all-purpose biography of the Italian Renaissance man and four words come to mind: curiosity, observation, analogy, and humility. These four traits combined and recombined throughout Leonardo’s life to create advances in painting, engineering, architecture, anatomy, hydrology, optics, weaponry, and theater.
Leonardo maintained a childlike curiosity throughout his life. Every kid wants to know why the sky is blue. As we reach adulthood, most of us simply accept the fact that the sky is indeed blue. Not Leonardo. He studied the question for much of his adult life and ultimately developed an explanation based on clear evidence and careful reasoning.
Leonardo combined his curiosity with powers of observation that beggar belief. He noticed, for instance, that dragonflies have four wings, two forward and two aft. He wondered how they worked. By close observation, he concluded that when the two forward wings go up, the two aft wings go down. It seems like a simple observation but it must have taken hours of acute and disciplined observation.
Similarly, he took an interest in how birds flap their wings. Do their wings move faster as they flap upward or downward? Most of us would not conceive of such a question, much less focus our attention sufficiently to answer it. But Leonardo did, not once but several times.
His first observations – taken over many hours and days – suggested that birds flap their wings faster on the down stroke. At this point, I would have recorded my observation, concluded that all birds behaved similarly, and moved on. But Leonardo persisted. He observed different bird species and found that some flap faster on the up stroke than the down stroke. Wing behavior differs by species. It’s an interesting observation about birds. It’s a fascinating observation about Leonardo. Arthur Bloch once said that, “A conclusion is the place where you get tired of thinking.” Leonardo, apparently, never got tired of thinking. He recognized that all conclusions are tentative.
Leonardo didn’t just study insects and birds. He studied pretty much everything – from rivers to trees to pumps to blood circulation to optical effects to anatomy to geometry to Latin to … well, whatever caught his eye. His breadth of knowledge allowed him to draw analogies that others would have missed. Some of his analogies seem “obvious” to us today – like the analogy between water flow in rivers and blood flow in humans.
But how about the analogy between arteriosclerosis and oranges? As he dissected cadavers, Leonardo noticed that older people’s arteries were often less flexible and more clogged than those of younger people. He became the first person in history to accurately describe arteriosclerosis. He did so by comparing it to the rind of an orange that dries, stiffens, and thickens as it ages.
In today’s world, expertise is often associated with narrowness. Experts gain ever more knowledge about ever-narrower subjects. Leonardo was certainly an expert in several narrow domains. But he also saw across domains and could think outside the frame. Philip Tetlock, an authority on political judgment, suggests that this ability to think across boundaries is a key ingredient to insight and discovery. In many ways, Leonardo thought more like a fox and less like a hedgehog.
If I could think and observe like Leonardo, I might become a bit of a prima donna. But Leonardo never did. He remained humble and diffident in an ever-changing world. As Isaacson notes, he “… relished a world in flux.” Further, “When he came up with an idea, he devised an experiment to test it. And when his experience showed that a theory was flawed … he abandoned his theory and sought a new one.” By revising and updating his own conclusions, Leonardo helped establish a world in which observation and experience trump dogma and received wisdom.
Leonardo’s legacy is so broad and so varied that it’s difficult to encapsulate succinctly. But Isaacson includes a quote from the German philosopher Schopenhauer that helps us understand his impact: “Talent hits a target that no one else can hit. Genius hits a target that no one else can see.”
Aristotle defined rhetoric as the ability to “see the available means of persuasion”. In other words, what will it take to persuade the audience to agree with your proposal? It may be an eloquent speech. It may be a brief video. It may be a nice bouquet of flowers. We aim to understand the dynamics of the situation and select the best available means of gaining agreement. To find the best persuasive approach, Cicero said that we need to consider five principles: Invention, Arrangement, Style, Memory, and Delivery. (Click here for brief definitions of each).
Many books on rhetoric present Cicero’s five canons rather formally. They may seem forbidding and perhaps somewhat outdated. But the canons are actually quite useful in finding the best available means of persuasion. To understand the canons and use them effectively, it helps to think of the questions each canon raises.
Let’s begin with the first canon: invention. We seek to invent the most persuasive argument for a given audience. Here are the questions to consider.
Remember that you’re just trying to invent the argument at this point. There are many more questions to ask to round out a persuasive argument. If you can answer these questions, however, you can greatly enhance your chances of success.
Let’s say that Suellen and I have an argument and I notice that all the verbs are in the past tense. According to Aristotle, the verbs tell us that the argument is about blame. I may think it’s about who left the door unlocked or forgot to pay the mortgage. But it’s really about blame.
Let’s also say that I win that argument. (This is very hypothetical). I’ve successfully pushed the blame away from myself and on to her. It’s not easy to win an argument, so I do a little victory dance. Meanwhile, how does Suellen feel? Probably a mixture of emotions – irritation, annoyance, anger, … perhaps even a desire to get even. Suellen is the woman I love. Why on earth would I want her to feel like that? That’s the problem with arguing in the past tense. Even if you win, you lose.
Arguing in the past tense is generally known as forensic rhetoric. In many legal situations, we do want to lay blame. We want to establish guilt and make sure that the appropriate person is appropriately punished. Most of the testimony in a trial is in the past tense. Similarly, characters in crime dramas speak almost exclusively in the past tense. The goal is to lay blame and Aristotle and others give us rules for how to argue the point.
Outside of the courtroom, however, arguing in the past tense is essentially useless. We can’t do anything about the past. We can’t change it. We can’t enhance it. We can lay blame but, even then, we will argue endlessly about whether we got it right or not. Did we blame the right person? If so, did we blame them for the right reasons? Did we learn the right lessons? Did history teach us anything? Or did it teach us nothing?
The next time you’re in an argument, notice the verbs. If they’re in the past tense, you’re simply trying to blame the other person. Does it do any good to “win” such an argument? Nope. By “winning”, you just give the other side motivation to come back stronger next time. This is how feuds get started. The Stoic philosopher, Epictetus, had it right: “Small-minded people blame others. Average people blame themselves. The wise see all blame as foolishness.”
Police make about 10 million arrests every year in the United States. In many cases, a judge must then make a jail or bail decision. Should the person be jailed until the trial or can he or she be released on bail? The judge considers several factors and predicts how the person will behave. There are several relevant outcomes if the person is released:
A person in Category 1 should be released. People in Categories 2 and 3 should be jailed. Two possible error types exist:
Type 1 – a person who should be released is jailed.
Type 2 – a person who should be jailed is released.
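The two error types correspond to the off-diagonal cells of a standard confusion matrix. As a sketch (the labels and counts below are invented for illustration, not taken from the study), the two rates could be computed like this:

```python
# Hypothetical illustration of the two error types in jail/bail decisions.
# Label 1 means "should be jailed"; label 0 means "should be released".
# All names and numbers here are invented for illustration.

def error_rates(true_labels, decisions):
    """Return (type1_rate, type2_rate).

    Type 1: a person who should be released (0) is jailed (1).
    Type 2: a person who should be jailed (1) is released (0).
    """
    type1 = sum(1 for t, d in zip(true_labels, decisions) if t == 0 and d == 1)
    type2 = sum(1 for t, d in zip(true_labels, decisions) if t == 1 and d == 0)
    n_release_worthy = sum(1 for t in true_labels if t == 0)
    n_jail_worthy = sum(1 for t in true_labels if t == 1)
    return type1 / n_release_worthy, type2 / n_jail_worthy

# Four defendants: two should be released, two jailed;
# the decisions below contain one error of each type.
truth = [0, 0, 1, 1]
decisions = [0, 1, 1, 0]
print(error_rates(truth, decisions))  # (0.5, 0.5)
```

Framed this way, the question the researchers ask is whether an algorithm can push one (or both) of these rates down relative to human judges.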
Jail, bail, and criminal records are public information, so researchers can aggregate them at massive scale. Jon Kleinberg, a professor of computer science at Cornell, and his colleagues did exactly that and produced a National Bureau of Economic Research Working Paper earlier this year.
Kleinberg and his colleagues asked an intriguing question: Could a machine-learning algorithm, using the same information available to judges, reach different decisions than the human judges and reduce either Type 1 or Type 2 errors or both?
The simple answer: yes, a machine can do better.
Kleinberg and his colleagues first studied 758,027 defendants arrested in New York City between 2008 and 2013. The researchers developed an algorithm and used it to decide which defendants should be jailed and which should be bailed. There are several different questions here:
The answer to the first question is very clear: the algorithm produced decisions that varied in important ways from those that the judges actually made.
The algorithm also produced significant societal benefits. If we wanted to hold the crime rate the same, we need only have jailed 48.2% of the people who were actually jailed. In other words, 51.8% of those jailed could have been released without committing additional crimes. On the other hand, if we kept the number of people in jail the same – but changed the mix of who was jailed and who was bailed – the algorithm could reduce the number of crimes committed by those on bail by 75.8%.
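The trade-off the researchers describe (same crime rate with fewer people jailed, or same jail rate with less crime) arises because a risk-scoring algorithm lets you rank defendants and choose any cutoff along the curve. A toy sketch, with all risk scores and outcomes invented for illustration:

```python
# Toy sketch of the release-rate / crime-rate trade-off.
# Each defendant has a predicted risk score; jailing the highest-scored
# defendants first lets a policymaker pick any point on the curve.
# Scores and outcomes below are invented, not from the paper.

defendants = [
    # (risk_score, would_commit_crime_if_released)
    (0.9, True), (0.8, True), (0.6, False),
    (0.4, True), (0.3, False), (0.1, False),
]

def outcomes(jail_fraction):
    """Jail the highest-risk fraction; return (n_jailed, crimes_on_bail)."""
    ranked = sorted(defendants, key=lambda d: d[0], reverse=True)
    n_jail = round(jail_fraction * len(ranked))
    released = ranked[n_jail:]
    crimes = sum(1 for _, crime in released if crime)
    return n_jail, crimes

for frac in (0.0, 0.33, 0.5):
    print(frac, outcomes(frac))
```

In this toy data, jailing no one yields three crimes, while jailing the top half yields one: the same kind of curve along which the study compares the algorithm's cutoffs to the judges' actual decisions.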
The researchers replicated the study using nationwide data on 151,461 felons arrested between 1990 and 2009 in 40 urban counties scattered around the country. For this dataset, “… the algorithm could reduce crime by 18.8% holding the release rate constant, or holding the crime rate constant, the algorithm could jail 24.5% fewer people.”
Given the variables examined, the algorithm appears to make better decisions, with better societal outcomes. But what if the judges are acting on other variables as well? What if, for instance, the judges are considering racial information and aiming to reduce racial inequality? The algorithm would not be as attractive if it reduced crime but also exacerbated racial inequality. The researchers studied this possibility and found that the algorithm actually produces better racial equity. Most observers would consider this an additional societal benefit.
Similarly, the judges may have aimed to reduce specific types of crime – like murder or rape – while de-emphasizing less violent crime. Perhaps the algorithm reduces overall crime but increases violent crime. The researchers probed this question and, again, the results were negative. The algorithm did a better job of reducing all crimes, including very violent crimes.
What’s it all mean? For very structured predictions with clearly defined outcomes, an algorithm produced by machine learning can produce decisions that reduce both Type 1 and Type 2 errors as compared to decisions made by human judges.
Does this mean that machine algorithms are better than human judges? At this point, all we can say is that algorithms produce better results only when judges make predictions in very bounded circumstances. As the researchers point out, most decisions that judges make do not fit this description. For instance, judges regularly make sentencing decisions, which are far less clear-cut than bail decisions. To date, machine-learning algorithms are not sufficient to improve on these kinds of decisions.
(This article is based on NBER Working Paper 23180, “Human Decisions and Machine Predictions”, published in February 2017. The working paper is available here and here. It is copyrighted by its authors, Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. The paper was also published, in somewhat modified form, as “Human Decisions and Machine Predictions” in The Quarterly Journal of Economics on 26 August 2017. The paper is behind a paywall but the abstract is available here).