Strategy. Innovation. Brand.


Pre-Suasion: Influence Before Influence

Trust me.

In Tin Men, Richard Dreyfuss and Danny DeVito play two salesmen locked in bitter competition as they sell aluminum siding to homeowners in Baltimore. The movie is somewhat forgettable, but it offers a master class in sales techniques.

In one scene, Dreyfuss knocks on a prospective customer’s door while also dropping a five-dollar bill on the doormat. When the customer opens the door, Dreyfuss picks up the bill and says, “Wow. I just found this on your doormat. It’s not mine. It must be yours.” Somewhat confused, the homeowner accepts the bill and invites Dreyfuss inside, where he makes a big sale.

Robert Cialdini would call Dreyfuss’s maneuver a good example of pre-suasion. Before Dreyfuss even introduces himself, he has already done something to show that he’s a stand-up guy. He has earned some trust.

Cialdini himself gained our trust in his first book, Influence, which details six “weapons of influence”: reciprocity, consistency, social proof, liking, authority, and scarcity. In his new book, Pre-Suasion, he invites us to look at what happens before we deploy our weapons.

Pre-suasion is not a new idea. It’s at least as old as the traditional advice: do a favor before asking for a favor. Like Dreyfuss, however, Cialdini seems like a stand-up guy, so we go along for the read. That’s a good decision, because the book is chock-full of practical advice on how to set the stage for persuasion.

A key idea is the “attention chute”. When we focus our attention on something, we don’t see anything else. The opportunity cost of paying attention is inattentional blindness. Thus, we don’t consider other alternatives. If our attention is focused on globalization, we may not notice how many jobs are eliminated by automation.

As Cialdini points out, the attention chute makes us suckers for palm readers. A palm reader says, “Your palm suggests that you’re a very stubborn person. Is that true?” We focus on the idea of stubbornness and search our memory banks for examples. We don’t think about the opposite of stubbornness and we don’t search for examples of it. It’s almost certain that we can find some examples of stubbornness in our memories. How could the palm reader have possibly known?

The attention chute is also known as the focusing illusion. We believe that what we focus on is important, but it may just be an illusion. If we’re focused on it, it must be important, right? It’s a cognitive bias that a palm reader or aluminum siding salesman can easily manipulate.

What’s the best defense? It’s a good idea to keep Daniel Kahneman’s advice in mind: “Nothing in life is as important as you think it is, while you are thinking about it.” If the media is filled with horror stories about the Ebola virus, you’ll probably think it’s important. But really, it’s not nearly as important as you think it is while you’re thinking about it.

Cialdini takes the attention chute one step further with the idea that “what’s focal is causal.” We assume that what we focus on is not just important; it’s also the cause of whatever outcome we’re trying to explain. As Cialdini notes, economists think that the exchange of money is the cause of many transactions. But maybe not. Maybe there’s another reason for the transaction. Maybe the money is just a side benefit, not the motivating cause.

The focal-is-causal idea has some strange implications. For example, the first lots identified in the famous Tylenol cyanide attacks were numbers 2880 and 1910. The media broadcast the numbers far and wide, and many of us used them to play the lottery. They must be important, right?

Focal-is-causal can also lead to false confessions. The police focus on a person of interest and convince themselves that she caused the crime. They then use all the tricks in the book to convince her of the same thing.

Cialdini is a good writer and has plenty of interesting stories to tell. If you like Daniel Kahneman or Dan Ariely or Jordan Ellenberg or the brothers Heath, you’ll like his book as well. And who knows? It may even help you beat the rap when the police are trying to get a confession out of you.

Reminiscence Bumps and Helicopter Parents

It’s the reminiscence bump.

If you ask someone over the age of thirty to tell you their life story, they’ll over-emphasize some portions and under-emphasize others. Most likely they’ll recall incidents in their late teens and early twenties much more vividly than other periods of their lives. What happens in our thirties stays in our thirties. What happens in our formative years stays with us forever.

It’s known as the reminiscence bump, and social scientists have been researching it since the early 1980s. Activities and events that occur in late adolescence and early adulthood leave an indelible mark on our memories. As Katy Waldman puts it, “there is something deeply, weirdly meaningful about this period.”

Nobody knows quite why the reminiscence bump occurs. Dan McAdams, writing in the Review of General Psychology, associates it with the formation of identity. As we enter adolescence, many different identities are available to us. We could become nerds. Or athletes. Or scholars. Or criminals (especially those with low heart rates). As McAdams points out, William James called this the “one-in-many-selves paradox”.

Yet we generally emerge from adolescence with one more-or-less integrated identity. We want that identity to be coherent. Indeed, there are multiple types of coherence, including biographical, causal, thematic, and temporal coherence. McAdams surmises that integrating multiple potential stories into one coherent identity is a formative life experience that creates long-lasting memories.

The articles I’ve read focus on what causes the reminiscence bump. I’m also interested in what the reminiscence bump causes. We believe that the bump is universal; we all have it. Does the fact that we remember our formative years better than other years affect our behavior in later life?

I’ve written previously about the availability bias. As Daniel Kahneman has pointed out, humans are not naturally good at statistics. We have difficulty answering questions dealing with probability. So we substitute a simpler question and answer it.

For instance, let’s say someone asks us, “How likely is it that someone will burglarize your house while you’re away for the weekend?” We have no idea what the probabilities are or even how to calculate them. So we answer a simpler question: “How easy is it for me to remember stories of friends’ houses being burglarized?” If it’s easy to remember such stories, we estimate that the probability is high. If it’s difficult, we estimate that the probability is low. (This is sometimes known as the vividness bias – vivid events are easy to recall from memory).

What events are easy for us to recall from our life histories? Compared to all other events, the reminiscence bump suggests that events from adolescence and early adulthood are easiest to recall. The availability bias suggests that we will overestimate the probability that similar events will happen in the future. We can recall them easily. Therefore, we assume they’re highly probable to recur.

Now, consider the adolescent brain. According to the National Institute of Mental Health, it’s “…still under construction.” We tend to engage in riskier behaviors in our teenage years because our executive function is not fully developed. As most of us can well remember, we do stupid things.

What do we disproportionately remember about our lives? The risky and thoughtless behaviors of our formative years. If the availability bias is correct, we will overestimate the probability that these same behaviors will occur again, perhaps in our children. Could this be the root cause of the helicopter parenting that we seem so worried about today? It’s a complicated question but it’s certainly worth a good research project.

I’m A Better Person In Spanish

Me llamo Travieso Blanco.

I speak Spanish reasonably well but I find it very tiring … which suggests that I probably think more clearly and ethically in Spanish than in English.

Like so many things, it’s all related to our two different modes of thinking: System 1 and System 2. System 1 is fast and efficient and operates below the level of consciousness. It makes the great majority of our decisions, typically without any input from our conscious selves. We literally make decisions without knowing that we’re making decisions.

System 2 is all about conscious thought. We bring information into System 2, think it through, and make reasoned decisions. System 2 uses a lot of calories; it’s hard work. As Daniel Kahneman says, “Thinking is to humans as swimming is to cats; they can do it but they’d prefer not to.”

English, of course, is my native language. (American English, that is). It’s second nature to me. It’s easy and fluid. I can think in English without thinking about it. In other words, English is the language of my System 1. At this point in my life, it’s the only language in my System 1 and will probably remain so.

To speak Spanish, on the other hand, I have to invoke System 2. I have to think about my word choice, pronunciation, phrasing, and so on. It’s hard work and wears me out. I can do it but I would have to live in Spain for a while for it to become easy and fluid. (That’s not such a bad idea, is it?)

You may remember that System 1 makes decisions using heuristics or simple rules of thumb. System 1 simplifies everything and makes snap judgments. Most of the time, those judgments are pretty good but, when they’re wrong, they’re wrong in consistent ways. System 1, in other words, is the source of biases that we all have.

To overcome these biases, we have to bring the decision into System 2 and consider it rationally. That takes time, effort, and energy and, oftentimes, we don’t do it. It’s easy to conclude that someone is a jerk. It’s more difficult to invoke System 2 to imagine what that person’s life is like.

So how does language affect all this? I can only speak Spanish in my rational, logical, conscious System 2. When I’m thinking in Spanish, all my rational neurons are firing. I tend to think more carefully, more thoughtfully, and more ethically. It’s tiring.

When I think in English, on the other hand, I could invoke my System 2 but I certainly don’t have to. I can easily use heuristics in English but not in Spanish. I can jump to conclusions in English but not in Spanish.

The seminal article on this topic was published in 2012 by three professors from the University of Chicago. They write, “Would you make the same decisions in a foreign language as you would in your native tongue? It may be intuitive that people would make the same choices regardless of the language they are using…. We discovered, however, that the opposite is true: Using a foreign language reduces decision-making biases.”

So, it’s true: I’m a better person in Spanish.

Three Decision Philosophies

I’ll use the rational, logical approach for this one.

In my critical thinking classes, students get a good dose of heuristics and biases and how they affect the quality of our decisions. Daniel Kahneman and Amos Tversky popularized the notion that we should look at how people actually make decisions as opposed to how they should make decisions if they were perfectly rational.

Most of our decision-making heuristics (or rules of thumb) work most of the time but when they go wrong, they do so in predictable and consistent ways. For instance, we’re not naturally good at judging risk. We tend to overestimate the risk of vividly scary events and underestimate the risk of humdrum, everyday problems. If we’re aware of these biases, we can account for them in our thinking and, perhaps, correct them.

Finding that our economic decisions are often irrational rather than rational has created a whole new field, generally known as behavioral economics. The field ties together concepts as diverse as the availability bias, the endowment effect, the confirmation bias, overconfidence, and hedonic adaptation to explain how people actually make decisions. Though it’s called economics, the basis is psychology.

So does this mean that traditional, rational, statistical, academic decision-making is dead? Well, not so fast. According to Justin Fox’s article in a recent issue of Harvard Business Review, there are at least three philosophies of decision-making and each has its place.

Fox acknowledges that, “The Kahneman-Tversky heuristics-and-biases approach has the upper hand right now, both in academia and in the public mind.” But that doesn’t mean that it’s the only game in town.

The traditional, rational, tree-structured logic of formal decision analysis hasn’t gone away. Fox argues that the classic approach, pioneered by Ronald Howard, Howard Raiffa, and Ward Edwards, is best suited to making “Big decisions with long investment horizons and reliable data [as in] oil, gas, and pharma.” Fox notes that Chevron is a major practitioner of the art and that Nate Silver, famous for accurately predicting the elections of 2012, uses a Bayesian variant of the basic approach.
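For readers curious what a “Bayesian variant” amounts to in practice, the core move is simply updating a prior belief as new evidence arrives, using Bayes’ rule. Here’s a minimal sketch; the election scenario and the numbers are made up purely for illustration:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | evidence) given the prior P(H) and the
    likelihood of the evidence under H and under not-H."""
    numerator = prior * p_e_given_h
    denominator = numerator + (1 - prior) * p_e_given_not_h
    return numerator / denominator

# Start with a 50% prior that a candidate wins. A favorable poll
# that is twice as likely if the candidate is truly ahead
# (0.6 vs. 0.3) raises the belief to about 67%.
belief = bayes_update(0.5, 0.6, 0.3)
```

Each new poll just becomes the next update: yesterday’s posterior serves as today’s prior, which is why the approach rewards the “reliable data” Fox mentions.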

And what about non-rational heuristics that actually do work well? Let’s say, for instance, that you want to rationally allocate your retirement savings across N different investment options. Investing evenly in each of the N funds is typically just as good as any other approach. Known as the 1/N approach, it’s a simple heuristic that leads to good results. Similarly, in choosing between two options, selecting the one you’re more familiar with usually creates results that are no worse than any other approach – and does so more quickly and at much lower cost.
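The 1/N heuristic is simple enough to sketch in a few lines of code; the fund names and dollar amount below are hypothetical, purely for illustration:

```python
# A minimal sketch of the 1/N heuristic: split a contribution
# evenly across however many funds are available.
def one_over_n(total, funds):
    """Allocate `total` evenly across the funds in `funds`."""
    share = total / len(funds)
    return {fund: share for fund in funds}

# Hypothetical example: $12,000 across three funds.
allocation = one_over_n(12000, ["stocks", "bonds", "international"])
# Each fund receives 12000 / 3 = 4000.
```

Part of the heuristic’s appeal is that there are no parameters to estimate, so there is nothing to estimate badly.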

Fox calls this the “effective heuristics” approach or, more simply, the gut-feel approach. Fox suggests that this is most effective, “In predictable situations with opportunities for learning, [such as] firefighting, flying, and sports.” When you have plenty of practice in a predictable situation, your intuition can serve you well. In fact, I’d suggest that the famous (or infamous) interception at the goal line in this year’s Super Bowl resulted from exactly this kind of thinking.

And where does the heuristics-and-biases model fit best? According to Fox, it helps us to “Design better institutions, warn ourselves away from dumb mistakes, and better understand the priorities of others.”

So, we have three philosophies of decision-making and each has its place in the sun. I like the heuristics-and-biases approach because I like to understand how people actually behave. Having read Fox, though, I’ll be sure to add more on the other two philosophies in upcoming classes.

Do Smartphones Make Us Smarter, Dumber, Or Happier?

So which is it?

Smartphones:

  1. Make you lazy and dumb.
  2. Make the world more intelligent by adding massive amounts of new processing power.
  3. Both of the above.
  4. None of the above.

Smartphones have an incredible impact on how we live and communicate. They also illustrate a popular technology maxim: If it can be done, it will be done. In other words, they’re not going away. They’ll grow smaller and stronger and will burrow into our lives in surprising ways. The basic question: are they making humans better or worse?

Smarter or dumber?

Researchers at the University of Waterloo in Canada recently published a paper suggesting that smartphones “supplant thinking”. The researchers suggest that humans are cognitive misers — we conserve our cognitive resources whenever possible. We let other people – or devices – do our thinking for us. We make maximum use of our extended mind. Why use up your brainpower when your extended mind – beyond your brain – can do it for you?

Though the researchers don’t use Daniel Kahneman’s terminology, there is an interesting correlation to System 1 and System 2. They write that, “…those who think more intuitively and less analytically [i.e. System 1] when given reasoning problems were more likely to rely on their Smartphones (i.e., extended mind) ….” In other words, System 1 thinkers are more likely to offload.

So, we use our phones to offload some of our processing. Is that so bad? We’ve always offloaded work to machines. Thinking is a form of work. Why not offload it and (potentially) reduce our cognitive load and increase our cognitive reserve? We could produce more interesting thoughts if we weren’t tied down with the scut work, couldn’t we?

Clay Shirky was writing in a different context but that’s the essence of his concept of cognitive surplus. Shirky argues that people are increasingly using their free time to produce ideas rather than simply to consume ideas. We’re watching TV less and simultaneously producing more content on the web. Indeed, this website is an example of Shirky’s concept. I produce the website in my spare time. I have more spare time because I’ve offloaded some of my thinking to my extended mind.

Shirky assumes that creating is better than consuming. That’s certainly a culturally nuanced assumption, but it’s one that I happen to agree with. If it’s true, we should work to increase the intelligence of the devices that surround us. We can offload more menial tasks and think more creatively and collaboratively. That will help us invent more intelligent devices and expand our extended mind. It’s a virtuous circle.

But will we really think more effectively by offloading work to our extended mind? Or, will we forevermore watch reruns of The Simpsons?

I’m not sure which way we’ll go, but here’s how I’m using my smartphone to improve my life. Like many people, I consult my phone almost compulsively. I’ve taught myself to smile for at least ten seconds each time I do. My phone reminds me to smile. I’m not sure if that’s leading me to higher thinking or not. But it certainly brightens my mood.
