How To Save Democracy

Back off! I know rhetoric.

Most historians would agree that the arts and sciences of persuasion – also known as rhetoric – originated with the Greeks approximately 2,500 years ago. Why there? Why not the Egyptians or the Phoenicians or the Chinese? And why then? What was going on in Greece that necessitated new rules for communication?

The simple answer is a single word: democracy. The Greeks invented democracy. For the first time in the history of the world, people needed to persuade each other without force or violence. So the Greeks had to invent rhetoric.

Prior to democracy, people didn’t need to disagree in any organized way. We simply followed the leader. We agreed with the monarch. We converted to the emperor’s religion. We believed in the gods that the priests proclaimed. If we disagreed, we were ignored or banished or killed. Simple enough.

With the advent of democracy, public life grew messy. We could no longer say, “You will believe this because the emperor believes it.” Rather, we had to persuade. The basic argument was simple: “You should believe this because it provides advantages.” We needed rules and pointers for making such arguments successfully. Socrates and Aristotle (and many others) rose to the challenge and invented rhetoric.

Democracy, then, is about disagreement. We recognize that we will disagree. Indeed, we recognize that we should disagree. The trick is to disagree without anger or violence. We seek to persuade, not to subdue. In fact, here’s a simple test of how democratic a society is:

What proportion of the population agrees with the following statement?

“Of course, we’re going to disagree. But we’ve agreed to resolve our disagreements without violence.”

It seems like a simple test. But we overlook it at our peril. Societies that can’t pass this test (and many can’t) are forever doomed to civil strife, violence, disruption, and dysfunction.

The chief function of rhetoric is to teach us to argue without anger. The fundamental questions of rhetoric pervade both our public and private lives. How can I persuade someone to see a different perspective? How can I persuade someone to agree with me? How can we forge a common vision?

Up through the 19th century, educated people were well versed in rhetoric. All institutions of higher education taught the trivium, which consisted of logic, grammar, and rhetoric. Having mastered the trivium, students could progress to the quadrivium – arithmetic, geometry, music, and astronomy. The trivium provided the platform upon which everything else rested.

In the 20th century, we saw the rise of mass communications, government-sponsored propaganda, and widespread public relations campaigns; more recently, social media has joined them. Ironically, we also decided that we no longer needed to teach rhetoric. We considered it manipulative. To insult an idea, we called it “empty rhetoric”.

But rhetoric also helps us defend ourselves against mass manipulation, which flourished in the 20th century and continues to flourish today. (Indeed, in the 21st century, we seem intent on honing it to an even finer point.) We sacrificed our defenses at the very moment that manipulation surged forward. Having no defenses, we became angrier and less tolerant.

What to do? The first step is to revive the arts of persuasion and critical thinking. Essentially, we need to revive the trivium. By doing so, we’ll be better able to argue without anger and to withstand the effects of mass manipulation. Reviving rhetoric won’t solve the world’s problems. But it will give us a tool to resolve problems – without violence and without anger.

Why Do Smarter People Live Longer?

What’s related to what?

A study published last week in the British Medical Journal states simply that “Childhood intelligence was inversely associated with all major causes of death.”

The study focused on some 65,000 men and women who took the Scottish Mental Survey in 1947 at age 11. Those students are now 79 years old, and many of them have passed away. By and large, those who scored lower on the test in 1947 were more likely to have died – from all causes – than those who registered higher scores.

This is certainly not the first study to link intelligence with longevity. (Click here, here, and here, for instance). But it raises again a fundamental question: why would smarter people live longer? There seem to be at least two competing hypotheses:

Hypothesis A: More intelligent people make better decisions about their health care, diet, exercise, etc. and — as a result — live longer.

Hypothesis B: Whatever it is that makes people more intelligent also makes them healthier. Researchers, led by Ian Deary, call this hypothesis system integrity. Essentially, the theory suggests that a healthy system generates numerous positive outcomes, including greater intelligence and longer life. The theory derives from a field of study known as cognitive epidemiology, which studies the relationships between intelligence and health.

Hypothesis A focuses on judgment and decision making as causal factors. There’s an intermediate step between intelligence and longevity. Hypothesis B is more direct – the same factor causes both intelligence and longevity. There is no need for an intermediate cause.

The debate is oddly similar to the association between attractiveness and success. Sociologists have long noted that more attractive people also tend to be more successful. Researchers generally assumed that the halo effect caused the association. People judged attractive people to be more capable in other domains and thus provided them more opportunities to succeed. This is similar to our Hypothesis A – the result depends on (other people’s) judgment and there is an intermediate step between cause and effect.

Yet a recent study of Tour de France riders tested the notion that attractiveness and success might have a common cause. Researchers rated the attractiveness of the riders and compared the rankings to race results. They found that more attractive riders finished in higher positions in the race. Clearly, success in the Tour de France does not depend on the halo effect so, perhaps, that which causes the riders to be attractive may also cause them to be better racers.

And what about the relationship between intelligence and longevity? Could the two variables have a single, common cause? Perhaps the best study so far was published in the International Journal of Epidemiology last year. The researchers looked at various twin registries and compared IQ tests with mortality. The study found a small (but statistically significant) relationship between IQ and longevity. In other words, the smarter twin lived longer. Though the effects are small, the researchers conclude, “The association between intelligence and lifespan is mostly genetic.”

Are they right? I’m not (yet) convinced. Though statistically significant, the relationship is very small, with r = 0.12. As noted elsewhere, the variance explained is the square of r. So in this study, IQ explains only 1.44% (0.12 x 0.12 x 100) of the variance in longevity. That seems like weak evidence for the conclusion that the relationship is “mostly genetic”.
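
To make the arithmetic concrete, here’s a minimal sketch in Python. The r = 0.12 figure comes from the twin study cited above; everything else is just the standard variance-explained formula.

```python
# Variance explained by a correlation is the square of the correlation coefficient (r**2).
r = 0.12  # correlation between IQ and lifespan reported in the twin study

variance_explained = r ** 2            # 0.0144
percent_of_variance = variance_explained * 100

print(f"r = {r}")
print(f"variance explained = {percent_of_variance:.2f}%")  # prints 1.44%
```

Put another way, roughly 98.5% of the variance in longevity is left unexplained, which is why the “mostly genetic” conclusion feels premature.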

Still, we have some interesting research paths to follow up on. If the theory of system integrity is correct, it could predict a whole host of relationships, not just IQ and longevity. Attractiveness could also be a useful variable to study. Perhaps there’s a social aspect to it as well. Perhaps people who are healthy and intelligent also have a larger social circle (see also Dunbar’s number). Perhaps they’re more altruistic. Perhaps they are more symmetric. Ultimately, we may find that a whole range of variables depend partially – or perhaps mainly – on genetics.

Tetris On The Brain

Brain thickener

Remember Tetris? Originally released in the Soviet Union in 1984, it became the top-selling video game of all time, with more than 495 million copies in circulation. It’s a simple game – different-shaped tiles fall from the top of the screen and you arrange them in patterns at the bottom of the screen.

It seems like a simple-minded time killer. It’s not rocket science. But it turns out to have some interesting effects on our brains. Here are two recent studies.

Tetris and Intrusive Memories

Let’s say you’re in a car accident. The trauma, whether physical or psychological, may result in intrusive memories, which are hallmarks of post-traumatic stress disorder (PTSD).

When an intrusive memory occurs, the survivor essentially relives the traumatic event. It seems plausible that reducing intrusive memories would help survivors manage stress and maintain mental health. So, how might one reduce intrusive memories or prevent their formation in the first place? How about Tetris?

This was the hypothesis of a study recently published in Molecular Psychiatry. Researchers recruited 71 subjects in an emergency room at a hospital in Oxford, England. The subjects had recently (less than six hours previously) experienced an automobile accident and were randomly assigned to one of two groups:

  • Experimental condition: play Tetris for 20 minutes in the emergency room
  • Control condition: describe their activities in the emergency room in a written activity log.

Researchers contacted the subjects one week and one month after the accident. The result? Subjects who played Tetris “recorded significantly fewer intrusive memories” and “reported less distress from intrusion symptoms.”

Tetris and Your Cortex

What about people who aren’t involved in a traumatic event? Does Tetris have an impact on them? This was the question asked several years ago in a study conducted by researchers at the University of New Mexico.

The researchers recruited 26 girls, aged 12 to 15, and randomly assigned them to the experimental group or the control group. The researchers taught the girls in the experimental group to play Tetris and coached them to play regularly. The girls in the control group were coached not to play Tetris. The researchers followed the two groups for three months. During that time, the girls in the Tetris group played the game approximately 90 minutes per week.

At the end of three months, the Tetris-playing girls had a “significantly thicker cortex” than the non-Tetris-playing girls. The cortex is gray matter and is generally associated with higher-level brain functions such as memory, attention, and planning.

Does this mean that playing Tetris will make you smarter or your brain more efficient? Probably not. Playing Tetris probably only makes you better at playing Tetris. But it’s more evidence that the brain is plastic; you can change it by how you behave. That’s not surprising in a group of youngsters whose brains are not yet mature. It would be telling to replicate the experiment with a group of oldsters to see just how plastic their brains are. My hypothesis: there’s still a lot of plasticity left.

So Tetris can teach us about brain plasticity and help suppress intrusive memories. Not bad for a free video game. I wonder what else it can do.

Managing Agreement: The Abilene Paradox

I want to be a team player, but….

I used to think it was difficult to manage conflict. Now I wonder if it isn’t more difficult to manage agreement.

A conflicted organization is fairly easy to analyze. The signs are abundant. You can quickly identify the conflicting groups as well as the members of each. You can identify grievances simply by talking with people. You can figure out who is “us” and who is “them”. Solving the problem may prove challenging but, at the very least, you know two things: 1) there is a problem; 2) its general contours are easy to see.

When an organization is in agreement, on the other hand, you may not even know that a problem exists. Everything floats along smoothly. People may not quiver with enthusiasm but no one is throwing furniture or shouting obscenities. Employees work and things get done.

The problem with an organization in agreement is that many participants actually disagree. But the disagreement doesn’t bubble up and out. There are at least two scenarios in which this happens:

  1. The Abilene Paradox – in the original telling, four members of a family in Coleman, Texas drove 53 miles to Abilene in a car without air conditioning in 104-degree heat to have dinner at a crummy diner. After driving 53 miles back, they ‘fessed up: not one of them had wanted to go. Each person thought the others wanted to go. They agreed to be agreeable. (A variant of this is known as the risky shift).

Similar paradoxes arise in organizations all the time. Each employee wants to be seen as a team player. They may have reservations about a decision but — because everyone else agrees or seems to agree — they keep quiet. Perhaps nobody agrees with a given project, but each believes that everyone else does. Perhaps nobody wants to work on Project X. Nevertheless, Project X persists. Unlike in a conflicted organization, nobody realizes that a problem exists.

  2. Fear – in organizations where failure is not an option, employees work hard to salvage success even from doomed projects. Admitting that a project has failed invites punishment. Employees happily throw good money after bad, hoping to snatch victory from the jaws of defeat. Employees agree that failure must be delayed or hidden.

The second scenario is perhaps more dangerous but less common. A fear-based culture – if left untreated – will eventually corrupt the entire organization. Employees grow afraid of telling the truth. The remedy is easy to discern but hard to execute: the organization needs to replace executive management and create a new culture.

The Abilene paradox is perhaps less dangerous but far more common. Any organization that strives to “play as a team” or “hire team players” is at risk. Employees learn to go along with the team, even if they believe the team is wrong.

What can be done to overcome the Abilene paradox in an organization? Rosabeth Moss Kanter points out that there are two parts to the problem. First, employees make inaccurate assumptions about what others believe. Second, even though they disagree, they don’t feel comfortable speaking up. A good manager can work on both sides of the problem. Kanter suggests the following:

  • Debates – include an active debate in all decision processes. Choose sides and formally air out the pros and cons of a situation. (I’ve suggested something similar in the decision by trial process).
  • Assign devil’s advocates and give them the time and resources to develop a real position.
  • Encourage organizational graffiti – I think of this as the electronic equivalent of Speakers’ Corner in Hyde Park – a place where people can get things off their chests.
  • Make confronters into heroes — even if you disagree with the message, reward the process.
  • Develop a culture of pride – build collective self-esteem, not just individual self-esteem. We’re proud of what we have, including the right (or even the obligation) to disagree.

The activities needed to ward off the Abilene paradox are not draconian. Indeed, they’re fairly easy to implement. But you can only implement them if you realize that a problem exists. That’s the hard part.

Debiasing and Corporate Performance

Loss aversion bias? Or maybe I’m just satisficing?

Over the past several years, I’ve written several articles about cognitive biases. I hope I have alerted my readers to the causes and consequences of these biases. My general approach is simple: forewarned is forearmed.

I didn’t realize that I was participating in a more general trend known as debiasing. As Wikipedia notes, “Debiasing is the reduction of bias, particularly with respect to judgment and decision making.” The basic idea is that we can change things to help people and organizations make better decisions.

What can we change? According to A User’s Guide To Debiasing, we can do two things:

  1. Modify the decision maker – we do this by “providing some combination of knowledge and tools to help [people] overcome their limitations and dispositions.”
  2. Modify the environment – we do this by “alter[ing] the setting where judgments are made in a way that … encourages better strategies.”

I’ve been using a Type 1 approach. I’ve aimed at modifying the decision maker by providing information about the source of biases and describing how they skew our perception of reality. We often aren’t aware of the nature of our own perception and judgment. I liken my approach to making the fish aware of the water they’re swimming in. (To review some of my articles in this domain, click here, here, here, and here).

What does a Type 2 approach look like? How do we modify the environment? The general domain is called choice architecture. The idea is that we change the process by which the decision is made. The book Nudge by Richard Thaler and Cass Sunstein is often cited as an exemplar of this type of work. (My article on using a courtroom process to make corporate decisions fits in the same vein).

How important is debiasing in the corporate world? In 2013, McKinsey & Company surveyed 770 corporate board members to determine the characteristics of a high-performing board. The “biggest aspiration” of high-impact boards was “reducing decision biases”. As McKinsey notes, “At the highest level, boards look inward and aspire to more ‘meta’ practices—deliberating about their own processes, for example—to remove biases from decisions.”

More recently, McKinsey has written about the business opportunity in debiasing. They note, for instance, that businesses are least likely to question their core processes. Indeed, they may not even recognize that they are making decisions. In my terminology, they’re not aware of the water they’re swimming in. As a result, McKinsey concludes “…most of the potential bottom-line impact from debiasing remains unaddressed.”

What to do? Being a teacher, I would naturally recommend training and education programs as a first step. McKinsey agrees … but only up to a point. McKinsey notes that many decision biases are so deeply embedded that managers don’t recognize them. They swim blithely along without recognizing how the water shapes and distorts their perception. Or, perhaps more frequently, they conclude, “I’m OK. You’re Biased.”

Precisely because such biases frequently operate in System 1 as opposed to System 2, McKinsey suggests a program consisting of both training and structural changes. In other words, we need to modify both the decision maker and the decision environment. I’ll write more about structural changes in the coming weeks. In the meantime, if you’d like a training program, give me a call.
