How To Save Democracy

Back off! I know rhetoric.

Most historians would agree that the arts and sciences of persuasion – also known as rhetoric – originated with the Greeks approximately 2,500 years ago. Why there? Why not the Egyptians or the Phoenicians or the Chinese? And why then? What was going on in Greece that necessitated new rules for communication?

The simple answer is a single word: democracy. The Greeks invented democracy. For the first time in the history of the world, people needed to persuade each other without force or violence. So the Greeks had to invent rhetoric.

Prior to democracy, people didn’t need to disagree in any organized way. We simply followed the leader. We agreed with the monarch. We converted to the emperor’s religion. We believed in the gods that the priests proclaimed. If we disagreed, we were ignored or banished or killed. Simple enough.

With the advent of democracy, public life grew messy. We could no longer say, “You will believe this because the emperor believes it.” Rather, we had to persuade. The basic argument was simple: “You should believe this because it provides advantages.” We needed rules and pointers for making such arguments successfully. Socrates and Aristotle (and many others) rose to the challenge and invented rhetoric.

Democracy, then, is about disagreement. We recognize that we will disagree. Indeed, we recognize that we should disagree. The trick is to disagree without anger or violence. We seek to persuade, not to subdue. In fact, here’s a simple test of how democratic a society is:

What proportion of the population agrees with the following statement?

“Of course, we’re going to disagree. But we’ve agreed to resolve our disagreements without violence.”

It seems like a simple test. But we overlook it at our peril. Societies that can’t pass this test (and many can’t) are forever doomed to civil strife, violence, disruption, and dysfunction.

The chief function of rhetoric is to teach us to argue without anger. The fundamental questions of rhetoric pervade both our public and private lives. How can I persuade someone to see a different perspective? How can I persuade someone to agree with me? How can we forge a common vision?

Up through the 19th century, educated people were well versed in rhetoric. All institutions of higher education taught the trivium, which consisted of logic, grammar, and rhetoric. Having mastered the trivium, students could progress to the quadrivium – arithmetic, geometry, music, and astronomy. The trivium provided the platform upon which everything else rested.

In the 20th century, we saw the rise of mass communications, government-sponsored propaganda, and widespread public relations campaigns; social media followed in the 21st. Ironically, we also decided that we no longer needed to teach rhetoric. We considered it manipulative. To insult an idea, we called it “empty rhetoric”.

But rhetoric also helps us defend ourselves against mass manipulation, which flourished in the 20th century and continues to flourish today. (Indeed, in the 21st century, we seem to want to hone it to an even finer point). We sacrificed our defenses at the very moment that manipulation surged forward. Having no defenses, we became angrier and less tolerant.

What to do? The first step is to revive the arts of persuasion and critical thinking. Essentially, we need to revive the trivium. By doing so, we’ll be better able to argue without anger and to withstand the effects of mass manipulation. Reviving rhetoric won’t solve the world’s problems. But it will give us a tool to resolve problems – without violence and without anger.

Managing Agreement: The Abilene Paradox

I want to be a team player, but….

I used to think it was difficult to manage conflict. Now I wonder if it isn’t more difficult to manage agreement.

A conflicted organization is fairly easy to analyze. The signs are abundant. You can quickly identify the conflicting groups as well as the members of each. You can identify grievances simply by talking with people. You can figure out who is “us” and who is “them”. Solving the problem may prove challenging but, at the very least, you know two things: 1) there is a problem; 2) its general contours are easy to see.

When an organization is in agreement, on the other hand, you may not even know that a problem exists. Everything floats along smoothly. People may not quiver with enthusiasm but no one is throwing furniture or shouting obscenities. Employees work and things get done.

The problem with an organization in agreement is that many participants actually disagree. But the disagreement doesn’t bubble up and out. There are at least two scenarios in which this happens:

  1. The Abilene Paradox – in the original telling, four members of a family in Coleman, Texas, drove 53 miles to Abilene in a car without air conditioning in 104-degree heat to have dinner at a crummy diner. After driving 53 miles back, they ‘fessed up: not one of them had wanted to go. Each person thought the others wanted to go. They agreed to be agreeable. (A related phenomenon is the risky shift, in which groups make riskier decisions than their individual members would on their own.)

Similar paradoxes arise in organizations all the time. Each employee wants to be seen as a team player. They may have reservations about a decision but — because everyone else agrees or seems to agree — they keep quiet. Perhaps nobody actually supports a given project, but each person believes that everyone else does. Perhaps nobody wants to work on Project X. Nevertheless, Project X persists. Unlike in a conflicted organization, nobody realizes that a problem exists.

  2. Fear – in organizations where failure is not an option, employees work hard to salvage success even from doomed projects. Admitting that a project has failed invites punishment. Employees happily throw good money after bad, hoping to snatch victory from the jaws of defeat. Employees agree that failure must be delayed or hidden.

The second scenario is perhaps more dangerous but less common. A fear-based culture – if left untreated – will eventually corrupt the entire organization. Employees grow afraid of telling the truth. The remedy is easy to discern but hard to execute: the organization needs to replace executive management and create a new culture.

The Abilene paradox is perhaps less dangerous but far more common. Any organization that strives to “play as a team” or “hire team players” is at risk. Employees learn to go along with the team, even if they believe the team is wrong.

What can be done to overcome the Abilene paradox in an organization? Rosabeth Moss Kanter points out that there are two parts to the problem. First, employees make inaccurate assumptions about what others believe. Second, even though they disagree, they don’t feel comfortable speaking up. A good manager can work on both sides of the problem. Kanter suggests the following:

  • Debates – include an active debate in all decision processes. Choose sides and formally air out the pros and cons of a situation. (I’ve suggested something similar in the decision by trial process).
  • Assign devil’s advocates and give them the time and resources to develop a real position.
  • Encourage organizational graffiti – I think of this as the electronic equivalent of Speakers’ Corner in Hyde Park – a place where people can get things off their chests.
  • Make confronters into heroes — even if you disagree with the message, reward the process.
  • Develop a culture of pride – build collective self-esteem, not just individual self-esteem. We’re proud of what we have, including the right (or even the obligation) to disagree.

The activities needed to ward off the Abilene paradox are not draconian. Indeed, they’re fairly easy to implement. But you can only implement them if you realize that a problem exists. That’s the hard part.

Debiasing and Corporate Performance

Loss aversion bias? Or maybe I’m just satisficing?

Over the past several years, I’ve written a number of articles about cognitive biases. I hope I have alerted my readers to the causes and consequences of these biases. My general approach is simple: forewarned is forearmed.

I didn’t realize that I was participating in a more general trend known as debiasing. As Wikipedia notes, “Debiasing is the reduction of bias, particularly with respect to judgment and decision making.” The basic idea is that we can change things to help people and organizations make better decisions.

What can we change? According to A User’s Guide To Debiasing, we can do two things:

  1. Modify the decision maker – we do this by “providing some combination of knowledge and tools to help [people] overcome their limitations and dispositions.”
  2. Modify the environment – we do this by “alter[ing] the setting where judgments are made in a way that … encourages better strategies.”

I’ve been using a Type 1 approach. I’ve aimed at modifying the decision maker by providing information about the source of biases and describing how they skew our perception of reality. We often aren’t aware of the nature of our own perception and judgment. I liken my approach to making the fish aware of the water they’re swimming in. (To review some of my articles in this domain, click here, here, here, and here).

What does a Type 2 approach look like? How do we modify the environment? The general domain is called choice architecture. The idea is that we change the process by which the decision is made. The book Nudge by Richard Thaler and Cass Sunstein is often cited as an exemplar of this type of work. (My article on using a courtroom process to make corporate decisions fits in the same vein).

How important is debiasing in the corporate world? In 2013, McKinsey & Company surveyed 770 corporate board members to determine the characteristics of a high-performing board. The “biggest aspiration” of high-impact boards was “reducing decision biases”. As McKinsey notes, “At the highest level, boards look inward and aspire to more ‘meta’ practices—deliberating about their own processes, for example—to remove biases from decisions.”

More recently, McKinsey has written about the business opportunity in debiasing. They note, for instance, that businesses are least likely to question their core processes. Indeed, they may not even recognize that they are making decisions. In my terminology, they’re not aware of the water they’re swimming in. As a result, McKinsey concludes “…most of the potential bottom-line impact from debiasing remains unaddressed.”

What to do? Being a teacher, I would naturally recommend training and education programs as a first step. McKinsey agrees … but only up to a point. McKinsey notes that many decision biases are so deeply embedded that managers don’t recognize them. They swim blithely along without recognizing how the water shapes and distorts their perception. Or, perhaps more frequently, they conclude, “I’m OK. You’re Biased.”

Precisely because such biases frequently operate in System 1 as opposed to System 2, McKinsey suggests a program consisting of both training and structural changes. In other words, we need to modify both the decision maker and the decision environment. I’ll write more about structural changes in the coming weeks. In the meantime, if you’d like a training program, give me a call.

Failure Is An Option

Houston, we have a problem … but we’d rather not talk about it.

The movie Apollo 13 came out in 1995 and popularized the phrase “Failure is not an option”. The flight director, Gene Kranz (played by Ed Harris), repeated the phrase to motivate engineers to find a solution immediately. It worked.

I bet that Kranz’s signature phrase caused more failures in American organizations than any other single sentence in business history. I know it caused myriad failures – and a culture of fear – in my company.

Our CEO loved to spout phrases like “Failure is not an option” and “We will not accept failure here.” It made him feel good. He seemed to believe that repeating the mantra could banish failure forever. It became a magical incantation.

Of course, we continued to have failures in our company. We built complicated software and we occasionally ran off the rails. What did we do when a failure occurred? We buried it. Better a burial than a “public hanging”.

The CEO’s mantra created a perverse incentive. He wanted to eliminate failures. We wanted to keep our jobs. To keep our jobs, we had to bury our failures. Because we buried them, we never fixed the processes that led to the failures in the first place. Our executives could easily conclude that our processes were just fine. After all, we didn’t have any failures, did we?

As we’ve learned elsewhere, design thinking is all about improving something and then improving it again and then again and again. How can we design a corporate culture that continuously improves?

One answer is the concept of the just culture. A just culture acknowledges that failures occur. Many failures result from systemic or process problems rather than from individual negligence. It’s not the person; it’s the system. A just culture aims to improve the system to 1) prevent failures wherever possible, or 2) ameliorate them when they do occur. In a sense, it’s a culture designed to improve itself.

According to Barbara Brunt, “A just culture recognizes that individual practitioners should not be held accountable for system failings over which they have no control.” Rather than hiding system failures, a just culture encourages employees to report them. Designers can then improve the systems and processes. As the system improves, the culture also improves. Employees realize that reporting failures leads to good outcomes, not bad ones. It’s a virtuous circle.

The concept of a just culture is not unlike appreciative inquiry. Managers recognize that most processes work pretty well. They appreciate the successes. Failure is an exception – it’s a cause for action and design thinking as opposed to retribution. We continue to appreciate the employee as we redesign the process.

The just culture concept has established a firm beachhead among hospitals in the United States. That makes sense because hospital mistakes can be especially tragic. But I wonder if the concept shouldn’t spread to a much wider swath of companies and agencies. I can certainly think of a number of software companies that could improve their quality by improving their culture. Ultimately, I suspect that every organization could benefit by adapting a simple principle of just culture: if you want to improve your outcomes, recruit your employees to help you.

I’ve learned a bit about just culture because one of my former colleagues, Kim Ross, recently joined Outcome Engenuity, the leading consulting agency in the field of just culture. You can read more about them here. You can learn more about hospital use of just culture by clicking here, here, and here.

Too Much Rhetoric? Or Not Enough?

When will we learn to argue without anger?

In the Western world, the art of persuasion (aka rhetoric) first appeared in ancient Athens. We might well ask: why did it emerge there and then, as opposed to another place and another time?

In his book, Words Like Loaded Pistols, Sam Leith argues that rhetoric blossomed first in Greece because that’s where democracy emerged. Prior to that, we didn’t need to argue or persuade or create ideas — at least not in the public sphere. We just accepted as true whatever the monarch said was true. There was no point in arguing. The monarch wasn’t going to budge.

Because Greeks allowed citizens from different walks of life to speak in the public forum, they were the first people who needed to manage ideas and arguments. In response, they developed the key concepts of rhetoric. They also established the idea that rhetoric was an essential element of good leadership. A leader needed to manage the passions of the moment by speaking logically, clearly, and persuasively.

Through the 19th century, well-educated people were thoroughly schooled in rhetoric as well as the related disciplines of logic and grammar. These were known as the trivium and they helped us manage public ideas. Debates, governed by the rules of rhetoric, helped us create new ideas. Thesis led to antithesis led to synthesis. We considered the trivium to be an essential foundation for good leadership. Leaders have to create ideas, explain ideas, and defend ideas. The trivium provided the tools.

Then in the 20th century, we decided that we didn’t need to teach these skills anymore. Leith argues that we came to see history as an impersonal, overwhelming, uncontrollable force in its own right. Why argue about it if we can’t control it? Courses in rhetoric — and leadership — withered away.

It’s interesting to look at rhetoric as an essential part of democracy. It’s not something to be scorned. It’s something to be promoted. I wonder if some of our partisan anger and divisiveness doesn’t result from the lack of rhetoric in our society. We don’t have too much rhetoric. Rather, we have too little. We have forgotten how to argue without anger.

I’m happy to see that rhetoric and persuasion classes are making a comeback in academia today. Similarly, courses in leadership seem to be flowering again. Perhaps we can look forward to using disagreement to create new ideas rather than as an anvil on which to destroy them.
