Strategy. Innovation. Brand.


Managing Agreement: The Abilene Paradox.

I want to be a team player, but….

I used to think it was difficult to manage conflict. Now I wonder if it isn’t more difficult to manage agreement.

A conflicted organization is fairly easy to analyze. The signs are abundant. You can quickly identify the conflicting groups as well as the members of each. You can identify grievances simply by talking with people. You can figure out who is “us” and who is “them”. Solving the problem may prove challenging but, at the very least, you know two things: 1) there is a problem; 2) its general contours are easy to see.

When an organization is in agreement, on the other hand, you may not even know that a problem exists. Everything floats along smoothly. People may not quiver with enthusiasm but no one is throwing furniture or shouting obscenities. Employees work and things get done.

The problem with an organization in agreement is that many participants actually disagree. But the disagreement doesn’t bubble up and out. There are at least two scenarios in which this happens:

  1. The Abilene Paradox – in the original telling, four members of a family in Coleman, Texas drove 53 miles to Abilene in a car without air conditioning in 104-degree heat to have dinner at a crummy diner. After driving 53 miles back, they ‘fessed up: not one of them had wanted to go. Each person thought the others wanted to go. They agreed to be agreeable. (A variant of this is known as the risky shift).

Similar paradoxes arise in organizations all the time. Each employee wants to be seen as a team player. They may have reservations about a decision but, because everyone else agrees or seems to agree, they keep quiet. Perhaps nobody actually supports a given project, yet each person believes that everyone else does. Perhaps nobody wants to work on Project X; nevertheless, Project X persists. Unlike in a conflicted organization, nobody realizes that a problem exists.

  2. Fear – in organizations where failure is not an option, employees work hard to salvage success even from doomed projects. Admitting that a project has failed invites punishment. Employees happily throw good money after bad, hoping to snatch victory from the jaws of defeat. Employees agree that failure must be delayed or hidden.

The second scenario is perhaps more dangerous but less common. A fear-based culture – if left untreated – will eventually corrupt the entire organization. Employees grow afraid of telling the truth. The remedy is easy to discern but hard to execute: the organization needs to replace executive management and create a new culture.

The Abilene paradox is perhaps less dangerous but far more common. Any organization that strives to “play as a team” or “hire team players” is at risk. Employees learn to go along with the team, even if they believe the team is wrong.

What can be done to overcome the Abilene paradox in an organization? Rosabeth Moss Kanter points out that there are two parts to the problem. First, employees make inaccurate assumptions about what others believe. Second, even though they disagree, they don’t feel comfortable speaking up. A good manager can work on both sides of the problem. Kanter suggests the following:

  • Debates – include an active debate in all decision processes. Choose sides and formally air out the pros and cons of a situation. (I’ve suggested something similar in the decision by trial process).
  • Assign devil’s advocates and give them the time and resources to develop a real position.
  • Encourage organizational graffiti – I think of this as the electronic equivalent of Hyde Park's Speakers' Corner – a place where people can get things off their chests.
  • Make confronters into heroes — even if you disagree with the message, reward the process.
  • Develop a culture of pride – build collective self-esteem, not just individual self-esteem. We’re proud of what we have, including the right (or even the obligation) to disagree.
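
Kanter's first point – that employees make inaccurate assumptions about what others believe – suggests an easy first tool: ask anonymously. Here is a minimal sketch of that idea; the function name, the vote options, and the sample data are my own illustration, not Kanter's:

    from collections import Counter

    def anonymous_poll(responses):
        """Tally anonymous votes so no one has to go on record.

        Anonymity removes the 'team player' penalty for dissent,
        so stated positions can match real positions.
        """
        total = len(responses)
        return {option: count / total
                for option, count in Counter(responses).items()}

    # Ten employees who all publicly agreed to Project X:
    votes = ["disagree", "unsure", "disagree", "agree", "disagree",
             "unsure", "disagree", "agree", "disagree", "unsure"]
    print(anonymous_poll(votes))
    # {'disagree': 0.5, 'unsure': 0.3, 'agree': 0.2}

If the poll reveals that half the team privately disagrees, the paradox is broken: everyone now knows that a problem exists.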

The activities needed to ward off the Abilene paradox are not draconian. Indeed, they’re fairly easy to implement. But you can only implement them if you realize that a problem exists. That’s the hard part.

Debiasing and Corporate Performance

Loss aversion bias? Or maybe I’m just satisficing?

Over the past several years, I've written a number of articles about cognitive biases. I hope I've alerted my readers to the causes and consequences of these biases. My general approach is simple: forewarned is forearmed.

I didn’t realize that I was participating in a more general trend known as debiasing. As Wikipedia notes, “Debiasing is the reduction of bias, particularly with respect to judgment and decision making.” The basic idea is that we can change things to help people and organizations make better decisions.

What can we change? According to A User’s Guide To Debiasing, we can do two things:

  1. Modify the decision maker – we do this by “providing some combination of knowledge and tools to help [people] overcome their limitations and dispositions.”
  2. Modify the environment – we do this by “alter[ing] the setting where judgments are made in a way that … encourages better strategies.”

I’ve been using a Type 1 approach. I’ve aimed at modifying the decision maker by providing information about the source of biases and describing how they skew our perception of reality. We often aren’t aware of the nature of our own perception and judgment. I liken my approach to making the fish aware of the water they’re swimming in. (To review some of my articles in this domain, click here, here, here, and here).

What does a Type 2 approach look like? How do we modify the environment? The general domain is called choice architecture. The idea is that we change the process by which the decision is made. The book Nudge by Richard Thaler and Cass Sunstein is often cited as an exemplar of this type of work. (My article on using a courtroom process to make corporate decisions fits in the same vein).
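
To make the Type 2 idea concrete, here is a minimal sketch of the best-known choice-architecture lever, the default option. The scenario (retirement-plan enrollment) and the 80% inertia rate are assumptions for illustration, not figures from Thaler and Sunstein:

    import random

    def enroll(default_opt_in, inertia=0.8):
        """Simulate one employee's enrollment decision.

        The decision maker is unchanged; only the environment
        (the pre-selected default) differs. Most people accept
        whatever is already selected.
        """
        if random.random() < inertia:
            return default_opt_in      # stick with the default
        return not default_opt_in      # make an active choice

    random.seed(1)
    for default in (False, True):
        enrolled = sum(enroll(default) for _ in range(10_000))
        print(f"default opt-in={default}: {enrolled / 10_000:.0%} enrolled")

Same employees, same biases – yet flipping the default roughly quadruples enrollment. That is modifying the environment rather than the decision maker.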

How important is debiasing in the corporate world? In 2013, McKinsey & Company surveyed 770 corporate board members to determine the characteristics of a high-performing board. The “biggest aspiration” of high-impact boards was “reducing decision biases”. As McKinsey notes, “At the highest level, boards look inward and aspire to more ‘meta’ practices—deliberating about their own processes, for example—to remove biases from decisions.”

More recently, McKinsey has written about the business opportunity in debiasing. They note, for instance, that businesses are least likely to question their core processes. Indeed, they may not even recognize that they are making decisions. In my terminology, they’re not aware of the water they’re swimming in. As a result, McKinsey concludes “…most of the potential bottom-line impact from debiasing remains unaddressed.”

What to do? Being a teacher, I would naturally recommend training and education programs as a first step. McKinsey agrees … but only up to a point. McKinsey notes that many decision biases are so deeply embedded that managers don’t recognize them. They swim blithely along without recognizing how the water shapes and distorts their perception. Or, perhaps more frequently, they conclude, “I’m OK. You’re Biased.”

Precisely because such biases frequently operate in System 1 as opposed to System 2, McKinsey suggests a program consisting of both training and structural changes. In other words, we need to modify both the decision maker and the decision environment. I’ll write more about structural changes in the coming weeks. In the meantime, if you’d like a training program, give me a call.

Failure Is An Option

Houston, we have a problem … but we’d rather not talk about it.

The movie Apollo 13 came out in 1995 and popularized the phrase “Failure is not an option”. The flight director, Gene Kranz (played by Ed Harris), repeated the phrase to motivate engineers to find a solution immediately. It worked.

I bet that Kranz’s signature phrase caused more failures in American organizations than any other single sentence in business history. I know it caused myriad failures – and a culture of fear – in my company.

Our CEO loved to spout phrases like “Failure is not an option” and “We will not accept failure here.” It made him feel good. He seemed to believe that repeating the mantra could banish failure forever. It became a magical incantation.

Of course, we continued to have failures in our company. We built complicated software and we occasionally ran off the rails. What did we do when a failure occurred? We buried it. Better a burial than a “public hanging”.

The CEO’s mantra created a perverse incentive. He wanted to eliminate failures. We wanted to keep our jobs. To keep our jobs, we had to bury our failures. Because we buried them, we never fixed the processes that led to the failures in the first place. Our executives could easily conclude that our processes were just fine. After all, we didn’t have any failures, did we?

As we’ve learned elsewhere, design thinking is all about improving something and then improving it again and then again and again. How can we design a corporate culture that continuously improves?

One answer is the concept of the just culture. A just culture acknowledges that failures occur. Many failures result from systemic or process problems rather than from individual negligence. It's not the person; it's the system. A just culture aims to improve the system to 1) prevent failures wherever possible, or 2) ameliorate them when they do occur. In a sense, it's a culture designed to improve itself.

According to Barbara Brunt, “A just culture recognizes that individual practitioners should not be held accountable for system failings over which they have no control.” Rather than hiding system failures, a just culture encourages employees to report them. Designers can then improve the systems and processes. As the system improves, the culture also improves. Employees realize that reporting failures leads to good outcomes, not bad ones. It’s a virtuous circle.

The concept of a just culture is not unlike appreciative inquiry. Managers recognize that most processes work pretty well. They appreciate the successes. Failure is an exception – it’s a cause for action and design thinking as opposed to retribution. We continue to appreciate the employee as we redesign the process.

The just culture concept has established a firm beachhead among hospitals in the United States. That makes sense because hospital mistakes can be especially tragic. But I wonder if the concept shouldn’t spread to a much wider swath of companies and agencies. I can certainly think of a number of software companies that could improve their quality by improving their culture. Ultimately, I suspect that every organization could benefit by adapting a simple principle of just culture: if you want to improve your outcomes, recruit your employees to help you.

I’ve learned a bit about just culture because one of my former colleagues, Kim Ross, recently joined Outcome Engenuity, the leading consulting agency in the field of just culture. You can read more about them here. You can learn more about hospital use of just culture by clicking here, here, and here.

Innovating The Innovations

It’s a mashup!

Mashup thinking is an excellent way to develop new ideas and products. Rather than thinking outside the box (always difficult), you select ideas from multiple boxes and mash them together. Sometimes, nothing special happens. Sometimes, you get a genius idea.
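
The mechanics are simple enough to sketch in code: pick two "boxes" of existing ideas and enumerate the cross-products. The boxes below are my own toy examples:

    from itertools import product

    # Two "boxes" of existing ideas (illustrative only).
    vehicles = ["self-driving car", "self-driving truck"]
    aircraft = ["package drone", "passenger drone", "surveillance drone"]

    # Mash every idea in one box against every idea in the other.
    for v, a in product(vehicles, aircraft):
        print(f"{v} + {a}")

Most of the six combinations are nothing special; one or two might be worth pursuing. The rest of this article works through one of them.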

Let’s mash up self-driving vehicles and drones to see what we get. First, let’s look at the current paradigms:

Self-driving vehicles (SDVs) include cars and trucks equipped with special sensors that can use existing public roadways to navigate autonomously to a given destination. The vehicles navigate a two-dimensional surface and should be able to get humans or packages from Point A to Point B more safely than human-driven vehicles. Individuals may not buy SDVs the way we have traditionally bought cars and trucks. We may simply call them when needed. Though the technology is rapidly improving, the legal and ethical systems still require a great deal of work.

Drones navigate three-dimensional space and are not autonomous. Rather, specially trained pilots fly them remotely. (They are often referred to as Remotely Piloted Aircraft, or RPAs). The military uses drones for several missions, including surveillance, intelligence gathering, and attacking ground targets. To date, we haven't heard of drones attacking airborne targets, but it's certainly possible. Increasingly, businesses are considering drones for package delivery. The general paradigm is that a small drone will pick up a package from a warehouse (perhaps an airborne warehouse) and deliver it to a home or office or to troops in the field.

So, what do we get if we mash up self-driving vehicles and drones?

The first idea that comes to mind is an autonomous drone. Navigating 3D space is actually simpler than navigating 2D space – you can fly over or under an approaching object. (As a result, train traffic controllers have a more difficult job than air traffic controllers). Why would we want self-flying drones? Conceivably they would be more efficient, less costly, and safer than the human-driven equivalents. They also have a lot more space to operate in and don’t require a lot of asphalt.

We could also change the paradigm for what drones carry. Today, we think of them as carrying packages. Why not people, just like SDVs? It shouldn't be terribly hard to design a drone that could comfortably carry a couple from their house to the theater and back. We'll be able to whip out our smartphones, call Uber or Lyft, and have a drone pick us up. (I hope Lyft has trademarked the term Air Lyft).

What else? How about combining self-flying drones with self-driving vehicles? Today’s paradigm for drone deliveries is that an individual drone goes to a warehouse, picks up a package, and delivers it to an individual address. Even if the warehouse is airborne and mobile, that’s horribly inefficient. Instead, let’s try this: a self-driving truck picks up hundreds of packages to be delivered along a given route. The truck also has dozens of drones on it. As the truck passes near an address, a drone picks up the right package, and flies it to the doorstep. We could only do this, of course, if drones are autonomous. The task is too complicated for a human operator.
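
Here is a minimal sketch of that dispatch logic, under toy assumptions of my own: a one-dimensional route, instantaneous drone flights, and an invented launch range. It shows why autonomy matters – every launch decision depends on the truck's continuously changing position:

    def dispatch(route_stops, truck_positions, launch_range=2.0):
        """Launch a drone for each address the truck passes within range.

        route_stops: {address: position along the route, in km}
        truck_positions: successive truck positions, in km
        """
        pending = dict(route_stops)
        for pos in truck_positions:
            ready = [addr for addr, p in pending.items()
                     if abs(p - pos) <= launch_range]
            for addr in ready:
                print(f"truck at km {pos}: launch drone to {addr}")
                del pending[addr]
        return pending  # whatever couldn't be delivered

    # Truck drives km 0 through 10 past three addresses.
    leftovers = dispatch({"12 Elm": 1.5, "48 Oak": 5.0, "9 Pine": 9.5},
                         truck_positions=range(11))
    assert not leftovers

Scale the toy model up to hundreds of packages and dozens of simultaneous drones, and the coordination problem is clearly beyond a crew of human pilots.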

I could go on … but let’s also investigate the knock-on effects. If what I’ve described comes to pass, what else will happen? Here are some challenges that will probably come up:

  • If drones can carry people as well as packages, we’ll need fewer roadways. What will we do with obsolete roads? We’ll probably need fewer airports, too. What will we do with them?
  • If people no longer buy personal vehicles but call transportation on demand:
    • We’ll need far fewer parking lots. How can cities use the space to revitalize themselves?
    • Automobile companies will implode. How do we retrain automobile executives and workers?
    • We’ll burn far less fossil fuel. This will be good for the environment but bad for, say, oil companies and oil workers. How do we share the burden?
  • If combined vehicles – drones and SDVs – deliver packages, millions of warehouse workers and drivers will lose their jobs. Again, how do we share the burden?
  • If autonomous drones can attack airborne targets, do we really need expensive, human-piloted fighter jets?

These are intriguing predictions as well as troublesome challenges. But the thought process for generating these ideas is quite simple – you simply mash up good ideas from multiple boxes. You, too, can predict the future.

Us Versus Them

"The school bus broke down!"

“The school bus broke down!”

How easy is it for an us-versus-them situation to arise? How often do we define our group as different from – and therefore better than – another group? The short answers: It’s surprisingly easy and it happens all the time.

In my professional life, I often saw us-versus-them attitudes arise between headquarters and the field. Staffers at headquarters thought they were in a good position to direct field activities. People in the field thought the folks at headquarters just didn't have a clue about the real world.

Headquarters and the field are typically separated by many factors, including geography, planning horizons, rank, age, academic experience, and tenure. Each side has plenty of reasons to feel different from – and superior to – the other side. But how many reasons does it take to generate us-versus-them attitudes?

In the early 1970s, the social psychologist Henri Tajfel tried to work out the minimum requirements for one group to discriminate against another group. It turns out that it doesn’t take much. People who are separated into groups based on their shirt color develop us-versus-them attitudes. People who are separated based on the flip of a coin do the same. Tajfel’s minimal group paradigm is quite simple: The minimum requirement to create us-versus-them attitudes is the existence of two groups.

Us-versus-them attitudes are completely natural. They arise without provocation. There’s no conspiracy. All we need is two groups. I sometimes hear managers say, “Let’s not develop us-versus-them attitudes here.” But that’s completely unnatural. Something about our human nature requires us to develop such attitudes when two groups exist. It can’t not happen.

We can’t avoid us-versus-them attitudes but we can dissolve them. We can’t stop them from starting but we can stop them once they have started.

The pioneering research on this was the Robbers Cave Experiment conducted in 1954. Muzafer and Carolyn Sherif, professors at the University of Oklahoma, selected two dozen 12-year-old boys from suburban Oklahoma City and sent them off to summer camp at Robbers Cave State Park. The boys were quite similar in terms of ethnicity and socioeconomic status. None of the boys knew each other at the beginning of the experiment.

The boys were randomly divided into two groups and housed in different areas of the campground. Initially, the groups didn’t know of each other’s existence. They discovered each other only when they began to compete for camp resources, like playing fields or dining halls. Once they discovered each other, they quickly named their groups: Rattlers and Eagles.

So far, the boys’ behavior was entirely predictable. The research question was: How do you change such behavior to reduce us-versus-them attitudes?

The researchers first measured the impact of mere contact. The researchers thought that by getting the boys to mingle – in dining halls or on camp buses, for example – they could overcome negative attitudes and build relationships. The finding: mere contact did not change attitudes for the better. Indeed, when contact was coupled with competition for resources, it increased friction rather than reducing it.

The researchers then moved on to superordinate goals. The two groups had to cooperate to achieve a goal that neither group could achieve on its own. For example, the researchers arranged for the camp bus to “break down”. They also arranged for the water supply to go dry. Rattlers and Eagles had to work together to fix the problems. The finding: cooperation on a larger goal reduced friction and the two groups began to integrate. Rattlers and Eagles actually started to like each other.

The research that the Sherifs started has now grown into a domain known as realistic conflict theory or RCT. The theory suggests that groups will develop resentful attitudes towards other groups, especially when they compete for resources in a zero-sum situation. According to Wikipedia, RCT suggests that “…positive relations can only be restored if superordinate goals are in place.”

The moral of the story is simple: you can’t prevent us-versus-them attitudes but you can fix them. Just find a problem that requires cooperation and collaboration.
