Remember Tetris? Originally released in Russia in 1984, it became the top-selling video game of all time, with more than 495 million copies in circulation. It’s a simple game – differently shaped tiles fall from the top of the screen and you arrange them in patterns at the bottom.
It seems like a simple-minded time killer. It’s not rocket science. But it turns out to have some interesting effects on our brains. Here are two recent studies.
Tetris and Intrusive Memories
Let’s say you’re in a car accident. The trauma, whether physical or psychological, may result in intrusive memories, which are hallmarks of post-traumatic stress disorder (PTSD).
When an intrusive memory occurs, the survivor essentially relives the traumatic event. It seems plausible that reducing intrusive memories would help survivors manage stress and maintain mental health. So, how might one reduce intrusive memories or prevent their formation in the first place? How about Tetris?
This was the hypothesis of a study recently published in Molecular Psychiatry. Researchers recruited 71 subjects in an emergency room at a hospital in Oxford, England. The subjects had recently (less than six hours previously) experienced an automobile accident and were randomly assigned to one of two groups: one group played Tetris shortly after the accident; the control group did not.
Researchers contacted the subjects one week and one month after the accident. The result? Subjects who played Tetris “recorded significantly fewer intrusive memories” and “reported less distress from intrusion symptoms.”
Tetris and Your Cortex
What about people who aren’t involved in a traumatic event? Does Tetris have an impact on them? This was the question asked several years ago in a study conducted by researchers at the University of New Mexico.
The researchers recruited 26 girls, aged 12 to 15, and randomly assigned them to the experimental group or the control group. The researchers taught the girls in the experimental group to play Tetris and coached them to play regularly. The girls in the control group were coached not to play Tetris. The researchers followed the two groups for three months. During that time the girls in the Tetris group played the game approximately 90 minutes per week.
At the end of three months, the Tetris-playing girls had a “significantly thicker cortex” than the non-Tetris-playing girls. The cortex is gray matter and is generally associated with higher-level brain functions such as memory, attention, and planning.
Does this mean that playing Tetris will make you smarter or your brain more efficient? Probably not. Playing Tetris probably only makes you better at playing Tetris. But it’s more evidence that the brain is plastic; you can change it by how you behave. It’s not surprising in a group of youngsters whose brains are not yet mature. It might be very telling to replicate the experiment with a group of oldsters to see just how plastic their brains are. My hypothesis: there’s still a lot of plasticity left.
So Tetris can teach us about brain plasticity and help suppress intrusive memories. Not bad for a free video game. I wonder what else it can do.
I used to think it was difficult to manage conflict. Now I wonder if it isn’t more difficult to manage agreement.
A conflicted organization is fairly easy to analyze. The signs are abundant. You can quickly identify the conflicting groups as well as the members of each. You can identify grievances simply by talking with people. You can figure out who is “us” and who is “them”. Solving the problem may prove challenging but, at the very least, you know two things: 1) there is a problem; 2) its general contours are easy to see.
When an organization is in agreement, on the other hand, you may not even know that a problem exists. Everything floats along smoothly. People may not quiver with enthusiasm but no one is throwing furniture or shouting obscenities. Employees work and things get done.
The problem with an organization in agreement is that many participants actually disagree. But the disagreement doesn’t bubble up and out. There are at least two scenarios in which this happens: the Abilene paradox, in which employees mistakenly assume that everyone else agrees, and a fear-based culture, in which employees are afraid to speak up.
Paradoxes like this arise in organizations all the time. Each employee wants to be seen as a team player. They may have reservations about a decision but — because everyone else agrees or seems to agree — they keep quiet. Perhaps nobody actually supports a given project, but each person believes that everyone else does. Perhaps nobody wants to work on Project X. Nevertheless, Project X persists. Unlike in a conflicted organization, nobody realizes that a problem exists.
The second scenario is perhaps more dangerous but less common. A fear-based culture – if left untreated – will eventually corrupt the entire organization. Employees grow afraid of telling the truth. The remedy is easy to discern but hard to execute: the organization needs to replace executive management and create a new culture.
The Abilene paradox is perhaps less dangerous but far more common. Any organization that strives to “play as a team” or “hire team players” is at risk. Employees learn to go along with the team, even if they believe the team is wrong.
What can be done to overcome the Abilene paradox in an organization? Rosabeth Moss Kanter points out that there are two parts to the problem. First, employees make inaccurate assumptions about what others believe. Second, even though they disagree, they don’t feel comfortable speaking up. A good manager can work on both sides of the problem, and Kanter suggests remedies for each.
The activities needed to ward off the Abilene paradox are not draconian. Indeed, they’re fairly easy to implement. But you can only implement them if you realize that a problem exists. That’s the hard part.
Over the past several years, I’ve written several articles about cognitive biases. I hope I have alerted my readers to the causes and consequences of these biases. My general approach is simple: forewarned is forearmed.
I didn’t realize that I was participating in a more general trend known as debiasing. As Wikipedia notes, “Debiasing is the reduction of bias, particularly with respect to judgment and decision making.” The basic idea is that we can change things to help people and organizations make better decisions.
What can we change? According to A User’s Guide To Debiasing, we can do two things: modify the decision maker or modify the decision environment.
I’ve been using a Type 1 approach. I’ve aimed at modifying the decision maker by providing information about the source of biases and describing how they skew our perception of reality. We often aren’t aware of the nature of our own perception and judgment. I liken my approach to making the fish aware of the water they’re swimming in. (To review some of my articles in this domain, click here, here, here, and here).
What does a Type 2 approach look like? How do we modify the environment? The general domain is called choice architecture. The idea is that we change the process by which the decision is made. The book Nudge by Richard Thaler and Cass Sunstein is often cited as an exemplar of this type of work. (My article on using a courtroom process to make corporate decisions fits in the same vein).
How important is debiasing in the corporate world? In 2013, McKinsey & Company surveyed 770 corporate board members to determine the characteristics of a high-performing board. The “biggest aspiration” of high-impact boards was “reducing decision biases”. As McKinsey notes, “At the highest level, boards look inward and aspire to more ‘meta’ practices—deliberating about their own processes, for example—to remove biases from decisions.”
More recently, McKinsey has written about the business opportunity in debiasing. They note, for instance, that businesses are least likely to question their core processes. Indeed, they may not even recognize that they are making decisions. In my terminology, they’re not aware of the water they’re swimming in. As a result, McKinsey concludes “…most of the potential bottom-line impact from debiasing remains unaddressed.”
What to do? Being a teacher, I would naturally recommend training and education programs as a first step. McKinsey agrees … but only up to a point. McKinsey notes that many decision biases are so deeply embedded that managers don’t recognize them. They swim blithely along without recognizing how the water shapes and distorts their perception. Or, perhaps more frequently, they conclude, “I’m OK. You’re Biased.”
Precisely because such biases frequently operate in System 1 as opposed to System 2, McKinsey suggests a program consisting of both training and structural changes. In other words, we need to modify both the decision maker and the decision environment. I’ll write more about structural changes in the coming weeks. In the meantime, if you’d like a training program, give me a call.
The movie Apollo 13 came out in 1995 and popularized the phrase “Failure is not an option”. The flight director, Gene Kranz (played by Ed Harris), repeated the phrase to motivate engineers to find a solution immediately. It worked.
I bet that Kranz’s signature phrase caused more failures in American organizations than any other single sentence in business history. I know it caused myriad failures – and a culture of fear – in my company.
Our CEO loved to spout phrases like “Failure is not an option” and “We will not accept failure here.” It made him feel good. He seemed to believe that repeating the mantra could banish failure forever. It became a magical incantation.
Of course, we continued to have failures in our company. We built complicated software and we occasionally ran off the rails. What did we do when a failure occurred? We buried it. Better a burial than a “public hanging”.
The CEO’s mantra created a perverse incentive. He wanted to eliminate failures. We wanted to keep our jobs. To keep our jobs, we had to bury our failures. Because we buried them, we never fixed the processes that led to the failures in the first place. Our executives could easily conclude that our processes were just fine. After all, we didn’t have any failures, did we?
As we’ve learned elsewhere, design thinking is all about improving something and then improving it again and then again and again. How can we design a corporate culture that continuously improves?
One answer is the concept of the just culture. A just culture acknowledges that failures occur. Many failures result from systemic or process problems rather than from individual negligence. It’s not the person; it’s the system. A just culture aims to improve the system to 1) prevent failure wherever possible, or 2) ameliorate failures when they do occur. In a sense, it’s a culture designed to improve itself.
According to Barbara Brunt, “A just culture recognizes that individual practitioners should not be held accountable for system failings over which they have no control.” Rather than hiding system failures, a just culture encourages employees to report them. Designers can then improve the systems and processes. As the system improves, the culture also improves. Employees realize that reporting failures leads to good outcomes, not bad ones. It’s a virtuous circle.
The concept of a just culture is not unlike appreciative inquiry. Managers recognize that most processes work pretty well. They appreciate the successes. Failure is an exception – it’s a cause for action and design thinking as opposed to retribution. We continue to appreciate the employee as we redesign the process.
The just culture concept has established a firm beachhead among hospitals in the United States. That makes sense because hospital mistakes can be especially tragic. But I wonder if the concept shouldn’t spread to a much wider swath of companies and agencies. I can certainly think of a number of software companies that could improve their quality by improving their culture. Ultimately, I suspect that every organization could benefit by adapting a simple principle of just culture: if you want to improve your outcomes, recruit your employees to help you.
I’ve learned a bit about just culture because one of my former colleagues, Kim Ross, recently joined Outcome Engenuity, the leading consulting agency in the field of just culture. You can read more about them here. You can learn more about hospital use of just culture by clicking here, here, and here.
Most social sciences have a bad case of physics envy. They covet physics’ certainty, precision, and predictability. That’s certainly the case with rhetoric, the discipline that deals with the art and science of persuasion.
Physics allows us to make precise this-then-that statements. If we do this, then that is certain to happen. Those of us who teach rhetoric would love to have the same certainty. We would love to say, “If we arrange our argument like this, then the audience will certainly agree with us”.
But rhetoric deals with human beings whose behavior is anything but certain. Rhetoric teaches us to argue without anger so that we may find the best choice among multiple options. If we allow the rhetorical process to work, we can often find the best alternative. But we can never guarantee it. Rhetoric reigns in any human endeavor where uncertainty is certain.
So let’s compare physics and rhetoric. What are they good for and how do they complement each other? Further, let’s ask a simple question: which discipline is best for the planet? I’ll focus on deliberative rhetoric, which asks us to make choices about the future. (Two other major forms are demonstrative and forensic rhetoric).
Physics starts from what is true. Rhetoric starts from what we agree to.
Both physics and rhetoric use arguments in which propositions lead to a conclusion. We typically call these syllogisms, with major premises and minor premises. In physics, the major premise is always verifiably true. Here’s an example of a syllogism in physics:
Major premise: Nothing can travel faster than the speed of light.
Minor premise: We are travelling in a nuclear-powered space ship.
Conclusion: We are travelling slower than the speed of light.
The point of this somewhat simplistic syllogism is that the major premise is verifiably true. We start from the truth.
Now let’s look at a rhetorical syllogism, which is often referred to as an enthymeme (though the technical definition of an enthymeme is slightly different).
Major premise: Lower taxes are better for our society.
Minor premise: My party will lower taxes (more than the other party).
Conclusion: You should vote for my party.
Note that the major premise (also known as a commonplace) is debatable. But, if your audience agrees with the commonplace, you can proceed step by step to a conclusion that they will also agree with. It’s important to know what the audience believes. Rhetoric starts where they are, not where we are.
Physics seeks the truth. Rhetoric seeks the best choice.
Physics makes predictions and tests them to determine if they’re true. Typically, a prediction is either true or not.
Rhetoric, on the other hand, deals with choices. To discover the best choice, we first have to identify all possible choices. We then present evidence (which may or may not be verifiably true) to identify the best choice.
Physics is based on facts. Rhetoric is based on benefits.
Physics has a well-defined set of verifiable facts. Rhetoric, by contrast, depends on benefits, which may or may not accrue to a given set of people.
We state the benefits (often known as the Advantageous) to persuade people to agree with us. We might say, for instance, that lower taxes will benefit the middle class. This may well be true but, again, there is no certainty.
Physics cannot live without facts. Rhetoric cannot live solely with facts. We must be able to state benefits and advantages to persuade people to agree with a given proposition.
Physics follows rules of logic. Rhetoric follows rules of agreement.
The rules of logic are formal and specific. It’s easy to tell if we have violated logic. Even a computer can do it.
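The claim that a computer can check logic is easy to demonstrate. Here is a minimal sketch (my own illustration, not from the original post): an argument is logically valid exactly when every truth assignment that makes all the premises true also makes the conclusion true, and a program can verify that by brute force.

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """Return True if the argument is valid: no truth assignment
    satisfies every premise while falsifying the conclusion."""
    for values in product([False, True], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # found a counterexample
    return True

# Hypothetical syllogism: (p -> q), (q -> r), therefore (p -> r) -- valid
premises = [lambda p, q, r: (not p) or q,
            lambda p, q, r: (not q) or r]
conclusion = lambda p, q, r: (not p) or r
print(is_valid(premises, conclusion, 3))  # True

# Affirming the consequent: (p -> q), q, therefore p -- a fallacy
premises2 = [lambda p, q: (not p) or q,
             lambda p, q: q]
conclusion2 = lambda p, q: p
print(is_valid(premises2, conclusion2, 2))  # False
```

No such mechanical test exists for the rules of agreement, which is precisely the point of the contrast.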
The rules of agreement are much more open-ended. Credibility is important. Arguments may be emotional and may include enticing benefits and emoluments. Arguments do not have to be strictly logical. We may use various psychological and sociological tools of influence, including consistency or social proof or scarcity. We may do favors for you and influence you to like us. The goal is to gain agreement.
Physics is more like chess. Rhetoric is more like poker.
In chess, we first need to know the rules and the pieces. After that, logic takes over and guides our efforts.
In poker, we need to know the rules and the cards and the people. Thinking logically is important. But reading people is probably more important.
Physics is about the past. Rhetoric is about the future.
Physics explains what happened. Rhetoric probes what will happen. If we can argue without anger, we can consider and evaluate all possible options. We can evaluate benefits and possibilities. We can decide what’s fair and what’s not fair. We can agree on the best possible course of action.
Of course, we can’t prove that we’ve made the best choice. But the process of considering, evaluating, arguing, and influencing gives us a better chance of success than any other alternative.
The act of reaching an agreement also improves our chances of future success. As Lincoln noted, a house divided against itself cannot stand. The opposite is not necessarily true. A house united in agreement may not stand … but it has a much better chance than a house divided.
So, which discipline is more important to our future: physics or rhetoric? Physics has brought us awe-inspiring insights into the world around us. It has also given us the knowledge to destroy the world. Rhetoric, on the other hand, has taught us to argue without anger, gain agreement, and move people to action. Physics is about knowledge. Rhetoric is about wisdom. Physics could destroy the world. Rhetoric could possibly – just possibly – save it.