Let’s say that you’re arrested for a terrible crime. After a few months in jail, you’re taken to court for a trial. In the courtroom, you meet your lawyer for the first time. The judge selects a jury. You notice that all the jurors are white males and that they all went to the same business school.
As the trial begins, the judge tells the jurors – very directly – that she believes you’re guilty. The prosecutor then uses PowerPoint to present the evidence against you. The presentation consists of over 200 slides.
Your lawyer is not allowed to present any evidence – he can only respond to the prosecutor’s slides. He raises a number of objections but the prosecutor handles each one smoothly. You notice that the prosecutor doesn’t refute the objections but merely brushes them aside. The slides are complicated and hard to follow. You notice that some of the jurors are glassy-eyed. Others are checking their BlackBerries.
When the prosecutor’s presentation concludes, the jurors don’t adjourn to a separate room to discuss the case. Rather, the judge simply asks them, “So, do you agree with me?” Most of the jurors nod their heads and you’re whisked off to jail.
Could that really happen? Let’s hope not in a court of law. But, as Chip and Dan Heath point out in their book, Decisive, that’s exactly how corporations make bazillion-dollar decisions. Echoing Paul Nutt’s book, Why Decisions Fail, the Heaths point out that most business decisions really are one-sided.
Here’s how it goes. An executive gets an idea that just might be brilliant – or not. Then a process begins to justify the idea and convince top management to support it. Most of the members of top management recognize that a justification process is going on. They don’t really object to it because 1) they don’t have a forum to present their ideas; and 2) they don’t have the resources to develop the other side of the story or to investigate alternatives.
The justification process grinds on. As the Heaths point out, the process usually results in a “whether or not” decision. Executives don’t consider a range of alternatives but simply vote up or down, yes or no.
The process usually includes a top management meeting with a barrage of PowerPoint slides. I’ve participated in dozens of them. Usually someone in the meeting says, “Well, let me play devil’s advocate for a moment ….” No matter what that person says, the objection is somehow handled and brushed aside. The devil’s advocate may have a good point but he doesn’t have the data to back it up. The result is that the group feels “better” about the decision … “after all, we did consider both sides.”
Paul Nutt writes that business “…decisions fail half of the time. Vast sums are spent without realizing any benefits for the organization.” In other words, we could flip a coin and do just as well – and save a bundle on consulting fees.
We often complain about our judicial system. But the trial by jury – with evenly matched sides presenting evidence – is probably the best system ever developed for discerning the truth when the evidence is murky. In business, the evidence is often murky; we’re trying to predict the future with incomplete data. You want the best decision? Put it on trial.
I find that about half the time when I get to that stage, it’s because something external to me got me there. Another person gets my goat. Someone else screws up and I’m left holding the bag. A mechanical failure delays yet another flight and, no, I can’t get home tonight. As the saying goes, if it’s not one thing, it’s your mother.
If external factors cause about half of my annoyance attacks, where do the other half come from? Well … from me. How do I know this? Because I keep track of my flaming e-mails. I live a lot of my life online. I process roughly 100 to 150 e-mails per day.
When I’m annoyed, I sometimes send flaming e-mails. It just feels good to send a self-righteous missive excoriating the recipient for innate stupidity. “Were you born stupid or is this a recent development?” About half the time, my analysis is correct (though my tactics are self-defeating). The other half of the time, I ultimately find out that my own stupidity caused the problem. I failed to check a box, or fill in a blank, or submit the paperwork on time and, therefore, it’s my fault. Then I really feel stupid.
So now when I get to the end of my rope, I take a strategic pause. That’s a fancy term that comes from the critical thinking world but it’s a technique that we all learned in grade school: count to ten before you start throwing fists.
Actually, I do more than count to ten. Here’s a summary of my thinking:
“OK, I’m really annoyed. I know that when I’m annoyed, I don’t always think clearly. I also know that about half the time when I’m annoyed, it’s my own damn fault. So, how am I going to figure out: a) who or what caused this annoyance; b) what I’m going to do about it? I might need some facts here.”
Then I turn to my “go-to” questions. I call them “go-to” questions because I’ve used them often enough that they’re always with me. I may forget the rules of logic, but I can always go to these questions. There are four of them. The first two are for me. I use the last two only after considering the first two and only if there is another person involved in the annoyance incident.
The first two are simply:
By asking these two questions, I can go back to the beginning, recount the process that got me to where I got to, and decide whether I’m on firm ground or not. If not, I can start to make corrections. If I am on firm ground, I can ask the next two questions (of the other person):
These questions have helped me avoid countless misunderstandings. I might say, “Why do you think that?” The other person might say, “Because you said XYZ.” I might then say, “Actually, I didn’t say XYZ. I said ZYX.” Rather than shooting first and asking questions later, we can ask questions and perhaps avoid shooting altogether. It’s a simple approach that often stops an argument before it comes to a boil.
I’ve developed my go-to questions based on years of experience. I always advise my critical thinking students to develop their own go-to questions. In class, we often discuss which questions are most effective in a strategic pause. So, now you can help me teach my class. What are your most effective go-to questions?
When driving home from a party, I may ask Suellen a question like, “Why did Pat make that cutting remark about Kim?” Suellen will then launch into a thorough exegesis about relationships, personal histories, boyfriends, girlfriends, children, parents, gardening, the nature of education, and the tangled web we weave when first we practice to deceive. In the end, it will all make sense — even to me, a socially challenged kind of guy.
Suellen is great at answering questions like these. It’s often referred to as social or emotional intelligence. It’s about people and relationships and empathy. I’m generally better at academic intelligence and questions like, “How do you calculate the volume of a sphere?” (I don’t mean to say that I’m better at academic intelligence than Suellen is … but that I’m better at academic intelligence than I am at social intelligence. I hope that’s clear … I wouldn’t want my lack of social intelligence to lead me to insult my own wife.)
For me, two intelligences — academic and social — have been quite enough. But not for Howard Gardner. In Five Minds for the Future, Gardner suggests that there are five different intelligences and, if education is to succeed in the future, we need to teach them all.
I’m fairly well versed in the tenets of critical thinking. Now I’m trying to understand Gardner’s theory of multiple intelligences. Why? Because I’d like to mash up critical thinking and multiple intelligences. I’m wondering if critical thinking works the same way in each intelligence. Can you think critically in, say, academic intelligence, while thinking uncritically in social intelligence? That’s certainly the stereotype of the absent-minded professor.
To mash up critical thinking and the five minds, let’s first look at Gardner’s theory. The five minds are:
Disciplined mind — to master the way of thinking associated with a specific discipline — say, economics, psychology, or mathematics. I think (hope) it’s also broader than that. I’m certainly trained in the Western way of thinking. I categorize and classify things without even thinking about it. I’m now looking at Zen as a different way of thinking — one that destroys categories rather than creates them. That’s certainly a different discipline.
Synthesizing mind — the ability to put it all together. Gardner points out that memorization was important in times characterized by low literacy. In today’s era of Big Data, synthesis is much more important and memorization much less important.
Creating mind — proposing new ideas, fresh questions, unexpected answers. As I’ve noted before in this blog, a new idea is often a mashup of multiple existing ideas. To propose something that doesn’t exist, you need to be well versed in what does exist.
Respectful mind — “… notes and welcomes differences between human individuals and between human groups….” This is very similar to the concept of fair mindedness as used in critical thinking. This could be our first mashup.
Ethical mind — how can we serve purposes beyond self-interest and how can “citizens…work unselfishly to improve the lot of all.” Again, this is quite similar to concepts used in critical thinking, including ethical thinking and the ability to overcome egocentric thinking.
Today, I simply want to introduce Gardner’s five minds. In future posts, I’ll try to weave together critical thinking, Gardner’s concepts of multiple intelligences, and the Hofstedes’ research on the five dimensions of culture. I hope you’ll tag along.
By the way, the volume of a sphere is 4/3πr³.
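For the academically inclined, that formula takes only a couple of lines to check. Here’s a quick illustrative snippet (the function name is my own, not anything from Gardner’s book):

```python
import math

def sphere_volume(radius):
    # V = (4/3) * pi * r^3
    return (4.0 / 3.0) * math.pi * radius ** 3

# A sphere of radius 1 has volume 4*pi/3, roughly 4.19
print(round(sphere_volume(1.0), 2))
```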
I used to teach research methods. Now I teach critical thinking. Research is about creating knowledge. Critical thinking is about assessing knowledge. In research methods, the goal is to create well-designed studies that allow us to determine whether something is true or not. A well-designed study, even if it finds that something is not true, adds to our knowledge. A poorly designed study adds nothing. The emphasis is on design.
In critical thinking, the emphasis is on assessment. We seek to sort out what is true, not true, or not proven in our info-sphere. To succeed, we need to understand research design. We also need to understand the logic of critical thinking — a stepwise progression through which we can discover fallacies and biases and self-serving arguments. It takes time. In fact, the first rule I teach is “Slow down. Take your time. Ask questions. Don’t jump to conclusions.”
In both research and critical thinking, a key question is: how do we know if something is true? Further, how do we know if we’re being fair minded and objective in making such an assessment? We discuss levels of evidence that are independent of our subjective experience. Over the years, thinkers have used a number of different schemes to categorize evidence and evaluate its quality. Today, the research world seems to be coalescing around a classification of evidence that has been evolving since the early 1990s as part of the movement toward evidence-based medicine (EBM).
The classification scheme (typically) has four levels, with 4 being the weakest and 1 being the strongest. From weakest to strongest, here they are:
You might keep this guide in mind as you read your daily newspaper. Much of the “evidence” that’s presented in the media today doesn’t even reach the minimum standards of Level 4. It’s simply opinion. Stating opinions is fine, as long as we understand that they don’t qualify as credible evidence.
Some years ago, Suellen, Elliot, and I flew from Los Angeles to Sydney, Australia — a long, somewhat dreary, overnight flight with several hundred people on a jumbo jet. The flight was smooth and uneventful with one bizarre exception. About six hours into the flight — as we were all trying to sleep — the plane’s oxygen masks suddenly deployed and fell into our laps. Nothing seemed wrong. There was no noise or bumping or vibration or swerving. Just oxygen masks in our laps. We woke up, looked around at the other passengers, concluded that nothing was wrong … and ignored the masks.
It turned out that we were right. The pilot announced that someone had “pushed the wrong button” in the cockpit and released the masks. He advised us to ignore the masks, which we were already successfully doing. Later in the flight, I spoke with a flight attendant who told me she was shocked that none of the passengers had followed the “proper procedures” and donned the masks. I said that it seemed obvious that it wasn’t an emergency. She asked, “How did you know that?” I said, “By looking at the other passengers. Nobody was scared.”
I was reminded of this incident as I was re-reading (yet again) a chapter in Robert Cialdini’s book, Influence. Our little adventure on the airplane was a classic example of social proof. When we’re in an ambiguous situation and not sure what’s happening, one of the first things we do is to look at other people. If they’re panicking, then maybe we should too. If they’re calm, we can relax.
Cialdini points out that social proof affects us even when we’re aware of it. The example? Laugh tracks on TV. We all claim to dislike laugh tracks and also claim that they have no effect on us. But experimental research suggests otherwise. When people watch a TV show with a laugh track, they laugh longer and harder than other people watching the same show without the track. We realize that we’re being manipulated but we still succumb. According to Cialdini, the effect is more pronounced with bad jokes than with good ones. If so, Seth MacFarlane clearly needed a laugh track at this year’s Academy Awards.
Cialdini refers to one of the problems of social proof as pluralistic ignorance. I looked around at other people on the airplane and they seemed calm and unfazed. At the same time, they were looking at me and I seemed … well, calm and unfazed. As I looked at them, I thought, “No need to get excited”. As they looked at me, they thought the same. None of us knew what was really going on but we were influencing each other to ignore a potentially life-threatening emergency.
Cialdini argues that pluralistic ignorance makes “safety in numbers” meaningless. (See also my post on the risky shift). Cialdini cites research on staged emergencies — a person apparently has an epileptic seizure. The person is helped “…85 percent of the time when there was a single bystander present but only 31 percent of the time with five bystanders present.” A single bystander seems to assume “if in doubt, help out”. Multiple bystanders look at each other and conclude that there’s no emergency.
So, what to do? If you have to have a heart attack, do it when only one other person is around.