I’m a pretty good driver. How do I know? I can observe other drivers and compare their skills to mine. I see them making silly mistakes. I (usually) avoid those mistakes myself. QED: I must be a better-than-average driver. I’d like to stay that way and that motivates me to practice my driving skills.
Using observation and comparison, I can also conclude that I’m not a very good basketball player. I can observe what other players do and compare their skills to mine. They’re better than I am. That may give me the motivation to practice my hoops skills.
Using observation and comparison I can conclude that I’m better at driving the highway than at driving the lane. But how do I know if I’m a good thinker or not? I can’t observe other people thinking. Indeed, according to many neuroscientists, I can’t even observe myself thinking. System 1 thinking happens below the level of conscious awareness. So I can’t observe and compare.
Perhaps I could compare the results of thinking rather than thinking itself. People who are good thinkers should be more successful than those who aren’t, right? Well, maybe not. People might be successful because they’re lucky or charismatic, or because they were born to the right parents in the right place. I’m sure that we can all think of successful people who aren’t very good thinkers.
So, how do we know if we’re good thinkers or not? Well, most often we don’t. And, because we can’t observe and compare, we may not have the motivation to improve our thinking skills. Indeed, we may not realize that we can improve our thinking.
I see this among the students in my critical thinking class. Students will have varying opinions about their own thinking skills. But most of them have not thought about their thinking and how to improve it.
Some of my students seem to think they’re below average thinkers. In their papers, they write about the mistakes they’ve made and how they berate themselves for poor thinking. Because they can’t observe other people making the same mistakes, they assume they’re the only ones. Actually, the mistakes seem fairly commonplace to me, and I write a lot of comments along these lines: “Don’t beat yourself up over this. Everybody makes this mistake.”
Some of my students, of course, think they’re above average thinkers. Some (though not many) think they’re about average. But I think the single largest group – maybe not a majority but certainly a plurality – think they’re below average.
I realized recently that the course aims to build student confidence (and motivation) by making thinking visible. When we can see how people think, then we can observe and compare. So we look at thinking processes and catalog the common mistakes people make. As we discuss these patterns, I often hear students say, “Oh, I thought I was the only one to do that.”
In general, students get the hang of it pretty quickly. Once they can observe external patterns and processes, they’re very perceptive about their own thinking. Once they can make comparisons, they seem highly motivated to practice the arts of critical thinking. It’s like driving or basketball – all it takes is practice.
We have two old sayings that directly contradict each other. On the one hand, we say, “Look before you leap.” On the other hand, “He who hesitates is lost.” So which is it?
I wasn’t thinking about this conundrum when I assigned the debacles paper in my critical thinking class. Even so, I got a pretty good answer.
The critical thinking class has two fundamental streams. First, we study how we think and make decisions as individuals, including all the ways we trick ourselves. Second, we study how we think and make decisions as organizations, including all the ways we trick each other.
For organizational decision making, students write a paper analyzing a debacle. For our purposes, a debacle is defined by Paul Nutt in his book Why Decisions Fail: “… a decision riddled with poor practices producing big losses that becomes public.” I ask students to choose a debacle, use Nutt’s framework to analyze the mistakes made, and propose how the debacle might have been prevented.
Students can choose “public” debacles as reported in the press or debacles that they have personally observed in their work. In general, students split about half and half. Popular public debacles include Boston’s Big Dig, the University of California’s logo fiasco, Lululemon’s see-through pants, the Netflix rebranding effort, JC Penney’s makeover, and the Gap logo meltdown. (What is it with logos?)
This quarter, students analyzed 18 different debacles. As I read the papers, I kept track of the different problems the students identified and how frequently they occurred. I was looking specifically for the “blunders, traps, and failure-prone practices” that Nutt identifies in his book.
Five of Nutt’s issues were reported in 50% or more of the papers. Here’s how they cropped up along with Nutt’s definition of each.
Premature commitment – identified in 13 papers or 72.2% of the sample. Nutt writes that “Decision makers often jump on the first idea that comes along and then spend years trying to make it work. …When answers are not readily available grabbing onto the first thing that seems to offer relief is a natural impulse.” (I’ve also written about this here).
Ambiguous direction – 11 papers or 61.1%. Nutt writes, “Direction indicates a decision’s expected result. In the debacles [that Nutt studied], directions were either misleading, assumed but never agreed to, or unknown.”
Limited search, no innovation – ten papers or 55.6%. According to Nutt, “The first seemingly workable idea … [gets] adopted. Having an ‘answer’ eliminates ambiguity about what to do but stops others from looking for ideas that could be better.”
Failure to address key stakeholders’ claims – ten papers or 55.6%. Stakeholders make claims based on opportunities or problems. The claims may be legitimate or they may be politically motivated. They may be accurate or inaccurate. Decision makers need to understand the claims as thoroughly as possible. Failure to do so can alienate the stakeholders and produce greater contention in the process.
Issuing edicts – nine papers or 50%. Nutt: “Using an edict to implement … is high risk and prone to failure. People who have no interest in the decision resist it because they do not like being forced and they worry about the precedent that yielding to force sets.”
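The percentages above are simple arithmetic on the tallies. A quick sketch (using the counts reported in the list, out of 18 papers total) reproduces them:

```python
# Tallies reported above: how many of the 18 papers flagged each issue.
counts = {
    "Premature commitment": 13,
    "Ambiguous direction": 11,
    "Limited search, no innovation": 10,
    "Failure to address key stakeholders' claims": 10,
    "Issuing edicts": 9,
}
TOTAL_PAPERS = 18

for issue, n in counts.items():
    # Each issue's frequency as a share of all papers, to one decimal place
    print(f"{issue}: {n}/{TOTAL_PAPERS} papers ({n / TOTAL_PAPERS:.1%})")
```

Note that 10/18 rounds to 55.6%, and each of the Big Five clears the 50% threshold.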
As you make your management decisions, keep these Big Five in mind. They occur regularly, they’re interrelated, and they seem to cut deeply. The biggest issue is premature commitment. If you jump on an idea before its time, you’re more likely to fail than succeed. So, perhaps we’ve shown that “look before you leap” is better wisdom than “he who hesitates is lost.”
I’m reading a delightful book by Maria Konnikova, titled Mastermind: How To Think Like Sherlock Holmes. It covers much of the same territory as other books I’ve read on thinking, deducing, and questioning but it reads more like … well, like a detective novel. In other words, it’s fun.
In the past, I’ve covered Daniel Kahneman’s book, Thinking, Fast and Slow. Kahneman argues that we have two thinking systems. System 1 is fast and automatic and always on. We make millions of decisions each day but don’t think about the vast majority of them; System 1 handles them. System 1 is right most of the time but not always. It uses rules of thumb and makes common errors (which I’ve cataloged here, here, here, and here).
System 1 can also invoke System 2 – the system we think of when we think of thinking. System 2 is where we logically process data, make deductions, and reach conclusions. It’s very energy intensive. Thinking is tiring, which is why we often try to avoid it. Better to let System 1 handle it without much conscious thought.
Kahneman illustrates the differences between System 1 and System 2. Konnikova covers some of the same territory but with slightly different terminology: she renames System 1 as System Watson and System 2 as System Holmes. She then analyzes System Holmes to reveal what makes it so effective.
Though I’m only a quarter of the way through the book, I’ve already gleaned a few interesting tidbits, such as these:
Motivation counts – motivated thinkers are more likely to invoke System Holmes. Less motivated thinkers are willing to let System Watson carry the day. Konnikova points out that thinking is hard work. (Kahneman makes the same point repeatedly). Motivation helps you tackle the work.
Unitasking trumps multitasking – Thinking is hard work. Thinking about multiple things simultaneously is extremely hard work. Indeed, it’s virtually impossible. Konnikova notes that Holmes is very good at one essential skill: sitting still. (Pascal once remarked that “All of man’s problems stem from his inability to sit still in a room.” Holmes seems to have solved that problem).
Your brain attic needs a spring cleaning – we all have lots of stuff in our brain attics and – like the attics in our houses – a lot of it is not worth keeping. Holmes keeps only what he needs to do the job that motivates him.
Observing is different from seeing – Watson sees. Holmes observes. Exactly how he observes is a complex process that I’ll report on in future posts.
Don’t worry. I’m on the case.
Remember long division? You have a big number and want to divide it by a small number. You draw a little house, put the big number in it, and the small number outside it. Then you start guessing. Roughly how many times will the small number go into the large number?
You’ll be wrong the first time but it doesn’t matter. You start refining. If your first guess is close, you can refine it in a few steps. If your first guess is way off, you’ll need to take more refining steps. Either way, the method works. In fact, it’s pretty much foolproof. Even fourth graders can do it.
I got this example from an elegant little book I’m reading called Intuition Pumps and Other Tools for Thinking by the philosopher Daniel Dennett. The general idea is that thinkers, like blacksmiths, have to make their own tools. We’ve used some tools for millennia; others are of more recent vintage.
As long division illustrates, one tool is approximation. (Technically, it’s known as a heuristic). In the real world, we don’t always have to be precise. We start with a guess and then refine it. The important thing is to make the guess. That’s the ante for getting into the game.
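The schoolbook guess-and-refine routine is easy to sketch in code. Here’s a minimal illustration (not a production division routine): guess how many times the divisor fits, subtract, bring down the next place, and repeat.

```python
def long_divide(dividend, divisor, places=4):
    """Schoolbook long division: repeatedly guess, subtract, and refine."""
    # Integer part: how many whole times does the divisor fit?
    quotient = 0
    remainder = dividend
    while remainder >= divisor:
        remainder -= divisor
        quotient += 1

    # Fractional part: bring down a zero (multiply remainder by 10)
    # and guess the next digit, one decimal place at a time.
    frac = 0.0
    scale = 0.1
    for _ in range(places):
        remainder *= 10
        digit = 0
        while remainder >= divisor:
            remainder -= divisor
            digit += 1
        frac += digit * scale
        scale *= 0.1

    return quotient + frac

# 22 / 7 to four decimal places: each pass refines the previous guess
print(long_divide(22, 7))
```

The point isn’t the code; it’s that each pass through the loop is a rough guess that gets corrected on the next pass, which is exactly the approximation habit Dennett describes.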
It’s surprising how often this works. In fact, now that I’m thinking about it, I realize that I make guesses all the time. Guessing is certainly a good management tool. In the hurly-burly of commerce, it’s not always clear precisely what’s happening. It usually takes accountants four to six weeks to figure out precisely what happened in a given quarter. In the meantime, it’s useful for day-to-day managers to make educated guesses. Learning to make such guesses is a critical skill.
I realized that I also apply this to my writing. I sometimes get writer’s block. I know the argument I want to make but I don’t know how to frame it. I can’t get started. When that happens, I make an effort to just write something, even if it’s sloppy and poorly phrased. Once I have something written down, I can then shift gears. I’m no longer writing; I’m editing. Somehow, that seems much easier.
As you think about thinking, think about guessing. In many cases, you’re more likely to get to a clear thought through approximation than through a brilliant flash of insight.
Does the mind influence the body or vice-versa? It seems that it happens both ways and the vagus nerve plays a key role in keeping you both happy and healthy (and creative).
Also known as the tenth cranial nerve, the vagus nerve links the brain with the lungs, digestive system, and heart. Among many other things, the vagus nerve sends the signals that help you calm down after you’ve been startled or frightened. It helps you return to a “normal” state. A healthy vagus nerve promotes resilience — the ability to recover from stress. (The vagus nerve does not run through the spinal cord, which means that people with spinal cord injuries can still sense much of what’s happening in their internal organs).
The vagus also helps control your heartbeat. To use oxygen efficiently, your heart should beat a bit faster when you breathe in and a bit slower when you breathe out. The ratio of your breathing-in heartbeat to your breathing-out heartbeat is known as the vagal tone.
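Taking the definition above at face value, vagal tone is just a ratio. Here’s a toy sketch with invented heart-rate samples (real measurement uses ECG-derived heart-rate variability, not hand-entered numbers):

```python
def vagal_tone(inhale_bpm, exhale_bpm):
    """Ratio of average heart rate while inhaling to average while exhaling.

    A ratio above 1.0 means the heart speeds up on the in-breath and slows
    on the out-breath, as the article describes.
    """
    avg_in = sum(inhale_bpm) / len(inhale_bpm)
    avg_out = sum(exhale_bpm) / len(exhale_bpm)
    return avg_in / avg_out

# Invented sample readings (beats per minute) during in- and out-breaths
tone = vagal_tone([72, 74, 73], [66, 65, 67])
print(f"vagal tone ≈ {tone:.2f}")
```

A larger gap between the in-breath and out-breath rates pushes the ratio higher, which is the “higher vagal tone” the researchers discuss below.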
Physiologists have long known that a higher vagal tone is generally associated with better physical and mental health. Most researchers assumed, however, that we can’t improve our vagal tone. Some lucky people have a higher vagal tone; some unlucky people have a lower one. It’s determined by factors beyond our control.
Then Barbara Fredrickson and Bethany Kok decided to see if that was really true. Their research – published in Psychological Science – suggests that we can improve our vagal tone. By doing so, we can create a virtuous circle: improving our mental outlook can improve vagal tone which, in turn, can make it easier to improve our mental outlook. (For two non-technical articles on this research, click here and here).
In their experiment, Fredrickson and Kok randomly divided volunteers into two groups. One group was taught a form of meditation (loving kindness meditation) that engenders “feelings of goodwill”; the other was not. Both groups were asked to keep track of their positive and negative emotions.
The results were fairly straightforward: “All the volunteers … showed an increase in positive emotions and feelings of social connectedness – and the more pronounced this effect, the more their vagal tone had increased ….” Additionally, those who meditated improved their vagal tone much more than those who didn’t.
The virtuous circle seems to be associated with social connectedness, which Fredrickson refers to as a “potent wellness behavior”. The loving kindness meditation promotes a sense of social connectedness. That, in turn, improves vagal tone. That, in turn, promotes a sense of social connectedness. Bottom line: it pays to think positive thoughts about yourself and others.