Self-Herding At Breakfast

Just like Grandma served.

I’ve always believed that breakfast is the most important meal of the day. Why? Because my mother told me so. Why did she believe it? Because her mother told her so. Who told her? Probably Edward Bernays, “the father of public relations.”

Is it true that breakfast is the most important meal of the day? Well, maybe not. If not, I’ve been self-herding for most of my life. I reached a decision (without much thinking) that breakfast was important. My only evidence was my mother’s advice.

Making the decision may have been a mistake. But, c’mon … she was my Mom. The more egregious mistake is that I never doubled back on the decision to see if anything had changed. I made the decision and never thought about it again. I self-herded into a set of fixed behaviors.

I also suffered from confirmation bias. Researchers published articles from time to time confirming that breakfast is important. These studies confirmed what I already believed. Since they didn’t challenge my mental framework, I didn’t bother to check them closely. I just assumed that they were good science.

As it turns out, those studies were based on observations. Researchers observed people’s behavior and noted that people who ate breakfast were also generally healthier and less likely to be obese compared to people who didn’t. Clearly, breakfast is important.

But let’s think about this critically. There are at least three possible relationships among the variables:

  • Eating breakfast causes people to be healthier – breakfast causes health.
  • Healthier people eat breakfast more than unhealthy people – health causes breakfast.
  • Healthier people eat breakfast and also do other things that contribute to good health – hidden variable(s) cause both healthiness and breakfast-eating.

With observational studies, researchers can’t easily sort out what causes what.
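The confounding problem can be made concrete with a toy simulation. This is purely illustrative – the variable names, probabilities, and effect sizes are invented, not taken from any study discussed here. A hidden trait (health-consciousness) drives both breakfast-eating and exercise; health depends only on exercise, yet breakfast eaters still look healthier in the observed data:

```python
import random
import statistics

random.seed(42)

# Hidden variable: general health-consciousness. It drives BOTH
# breakfast-eating and exercise. All numbers are invented.
people = []
for _ in range(10_000):
    conscientious = random.random() < 0.5
    eats_breakfast = random.random() < (0.8 if conscientious else 0.3)
    exercise_hours = random.gauss(5, 1) if conscientious else random.gauss(2, 1)
    # Health depends on exercise only -- breakfast has zero direct effect.
    health = 50 + 5 * exercise_hours + random.gauss(0, 3)
    people.append((eats_breakfast, health))

eaters = [h for b, h in people if b]
skippers = [h for b, h in people if not b]
gap = statistics.mean(eaters) - statistics.mean(skippers)
print(f"Observed health gap (eaters - skippers): {gap:.1f}")
# The gap is large and positive even though breakfast does nothing,
# because health-consciousness confounds the comparison.
```

Randomly assigning people to eat or skip breakfast breaks the link between the hidden trait and breakfast, which is why an experiment can find a null result where observation finds a correlation.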

So James Betts and his colleagues did an experimental study – as opposed to an observational study – on the relationship between breakfast and good health. (The original article is here. The popular press has also covered the story, including the New York Times, Time magazine, and Outside magazine.)

Betts’ research team randomly assigned people to one of two groups. One group had to eat breakfast every day; the other had to fast through the morning. Random assignment isolates the independent variable and lets us establish causality.

The trial ran for six weeks. The result: nothing. The researchers found no major health or weight differences between the two groups.

But previous research had found a correlation between breakfast and good health. So what caused what? It was probably a cluster of hidden variables. Betts noted, for instance, “…the breakfast group was much more physically active than the fasting group, with significant differences particularly noted during light-intensity activities during the morning.”

So it may not be breakfast that creates healthier outcomes. It may be that breakfast eaters are also more physically active – and that activity, not breakfast, promotes wellness.

If that’s true, I’ve been self-herding for many years. I didn’t re-check my sources. If I had, I might have discovered that Edward Bernays launched a PR campaign in the 1920s to encourage people to eat a hearty breakfast, with bacon and eggs. Bernays was working for a client – Beech-Nut Packing Company – that sold pork products, including bacon. I suspect the campaign influenced my grandmother who, in turn, influenced my mother who, in turn, influenced me. The moral of the story: check your sources, re-check them periodically, and be suspicious of observational studies. And don’t believe everything that your mother tells you.

(By the way, I recently published two short articles about the effects of chocolate and sex on cognition. Both were based on observational studies. Caveat emptor.)

Chocolate Brain

Close readers of this website will remember that my sister, Shelley, is addicted to chocolate. Perhaps it’s because of the bacteria in her microbiome. Perhaps it’s due to some weakness in her personality. Perhaps it’s not her fault; perhaps it is her fault. Mostly, I’ve written about the origins of her addiction. How did she come to be this way? (It’s a question that weighs heavily on a younger brother).

There’s another dimension that I’d like to focus on today: the outcome of her addiction. What are the results of being addicted to chocolate? As it happens, my sister is very smart. She’s also very focused and task oriented. She earned her Ph.D. in entomology when she was 25 and pregnant with her second child. Could chocolate be the cause?

I thought about this the other day when I browsed through the May issue of Appetite, a scientific journal reporting on the relationship between food and health. The title of the article pretty much tells the story: “Chocolate intake is associated with better cognitive function: The Maine-Syracuse Longitudinal Study”.

The Maine-Syracuse Longitudinal Study (MSLS) started in 1974 with more than 1,000 participants. Initially, the participants all resided near Syracuse, New York. The study tracks participants over time, taking detailed measurements of cardiovascular and cognitive health in “waves”, usually at five-year intervals.

The initial waves of the study had little to do with diet and nothing to do with chocolate. In the sixth wave, researchers led by Georgina Crichton decided to look more closely at dietary variables. The researchers focused on chocolate because it’s rich in flavonoids and “The ability of flavonoid-rich foods to improve cognitive function has been demonstrated in both epidemiological studies … and clinical trials.” But the research record is mixed. As the authors point out, studies of “chronic” use of chocolate “…have failed to find any positive effects on cognition.”

So, does chocolate have long-term positive effects on cognition? The researchers gathered data on MSLS participants, aged 23 to 98. The selection process removed participants who suffered from dementia or had had severe strokes. The result was 968 participants who could be considered cognitively normal.

Using a questionnaire, the researchers asked participants about their dietary habits, covering foods ranging from fish to vegetables to dairy to chocolate. The questionnaire didn’t measure the quantity of food that participants consumed. Rather, it measured how often a participant ate each food – the number of times per week. The researchers used a variety of tests to measure cognitive function.

And the results? Here’s the summary:

  • Women ate chocolate more frequently than men.
  • Those who ate chocolate consumed more calories overall “…but significantly less alcohol.”
  • “All cognitive scores were significantly higher in those who consumed chocolate at least once per week, than in those who never/rarely consumed chocolate.”
  • “Chocolate intake was significantly and positively associated with…” six different measures of cognitive function.

Seems pretty clear, eh? But this isn’t an experiment, so it’s difficult to say that chocolate caused the improved function. It could be that participants with better cognition simply chose to eat more chocolate. (Seems reasonable, doesn’t it?).

So the researchers delved a little deeper. They studied the cognitive assessments of participants who had taken part in earlier waves of the study. If cognition caused chocolate consumption (rather than the other way round), then people who eat more chocolate today should have had better cognitive scores in earlier waves of the study. That was not the case. This doesn’t necessarily prove that chocolate consumption causes better cognition. But we can probably reject the hypothesis that smarter people choose to eat more chocolate.
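The logic of that reverse-causation check can be sketched with another hypothetical simulation (invented numbers again – this is not the MSLS data). If chocolate choice were driven by earlier cognition, today’s eaters should already have scored higher at baseline; if the choice is independent of earlier cognition, the baseline difference should hover near zero:

```python
import random
import statistics

random.seed(0)

# Hypothetical participants: a baseline cognitive score from an
# earlier wave, plus whether they eat chocolate today. Here the
# choice is simulated as INDEPENDENT of earlier cognition.
participants = []
for _ in range(1000):
    baseline_score = random.gauss(100, 15)
    eats_chocolate = random.random() < 0.6
    participants.append((eats_chocolate, baseline_score))

eaters = [s for c, s in participants if c]
others = [s for c, s in participants if not c]
diff = statistics.mean(eaters) - statistics.mean(others)
print(f"Baseline-score difference (eaters - others): {diff:.1f}")
# A difference near zero matches what the MSLS researchers found:
# today's eaters were not already sharper in earlier waves, so the
# "smart people choose chocolate" story looks unlikely.
```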

So what does this say about my sister? She’s still a pretty smart cookie. But she might be even smarter if she ate more chocolate. That’s a scary thought.

Human 2.0

When I worked for business-to-business software vendors, I often met companies that were simply out of date. They hadn’t caught up with the latest trends and buzzwords. They used inefficient processes and outdated business practices.

Why were they so far behind? Because that’s the way their software worked. They had loaded an early version of a software system (perhaps from my company) and never upgraded it. The system became comfortable. It was the way they had always done things. If it ain’t broke, don’t fix it.

I’ve often wondered if we humans don’t do the same thing. Perhaps we load the software called Human 1.0 during childhood and then just forget about it. It works. It gets us through the day. It’s comfortable. Don’t mess with success.

Fixing the problem for companies was easy: just buy my new software. But how do we solve the problem (if it is a problem) for humans? How do we load Human 2.0? What patches do we need? What new processes do we need to learn? What new practices do we need to adopt?

As a teacher of critical thinking, I’d like to think that critical thinking is one element of such an upgrade. When we learn most skills – ice skating, piano playing, cooking, driving, etc. – we seek out a teacher to help us master the craft. We use a teacher – and perhaps a coach – to help us upgrade our skills to a new level.

But not so with thinking. We think we know how to think; we’ve been doing it all our lives. We don’t realize that thinking is a skill like any other. If we want to get better at basketball, we practice. If we want to get better at thinking … well, we don’t really want to get better at thinking, do we? We assume that we’re good enough. If the only thinking we know is the thinking that we do, then we don’t see the need to change our thinking.

So how do we help people realize that they can upgrade their thinking? Focusing on fallacies often works. I often start my classes by asking students to think through the way we make mistakes. For instance, we often use shortcuts – more formally known as heuristics – to reach decisions quickly. Most of the time they work – we make good decisions and save time in the process. But when they don’t work, we make very predictable errors. We invade the wrong country, marry the wrong person, or take the wrong job.

When we make big mistakes, we can draw one of two conclusions. On the one hand, we might conclude that we made a mistake and need to rethink our thinking. On the other hand, we might conclude that our thinking was just fine but that our political opponents undermined our noble efforts. If not for them, everything would be peachy. The second conclusion is lazy and popular. We’re not responsible for the mess – someone else is.

But let’s focus for a moment on the first conclusion – we realize that we need to upgrade our thinking. Then what? Well… I suppose that everyone could sign up for my critical thinking class. But what if that’s not enough? As people realize that there are better ways to think, they’ll ask for coaches, and teachers, and gurus.

If you’re an entrepreneur, there’s an opportunity here. I expect that many companies and non-profit organizations will emerge to promote the need and service the demand. The first one I’ve spotted is the Center for Applied Rationality (CFAR). Based in Berkeley (of course), CFAR’s motto is “Turning Cognitive Science Into Cognitive Practice”. I’ve browsed through their website and read a very interesting article in the New York Times. CFAR seems to touch on many of the same concepts that I use in my critical thinking class – but they do it on a much grander scale.

If I’m right, CFAR is at the leading edge of an interesting new wave. I expect to see many more organizations pop up to promote rationality, cognitive enhancements, behavioral economics, or … to us traditional practitioners, critical thinking. Get ready. Critical thinking is about to be industrialized. Time to put your critical thinking cap on.

Questions or Answers?

Question or answer?

Which is more important: questions or answers?

Being a good systems thinker, I used to think the answer was obvious: answers are more important than questions. You’re given a problem, you pull it apart into its subsystems, you analyze them, and you develop solutions.

But what if you’re analyzing the wrong problem?

I thought about this yesterday when I read a profile of Alejandro Aravena, the Chilean architect who just won the Pritzker Prize. Aravena and his colleagues – as you might imagine – develop some very creative ideas. They do so by focusing on questions rather than answers. (Aravena’s building at the Universidad Católica de Chile is pictured).

In 2010, for instance, Aravena’s firm, Elemental, was selected to help rebuild the city of Constitución after it was hit by an earthquake and tsunami. I would have thought that they would focus on the built environment – buildings, infrastructure, and so on. They’re architects, after all. Isn’t that what architects do?

But Aravena explains it differently:

“We asked the community to identify not the answer, but what was the question,” Mr. Aravena said. This, it turned out, was how to manage rainfall, so the firm designed a forest that could help prevent flooding.

Architects, then, designed a forest instead of a building. If they were thinking about answers rather than questions, they might have missed this altogether.

On a smaller scale, I had a similar experience early in my career when I worked for Solbourne Computer. We built very fast computers – in 1988, Electronics magazine named our high-end machine the computer of the year. Naturally, we positioned our messages around speed, advanced technology, and throughput.

But our early customers were actually buying something else. When we interviewed our first dozen customers, we found that they were all men, in their early thirties, and that they had recently been promoted to replace an executive who had been in place for many years. They bought our computers to mark the changeover from the old regime to the new regime. They were meeting a sociological need as much as a technical need.

When you go to a gas station to fill your car’s tank, you may imagine that you’re buying gasoline. But, as the marketing guru Ted Levitt pointed out long ago, you’re really buying the right to continue driving your car. It’s a different question and a broader perspective and may well lead you to more creative ways to continue driving.

More recently, another marketing guru, Daniel Pink, wrote that products and services “… are far more valuable when your prospect is mistaken, confused or completely clueless about their true problem.” So often our market research focuses on simple questions about obvious problems. The classic question is, “What keeps you up at night?” We identify an obvious problem and then propose a solution. Meanwhile, our competitors are identifying the same problem and proposing their solutions. We’re locked into the same problem space.

But if we step back, look around, dig a little deeper, observe more creatively, and ask non-obvious questions, we may find that the customer actually needs something completely different – different from what they imagined, or we imagined, or our competitors imagined. They may, in fact, need a forest, not a building.

Sending A Memo To Your Future Self

Memo to self…

We know a lot about the future. We can’t predict it precisely but we can often see the general contours of what’s coming. With a little imagination, we can prepare for it. We just need a structure to hang our imagination on.

As an example, let’s take organizations that are undergoing rapid and/or stressful change. We know a lot about such organizations. We know, for instance, that:

  • Communication suffers – people are distracted and don’t listen well. Bain estimates that only 20% of the information communicated actually gets through. Attention spans get shorter than ever. Tip: don’t give long speeches.
  • Memory becomes less accurate – stress affects memory in odd ways. Even in normal times, different people remember the same event in different ways. It gets worse in stressed out organizations.
  • We hear mixed and contradictory messages – change doesn’t happen smoothly across the organization. Some departments move quickly; others move slowly. When we talk to different people, we’ll hear different messages. It’s hard to tell what’s really going on.
  • We jump to conclusions more readily – as the Heath brothers point out, we jump to conclusions all the time. Stress makes us even jumpier. We’re anxious to get to a solution and don’t take the time to consider the evidence.
  • Trust withers – it’s hard to trust people when we remember things differently, hear different messages, and jump to different conclusions.

I could go on but you get the picture. We also know that organizational change happens in three phases. At least, that’s what the theorists tell us. Here are four different models of the change process (here, here, here, and here). They use different descriptors but all four describe three distinct phases of change. Note that the middle phase is a trough – that’s where the going gets tough.

The trick to preparing for the future is to start imagining it before we get to the trough. Change managers refer to the trough with words like frustration, depression, resistance, and chaos. It’s not a good time for imagining.

So we start the imagination process in Phase 1. We’re still cool, calm, and collected. We can think more or less clearly – especially if we’ve studied critical thinking. We can think about the future dispassionately and plan how we want to behave.

We sit down in groups and discuss the issues we can anticipate in Phases 2 and 3. We know, for instance, that we’re likely to hear contradictory messages. How do we want to behave when we do? What can we do now to outline “best behaviors” for the stress created by contradictory messages? What can we do to ensure that we actually implement the best behaviors? What else might happen in the trough? How do we want to behave when it happens? We talk, discuss, debate, imagine, and agree.

We then write down what we’ve agreed to. In effect, we’re writing a memo from our current selves to our future selves. From our cool, calm, dispassionate selves to our stressed and anxious future selves. We make clearheaded decisions in Phase 1. When we get to Phase 2, we can refer back to our own wisdom to help govern our actions.

I call this process Structured Imagination™. What we know about the future gives us the structure. We use the structure to focus our imaginations. We imagine what will happen and how we’ll behave when it does. This prepares us for the hurly-burly of change and also vaccinates us against many of the ill effects of the trough.

Structured Imagination is not a perfect process – the future may still throw us a curve every now and then. However, I’ve used the process with multiple clients and they say that they face the future with greater confidence and clarity. That’s pretty good. If you’d like me to do a Structured Imagination workshop with your organization, just drop me a line.
