A study published last week in the British Medical Journal states simply that “Childhood intelligence was inversely associated with all major causes of death.”
The study focused on some 65,000 men and women who took the Scottish Mental Survey in 1947 at age 11. Those students are now 79 years old, and many of them have passed away. By and large, those who scored lower on the test in 1947 were more likely to have died, from all causes, than those who registered higher scores.
This is certainly not the first study to link intelligence with longevity. (Click here, here, and here, for instance). But it raises again a fundamental question: why would smarter people live longer? There seem to be at least two competing hypotheses:
Hypothesis A: More intelligent people make better decisions about their health care, diet, exercise, etc. and — as a result — live longer.
Hypothesis B: Whatever it is that makes people more intelligent also makes them healthier. Researchers, led by Ian Deary, call this hypothesis “system integrity.” Essentially, the theory suggests that a healthy system generates numerous positive outcomes, including greater intelligence and longer life. The theory derives from a field known as cognitive epidemiology, which studies the relationships between intelligence and health.
Hypothesis A focuses on judgment and decision making as causal factors; there’s an intermediate step between intelligence and longevity. Hypothesis B is more direct: the same factor causes both intelligence and longevity, so there is no need for an intermediate cause.
The debate is oddly similar to the one over the association between attractiveness and success. Sociologists have long noted that more attractive people also tend to be more successful. Researchers generally assumed that the halo effect caused the association: people judged attractive people to be more capable in other domains and thus provided them more opportunities to succeed. This is similar to our Hypothesis A in that the result depends on (other people’s) judgment, and there is an intermediate step between cause and effect.
Yet a recent study of Tour de France riders tested the notion that attractiveness and success might have a common cause. Researchers rated the attractiveness of the riders and compared the rankings to race results. They found that more attractive riders finished in higher positions in the race. Clearly, success in the Tour de France does not depend on the halo effect so, perhaps, that which causes the riders to be attractive may also cause them to be better racers.
And what about the relationship between intelligence and longevity? Could the two variables have a single, common cause? Perhaps the best study so far was published in the International Journal of Epidemiology last year. The researchers looked at various twin registries and compared IQ test scores with mortality. The study found a small (but statistically significant) relationship between IQ and longevity. In other words, the smarter twin lived longer. Though the effects are small, the researchers conclude, “The association between intelligence and lifespan is mostly genetic.”
Are they right? I’m not (yet) convinced. Though significant, the correlation is very small: r = 0.12. As noted elsewhere, the proportion of variance explained is the square of r. So in this study, IQ explains only 1.44% (0.12 × 0.12 × 100) of the variance in longevity. That seems like weak evidence for concluding that the relationship is “mostly genetic.”
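If you want to check the back-of-the-envelope arithmetic yourself, here’s a minimal sketch (the value of r comes from the twin study above; everything else is just the standard variance-explained formula):

```python
# Coefficient of determination: the proportion of variance in one
# variable explained by its linear relationship with another is r squared.
r = 0.12  # correlation between IQ and longevity reported in the twin study

variance_explained_pct = r ** 2 * 100
print(f"{variance_explained_pct:.2f}%")  # prints "1.44%"
```

The same calculation shows why small correlations are so underwhelming: even doubling r to 0.24 would explain less than 6% of the variance.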
Still, we have some interesting research paths to follow up on. If the theory of system integrity is correct, it could predict a whole host of relationships, not just IQ and longevity. Attractiveness could also be a useful variable to study. Perhaps there’s a social aspect to it as well. Perhaps people who are healthy and intelligent also have a larger social circle (see also Dunbar’s number). Perhaps they’re more altruistic. Perhaps they are more symmetric. Ultimately, we may find that a whole range of variables depend partially – or perhaps mainly – on genetics.
Remember Tetris? Originally released in the Soviet Union in 1984, it became the top-selling video game of all time, with more than 495 million copies in circulation. It’s a simple game: differently shaped tiles fall from the top of the screen, and you arrange them in patterns at the bottom of the screen.
It seems like a simple-minded time killer. It’s not rocket science. But it turns out to have some interesting effects on our brains. Here are two recent studies.
Tetris and Intrusive Memories
Let’s say you’re in a car accident. The trauma, whether physical or psychological, may result in intrusive memories, which are hallmarks of post-traumatic stress disorder (PTSD).
When an intrusive memory occurs, the survivor essentially relives the traumatic event. It seems plausible that reducing intrusive memories would help survivors manage stress and maintain mental health. So, how might one reduce intrusive memories or prevent their formation in the first place? How about Tetris?
This was the hypothesis of a study recently published in Molecular Psychiatry. Researchers recruited 71 subjects in an emergency room at a hospital in Oxford, England. The subjects had recently (less than six hours previously) experienced an automobile accident and were randomly assigned to one of two groups; only one of the groups played Tetris.
Researchers contacted the subjects one week and one month after the accident. The result? Subjects who played Tetris “recorded significantly fewer intrusive memories” and “reported less distress from intrusion symptoms.”
Tetris and Your Cortex
What about people who aren’t involved in a traumatic event? Does Tetris have an impact on them? This was the question asked several years ago in a study conducted by researchers at the University of New Mexico.
The researchers recruited 26 girls, aged 12 to 15, and randomly assigned them to the experimental group or the control group. The researchers taught the girls in the experimental group to play Tetris and coached them to play regularly. The girls in the control group were coached not to play Tetris. The researchers followed the two groups for three months. During that time, the girls in the Tetris group played the game approximately 90 minutes per week.
At the end of three months, the Tetris-playing girls had a “significantly thicker cortex” than the non-Tetris-playing girls. The cortex consists of gray matter and is generally associated with higher-level brain functions such as memory, attention, and planning.
Does this mean that playing Tetris will make you smarter or your brain more efficient? Probably not. Playing Tetris probably only makes you better at playing Tetris. But it’s more evidence that the brain is plastic; you can change it by how you behave. That’s not surprising in a group of youngsters whose brains are not yet mature. It might be very telling to replicate the experiment with a group of oldsters to see just how plastic their brains are. My hypothesis: there’s still a lot of plasticity left.
So Tetris can teach us about brain plasticity and help suppress intrusive memories. Not bad for a free video game. I wonder what else it can do.
I used to think it was difficult to manage conflict. Now I wonder if it isn’t more difficult to manage agreement.
A conflicted organization is fairly easy to analyze. The signs are abundant. You can quickly identify the conflicting groups as well as the members of each. You can identify grievances simply by talking with people. You can figure out who is “us” and who is “them”. Solving the problem may prove challenging but, at the very least, you know two things: 1) there is a problem; 2) its general contours are easy to see.
When an organization is in agreement, on the other hand, you may not even know that a problem exists. Everything floats along smoothly. People may not quiver with enthusiasm but no one is throwing furniture or shouting obscenities. Employees work and things get done.
The problem with an organization in agreement is that many participants actually disagree. But the disagreement doesn’t bubble up and out. There are at least two scenarios in which this happens: the Abilene paradox, in which everyone privately disagrees but assumes that everyone else agrees, and a fear-based culture, in which employees are afraid to speak up.
Similar paradoxes arise in organizations all the time. Each employee wants to be seen as a team player. They may have reservations about a decision but, because everyone else agrees or seems to agree, they keep quiet. Perhaps nobody actually supports a given project, but each person believes that everyone else does. Perhaps nobody wants to work on Project X. Nevertheless, Project X persists. Unlike in a conflicted organization, nobody realizes that a problem exists.
The second scenario is perhaps more dangerous but less common. A fear-based culture – if left untreated – will eventually corrupt the entire organization. Employees grow afraid of telling the truth. The remedy is easy to discern but hard to execute: the organization needs to replace executive management and create a new culture.
The Abilene paradox is perhaps less dangerous but far more common. Any organization that strives to “play as a team” or “hire team players” is at risk. Employees learn to go along with the team, even if they believe the team is wrong.
What can be done to overcome the Abilene paradox in an organization? Rosabeth Moss Kanter points out that there are two parts to the problem. First, employees make inaccurate assumptions about what others believe. Second, even though they disagree, they don’t feel comfortable speaking up. A good manager can work on both sides of the problem, and Kanter suggests practical steps for each.
The activities needed to ward off the Abilene paradox are not draconian. Indeed, they’re fairly easy to implement. But you can only implement them if you realize that a problem exists. That’s the hard part.
Over the past several years, I’ve written several articles about cognitive biases. I hope I have alerted my readers to the causes and consequences of these biases. My general approach is simple: forewarned is forearmed.
I didn’t realize that I was participating in a more general trend known as debiasing. As Wikipedia notes, “Debiasing is the reduction of bias, particularly with respect to judgment and decision making.” The basic idea is that we can change things to help people and organizations make better decisions.
What can we change? According to A User’s Guide to Debiasing, we can do two things: modify the decision maker (Type 1) or modify the decision environment (Type 2).
I’ve been using a Type 1 approach. I’ve aimed at modifying the decision maker by providing information about the source of biases and describing how they skew our perception of reality. We often aren’t aware of the nature of our own perception and judgment. I liken my approach to making the fish aware of the water they’re swimming in. (To review some of my articles in this domain, click here, here, here, and here).
What does a Type 2 approach look like? How do we modify the environment? The general domain is called choice architecture. The idea is that we change the process by which the decision is made. The book Nudge by Richard Thaler and Cass Sunstein is often cited as an exemplar of this type of work. (My article on using a courtroom process to make corporate decisions fits in the same vein).
How important is debiasing in the corporate world? In 2013, McKinsey & Company surveyed 770 corporate board members to determine the characteristics of a high-performing board. The “biggest aspiration” of high-impact boards was “reducing decision biases”. As McKinsey notes, “At the highest level, boards look inward and aspire to more ‘meta’ practices—deliberating about their own processes, for example—to remove biases from decisions.”
More recently, McKinsey has written about the business opportunity in debiasing. They note, for instance, that businesses are least likely to question their core processes. Indeed, they may not even recognize that they are making decisions. In my terminology, they’re not aware of the water they’re swimming in. As a result, McKinsey concludes “…most of the potential bottom-line impact from debiasing remains unaddressed.”
What to do? Being a teacher, I would naturally recommend training and education programs as a first step. McKinsey agrees, but only up to a point. McKinsey notes that many decision biases are so deeply embedded that managers don’t recognize them. They swim blithely along without recognizing how the water shapes and distorts their perception. Or, perhaps more frequently, they conclude, “I’m OK. You’re Biased.”
Precisely because such biases frequently operate in System 1 as opposed to System 2, McKinsey suggests a program consisting of both training and structural changes. In other words, we need to modify both the decision maker and the decision environment. I’ll write more about structural changes in the coming weeks. In the meantime, if you’d like a training program, give me a call.
The movie Apollo 13 came out in 1995 and popularized the phrase “Failure is not an option”. The flight director, Gene Kranz (played by Ed Harris), repeated the phrase to motivate engineers to find a solution immediately. It worked.
I bet that Kranz’s signature phrase caused more failures in American organizations than any other single sentence in business history. I know it caused myriad failures – and a culture of fear – in my company.
Our CEO loved to spout phrases like “Failure is not an option” and “We will not accept failure here.” It made him feel good. He seemed to believe that repeating the mantra could banish failure forever. It became a magical incantation.
Of course, we continued to have failures in our company. We built complicated software and we occasionally ran off the rails. What did we do when a failure occurred? We buried it. Better a burial than a “public hanging”.
The CEO’s mantra created a perverse incentive. He wanted to eliminate failures. We wanted to keep our jobs. To keep our jobs, we had to bury our failures. Because we buried them, we never fixed the processes that led to the failures in the first place. Our executives could easily conclude that our processes were just fine. After all, we didn’t have any failures, did we?
As we’ve learned elsewhere, design thinking is all about improving something and then improving it again and then again and again. How can we design a corporate culture that continuously improves?
One answer is the concept of the just culture. A just culture acknowledges that failures occur. Many failures result from systemic or process problems rather than from individual negligence. It’s not the person; it’s the system. A just culture aims to improve the system to 1) prevent failures wherever possible or 2) ameliorate them when they do occur. In a sense, it’s a culture designed to improve itself.
According to Barbara Brunt, “A just culture recognizes that individual practitioners should not be held accountable for system failings over which they have no control.” Rather than hiding system failures, a just culture encourages employees to report them. Designers can then improve the systems and processes. As the system improves, the culture also improves. Employees realize that reporting failures leads to good outcomes, not bad ones. It’s a virtuous circle.
The concept of a just culture is not unlike appreciative inquiry. Managers recognize that most processes work pretty well. They appreciate the successes. Failure is an exception – it’s a cause for action and design thinking as opposed to retribution. We continue to appreciate the employee as we redesign the process.
The just culture concept has established a firm beachhead among hospitals in the United States. That makes sense because hospital mistakes can be especially tragic. But I wonder if the concept shouldn’t spread to a much wider swath of companies and agencies. I can certainly think of a number of software companies that could improve their quality by improving their culture. Ultimately, I suspect that every organization could benefit by adapting a simple principle of just culture: if you want to improve your outcomes, recruit your employees to help you.
I’ve learned a bit about just culture because one of my former colleagues, Kim Ross, recently joined Outcome Engenuity, the leading consulting agency in the field of just culture. You can read more about them here. You can learn more about hospital use of just culture by clicking here, here, and here.