
It’s not a good idea if I don’t understand it.
One of the most important obstacles to innovation is the cultural rift between technical and non-technical managers. The problem is not the technology per se, but the communication of the technology. Simply put, technologists often baffle non-technical executives, and baffled executives won’t support change.
To promote innovation, we need to master the art of speaking between two different cultures: technical and non-technical. We need to find a common language and vocabulary. Most importantly, we need to speak to business needs and opportunities, not to the technology itself.
In my Managing Technology class, my students act as the CIO of a fictional company called Vair. The students study Vair’s operations (in a 12-page case study) and then recommend how technical innovations could improve business operations.
Among other things, they present a technical innovation to a non-technical audience. They always come up with interesting ideas and useful technologies. And they frequently err on the side of being too technical. Their presentations are technically sound but would be baffling to most non-technical executives.
Here are the tips I give to my students on giving a persuasive presentation to a non-technical audience. I thought you might find them useful as well.
Benefits and the so what question – we often state intermediate benefits that are meaningful to technologists but not to non-technical executives. Here’s an example: “By moving to the cloud, we can consolidate our applications.” Technologists know what that means and can intuit the benefits. Non-technical managers can’t. To get your message across, run a “so what” dialogue in your head:
Statement: “By moving to the cloud, we can consolidate our applications.”
Question: “So what?”
Statement: “That will allow us to achieve X.”
Question: “So what?”
Statement: “That means we can increase Y and reduce Z.”
Question: “So what?”
Statement: “Our stock price will increase by 12%.”
Asking “so what” three or four times is usually enough to reach a logical end point that both technical and non-technical managers can easily understand.
Give context and comparisons – sometimes we have an idea in mind and present only that idea, with no comparisons. We might, for instance, present J.D. Edwards as if it’s the only choice in ERP software. If you were buying a house, you would probably look at more than one option. You want to make comparisons and judge relative value. The same holds true in a technology presentation. Executives want to believe that they’re making a choice rather than simply rubber-stamping a recommendation. You can certainly guide them toward your preferred solution. By giving them a choice, however, the executives will feel more confident that they’ve chosen wisely and, therefore, will support the recommendation more strongly.
Show, don’t tell – chances are that technologists have coined new jargon and acronyms to describe the innovation. Chances are that non-technical people in the audience won’t understand the jargon — even if they’re nodding their heads. The solution: use stories, analogies, or examples instead.
Words, words, words – oftentimes we prepare a script for a presentation and then put most of it on our slides. The problem is that the audience will either listen to you or read your slides. They won’t do both. You want them to listen to you – you’re much more important than the slides. You’ll need to simplify your slides: the text should capture the headline, and you should tell the rest of the story.
If you follow these tips, the executives in your audience are much more likely to comprehend the innovation’s benefits. If they comprehend the benefits, they’re much more likely to support the innovation.
(If you’d like a copy of the Vair case study, just send me an e-mail. I’m happy to share it.)

Such difficult choices.
A few days ago I published a brief article, Chocolate Brain, which discussed the cognitive benefits of eating chocolate. Bottom line: people who eat chocolate (like my sister) have better cognition than people who don’t. As always, there are some caveats, but it seems that good cognition and chocolate go hand in hand.
I was headed to the chocolate store when I was stopped in my tracks by a newly published article in the journal Age and Ageing. The title, “Sex on the brain! Associations between sexual activity and cognitive function in older age”, pretty much explains it all. (Click here for the full text).
The two studies – chocolate versus sex – are remarkably parallel. Both use data collected over the years through longitudinal studies. The chocolate study looked at almost 1,000 Americans who have been studied since 1975 in the Maine-Syracuse Longitudinal Study. The sex study looked at data from almost 7,000 people who have participated in the English Longitudinal Study of Aging (ELSA).
Both longitudinal studies gather data at periodic intervals; both studies are now on wave 6. The chocolate study included people aged 23 to 98. The sex study looked only at older people, aged 50 to 89.
Both studies also used standard measures of cognition. The chocolate study used six standard measures of cognition. The sex study used two: “…number sequencing, which broadly relates to executive function, and word recall, which broadly relates to memory.”
Both studies looked at the target variable – chocolate or sex – in binary fashion. Either you ate chocolate or you didn’t; either you had sex – in the last 12 months – or you didn’t.
The results of the sex study differed by gender. Men who were sexually active had higher scores on both the number sequencing and word recall tests. Sexually active women had higher scores on word recall but not number sequencing. Though the differences were statistically significant, the “…magnitude of the differences in scores was small, although this is in line with general findings in the literature.”
As with the chocolate study, the sex study establishes an association but not a cause-and-effect relationship. The researchers, led by Hayley Wright, note that the association between sex and improved cognition holds, even “… after adjusting for confounding variables such as quality of life, loneliness, depression, and physical activity.”
So the association is real, but we haven’t established what causes what. Perhaps sexual activity in older people improves cognition. Or maybe older people with good cognition are more inclined to have sex. Indeed, two other research papers cited by Wright et al., which studied attitudes toward sex among older people in Padua, Italy, seemed to suggest that good cognition increases sexual interest rather than vice versa. (Click here and here). Still, Wright and her colleagues might borrow a statistical tool from the chocolate study. If cognition leads to sex (as opposed to the other way round), people having more sex today should have had higher cognition scores in earlier waves of the longitudinal study than people who aren’t having as much sex today.
So, we need more research. I’m especially interested in establishing whether there are any interactive effects. Let’s assume for a moment that sexual activity improves cognition. Let’s assume the same for chocolate consumption. Does that imply that combining sex and chocolate leads to even better cognition? Could this be a situation in which 1 + 1 = 3? Raise your hand if you’d like to volunteer for the study.
Close readers of this website will remember that my sister, Shelley, is addicted to chocolate. Perhaps it’s because of the bacteria in her microbiome. Perhaps it’s due to some weakness in her personality. Perhaps it’s not her fault; perhaps it is her fault. Mostly, I’ve written about the origins of her addiction. How did she come to be this way? (It’s a question that weighs heavily on a younger brother).
There’s another dimension that I’d like to focus on today: the outcome of her addiction. What are the results of being addicted to chocolate? As it happens, my sister is very smart. She’s also very focused and task oriented. She earned her Ph.D. in entomology when she was 25 and pregnant with her second child. Could chocolate be the cause?
I thought about this the other day when I browsed through the May issue of Appetite, a scientific journal reporting on the relationship between food and health. The title of the article pretty much tells the story: “Chocolate intake is associated with better cognitive function: The Maine-Syracuse Longitudinal Study”.
The Maine-Syracuse Longitudinal Study (MSLS) started in 1974 with more than 1,000 participants. Initially, the participants all resided near Syracuse, New York. The study tracks participants over time, taking detailed measurements of cardiovascular and cognitive health in “waves” usually at five-year intervals.
The initial waves of the study had little to do with diet and nothing to do with chocolate. In the sixth wave, researchers led by Georgina Crichton decided to look more closely at dietary variables. The researchers focused on chocolate because it’s rich in flavonoids and “The ability of flavonoid-rich foods to improve cognitive function has been demonstrated in both epidemiological studies … and clinical trials.” But the research record is mixed. As the authors point out, studies of “chronic” use of chocolate “…have failed to find any positive effects on cognition.”
So, does chocolate have long-term positive effects on cognition? The researchers gathered data on MSLS participants, aged 23 to 98. The selection process removed participants who suffered from dementia or had had severe strokes. The result was 968 participants who could be considered cognitively normal.
Using a questionnaire, the researchers asked participants about their dietary habits, covering foods ranging from fish to vegetables to dairy to chocolate. The questionnaire didn’t measure the quantity of food that participants consumed. Rather, it measured how often participants ate each food, expressed as the number of times per week. The researchers used a variety of tests to measure cognitive function.
And the results? As the article’s title suggests, chocolate intake was associated with better cognitive function.
Seems pretty clear, eh? But this isn’t an experiment, so it’s difficult to say that chocolate caused the improved function. It could be that participants with better cognition simply chose to eat more chocolate. (Seems reasonable, doesn’t it?).
So the researchers delved a little deeper. They studied the cognitive assessments of participants who had taken part in earlier waves of the study. If cognition caused chocolate consumption (rather than the other way round), then people who eat more chocolate today should have had better cognitive scores in earlier waves of the study. That was not the case. This doesn’t necessarily prove that chocolate consumption causes better cognition. But we can probably reject the hypothesis that smarter people choose to eat more chocolate.
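For readers who want to see that reverse-causation check spelled out, here is a minimal sketch in Python. It is an illustration only: the file name, column names, and the simple two-sample test are my own assumptions, not the paper’s actual analysis, which adjusts for many covariates.

```python
# Minimal sketch of the reverse-causation check (illustrative only).
# The file name and column names below are hypothetical stand-ins for the MSLS data.
import pandas as pd
from scipy import stats

df = pd.read_csv("msls_waves.csv")  # assumed columns: chocolate_per_week, earlier_wave_cognition

# Split participants by whether they report eating chocolate today (wave 6).
eaters = df.loc[df["chocolate_per_week"] > 0, "earlier_wave_cognition"]
non_eaters = df.loc[df["chocolate_per_week"] == 0, "earlier_wave_cognition"]

# If better cognition drove chocolate consumption, today's chocolate eaters should
# already have scored higher in the earlier wave. A Welch two-sample t-test checks that.
t_stat, p_value = stats.ttest_ind(eaters, non_eaters, equal_var=False)
print(f"Earlier-wave cognition, eaters vs. non-eaters: t = {t_stat:.2f}, p = {p_value:.3f}")
```

A null result in a check like this is what lets the authors set aside the “smarter people simply choose more chocolate” explanation.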
So what does this say about my sister? She’s still a pretty smart cookie. But she might be even smarter if she ate more chocolate. That’s a scary thought.

We’ll fill it when you’re born.
In his 1984 novel Neuromancer, which kicked off the cyberpunk wave, William Gibson wrote about a new type of police force. Dubbed the Turing Police, the force was composed of humans charged with controlling non-human intelligence.
Humans had concluded that artificial intelligence – A.I. – would always seek to make itself more intelligent. Starting with advanced intelligence, an A.I. implementation could add new intelligence with startling speed. The more intelligence it added, the faster the pace. The growth of pure intelligence could only accelerate. Humans were no match. A.I. was a mortal threat. The Turing Police had to keep it under control.
Alas, the Turing Police were no match for gangsters, drug runners, body parts dealers, and national militaries. The most threatening A.I. in the novel was “military-grade ice” developed by the Chinese Army. Was Gibson prescient?
If the Turing Police couldn’t control A.I., I wonder if we can. Three years ago, I wrote a brief essay expressing surprise that a computer could grade a college essay better than I could. I thought of grading papers as a messy, fuzzy, subtle task and assumed that no machine could match my superior wit. I was wrong.
But I’m a teacher at heart and I assumed that the future would still need people like me to teach the machines. Again, I was wrong. Here’s a recent article from MIT Technology Review that describes how robots are teaching robots. Indeed, they’re even pooling their knowledge in “robot wikipedias” so they can learn even more quickly. Soon, robots will be able to tune in, turn on, and take over.
So, is there any future for me … or any other knowledge worker? Well, I still think I’m funnier than a robot. But if my new career in standup comedy doesn’t work out, I’m not sure that there’s any real need for me. Or you, for that matter.
That raises an existential question: are humans needed? We’ve traditionally defined “need” based on our ability to produce something. We produced goods and services that made our lives better and, therefore, we were needed. But if machines can produce goods and services more effectively than we can, are we still needed? Perhaps it’s time to re-define why we’re here.
Existential questions are messy and difficult to resolve. (Indeed, maybe it will take A.I. to figure out why we’re here). While we’re debating the issue, we have a narrower problem to solve: the issue of wealth distribution. Traditionally, we’ve used productivity as a rough guide for distributing wealth. The more you produce, the more wealth flows your way. But what if nobody produces anything? How will we parcel out the wealth?
This question has led to the development of a concept that’s now generally known as Universal Basic Income or U.B.I. The idea is simple – the government gives everybody money. It doesn’t depend on need or productivity or performance or fairness or justice. There’s no concept of receiving only what you deserve or what you’ve earned. The government just gives you money.
Is it fair? It depends on how you define fairness. Is it workable? It may be the only workable scheme in an age of abundance driven by intelligent machines. Could a worldwide government administer the scheme evenhandedly? If the government is composed of humans, then I doubt that the scheme would be fair and balanced. On the other hand, if the government were composed of A.I.s, then it might work just fine.

Human 2.0
When I worked for business-to-business software vendors, I often met companies that were simply out of date. They hadn’t caught up with the latest trends and buzzwords. They used inefficient processes and outdated business practices.
Why were they so far behind? Because that’s the way their software worked. They had loaded an early version of a software system (perhaps from my company) and never upgraded it. The system became comfortable. It was the way they had always done it. If it ain’t broke, don’t fix it.
I’ve often wondered if we humans don’t do the same thing. Perhaps we load the software called Human 1.0 during childhood and then just forget about it. It works. It gets us through the day. It’s comfortable. Don’t mess with success.
Fixing the problem for companies was easy: just buy my new software. But how do we solve the problem (if it is a problem) for humans? How do we load Human 2.0? What patches do we need? What new processes do we need to learn? What new practices do we need to adopt?
As a teacher of critical thinking, I’d like to think that critical thinking is one element of such an upgrade. When we learn most skills – ice skating, piano playing, cooking, driving, etc. – we seek out a teacher, and perhaps a coach, to help us master the craft and upgrade our skills to a new level.
But not so with thinking. We think we know how to think; we’ve been doing it all our lives. We don’t realize that thinking is a skill like any other. If we want to get better at basketball, we practice. If we want to get better at thinking … well, we don’t really want to get better at thinking, do we? We assume that we’re good enough. If the only thinking we know is the thinking that we do, then we don’t see the need to change our thinking.
So how do we help people realize that they can upgrade their thinking? Focusing on fallacies often works. I often start my classes by asking students to think through the way we make mistakes. For instance, we often use shortcuts – more formally known as heuristics – to reach decisions quickly. Most of the time they work – we make good decisions and save time in the process. But when they don’t work, we make very predictable errors. We invade the wrong country, marry the wrong person, or take the wrong job.
When we make big mistakes, we can draw one of two conclusions. On the one hand, we might conclude that we made a mistake and need to rethink our thinking. On the other hand, we might conclude that our thinking was just fine but that our political opponents undermined our noble efforts. If not for them, everything would be peachy. The second conclusion is lazy and popular. We’re not responsible for the mess – someone else is.
But let’s focus for a moment on the first conclusion – we realize that we need to upgrade our thinking. Then what? Well… I suppose that everyone could sign up for my critical thinking class. But what if that’s not enough? As people realize that there are better ways to think, they’ll ask for coaches, and teachers, and gurus.
If you’re an entrepreneur, there’s an opportunity here. I expect that many companies and non-profit organizations will emerge to promote the need and service the demand. The first one I’ve spotted is the Center for Applied Rationality (CFAR). Based in Berkeley (of course), CFAR has the motto “Turning Cognitive Science Into Cognitive Practice”. I’ve browsed through their website and read a very interesting article in the New York Times (click here). CFAR seems to touch on many of the same concepts that I use in my critical thinking class – but they do it on a much grander scale.
If I’m right, CFAR is at the leading edge of an interesting new wave. I expect to see many more organizations pop up to promote rationality, cognitive enhancements, behavioral economics, or … to us traditional practitioners, critical thinking. Get ready. Critical thinking is about to be industrialized. Time to put your critical thinking cap on.