
Should’ve gotten Grover’s Grind.
How many times do you need to make the same decision?
Let’s say that, on your drive to work, there are two drive-through coffee shops: Grover’s Grind and The Freckled Beauty. You try each and decide that you prefer the mocha delight from The Freckled Beauty. Why would you ever make that same decision again? It’s more efficient to make the decision once and repeat the behavior as often as needed.
Let’s change the context. You’re walking down a busy street in a big city when you see a cluster of, say, six people. They’re all looking upward and pointing to a tall building. Chances are that you’ll slow down and look up as well. The cluster of people has “herded” you into behaving the same way they behave.
Herding affects us in many ways. Teenagers wear essentially the same clothing because they want to be part of the same herd. College professors dress like college professors. Similarly, if we’re surrounded by liberals, we tend to lean liberal. If surrounded by conservatives, we tend to lean conservative. We sort ourselves into different herds based on appearances, clothing, lifestyles, political positions, religion, and so on.
Herding is essentially a cognitive bias. Instead of thinking through a decision and using logic to reach an advantageous conclusion, we use a shortcut (also known as a heuristic). We let the herd think for us. If it’s good enough for them, it’s probably good enough for me.
Like most cognitive biases, herding leads us to good conclusions much of the time … but not always. When it goes wrong, it does so in predictable ways. As Dan Ariely says in the title of his book, we’re Predictably Irrational.
If we think about it, it’s easy to recognize herding. With a little forethought we can defend ourselves against groupthink. But what about self-herding, a notion that Ariely developed? Can you easily recognize it? Can you defend yourself against it?
Self-herding has to do with difficult questions. Daniel Kahneman pointed out that, when we’re asked a hard question, we often substitute an easy question and answer that instead. Here’s a hard question, “How likely is it that you’ll be shot in your neighborhood?” We don’t know the answer, so we substitute an easier question: “How many neighborhood shooting incidents can I recall from memory?” If we can remember many such incidents, then we assume that a recurrence is highly probable. This is known as the availability bias – we assume that things that are easily available to memory are likely to happen again.
Self-herding is a variant of the availability bias. As Ariely points out, it’s not easy to answer a question like, “What’s the best place to eat in your neighborhood?” So we substitute an easier question, “Where have I eaten before that I really liked?” Ariely notes that, “We can consult our preferences or we can consult our memory. It turns out it’s often easier to consult our memory.”
When you continue to choose The Freckled Beauty over Grover’s Grind, you’re herding yourself. It was the right decision at one time and you assume that it continues to be the right decision. It’s an efficient way to think. It’s also easy – you use your memory rather than your thinking muscles.
But, as we all know, things change. In fact, the speed of change seems to be accelerating. If the conditions that led to our initial decision change, then the decision is no longer valid. We can miss important opportunities and make serious mistakes. Every now and then, we need to un-herd ourselves.

It’s not a good idea if I don’t understand it.
One of the most important obstacles to innovation is the cultural rift between technical and non-technical managers. The problem is not the technology per se, but the communication of the technology. Simply put, technologists often baffle non-technical executives, and baffled executives won’t support change.
To promote innovation, we need to master the art of translating between two different cultures: technical and non-technical. We need to find a common language and vocabulary. Most importantly, we need to speak to business needs and opportunities, not to the technology itself.
In my Managing Technology class, my students act as the CIO of a fictional company called Vair. The students study Vair’s operations (in a 12-page case study) and then recommend how technical innovations could improve business operations.
Among other things, they present a technical innovation to a non-technical audience. They always come up with interesting ideas and useful technologies. And they frequently err on the side of being too technical. Their presentations are technically sound but would be baffling to most non-technical executives.
Here are the tips I give to my students on giving a persuasive presentation to a non-technical audience. I thought you might find them useful as well.
Benefits and the so what question – we often state intermediary benefits that are meaningful to technologists but not to non-technical executives. Here’s an example: “By moving to the cloud, we can consolidate our applications.” Technologists know what that means and can intuit the benefits. Non-technical managers can’t. To get your message across, run a so what dialogue in your head:
Statement: “By moving to the cloud, we can consolidate our applications.”
Question: “So what?”
Statement: “That will allow us to achieve X.”
Question: “So what?”
Statement: “That means we can increase Y and reduce Z.”
Question: “So what?”
Statement: “Our stock price will increase by 12%.”
Asking “So what?” three or four times is usually enough to reach a logical end point that both technical and non-technical managers can easily understand.
Give context and comparisons – sometimes we have an idea in mind and present only that idea, with no comparisons. We might, for instance, present J.D. Edwards as if it’s the only choice in ERP software. If you were buying a house, you would probably look at more than one option. You want to make comparisons and judge relative value. The same holds true in a technology presentation. Executives want to believe that they’re making a choice rather than simply rubber-stamping a recommendation. You can certainly guide them toward your preferred solution. By giving them a choice, however, the executives will feel more confident that they’ve chosen wisely and, therefore, will support the recommendation more strongly.
Show, don’t tell – chances are that technologists have coined new jargon and acronyms to describe the innovation. Chances are that the non-technical people in the audience won’t understand the jargon, even if they’re nodding their heads. Solution: use stories, analogies, or examples.
Words, words, words – oftentimes we prepare a script for a presentation and then put most of it on our slides. The problem is that the audience will either listen to you or read your slides; they won’t do both. You want them to listen to you – you’re much more important than the slides. So simplify your slides: the text should capture only the headline, and you should tell the rest of the story.
If you follow these tips, the executives in your audience are much more likely to comprehend the innovation’s benefits. If they comprehend the benefits, they’re much more likely to support the innovation.
(If you’d like a copy of the Vair case study, just send me an e-mail. I’m happy to share it.)

Such difficult choices.
A few days ago I published a brief article, Chocolate Brain, which discussed the cognitive benefits of eating chocolate. Bottom line: people who eat chocolate (like my sister) have better cognition than people who don’t. As always, there are some caveats, but it seems that good cognition and chocolate go hand in hand.
I was headed to the chocolate store when I was stopped in my tracks by a newly published article in the journal, Age and Ageing. The title, “Sex on the brain! Associations between sexual activity and cognitive function in older age” pretty much explains it all. (Click here for the full text).
The two studies – chocolate versus sex – are remarkably parallel. Both use data collected over the years through longitudinal studies. The chocolate study looked at almost 1,000 Americans who have been studied since 1975 in the Maine-Syracuse Longitudinal Study. The sex study looked at data from almost 7,000 people who have participated in the English Longitudinal Study of Aging (ELSA).
Both longitudinal studies gather data at periodic intervals; both studies are now on wave 6. The chocolate study included people aged 23 to 98. The sex study looked only at older people, aged 50 to 89.
Both studies also used standard measures of cognition. The chocolate study used six standard measures of cognition. The sex study used two: “…number sequencing, which broadly relates to executive function, and word recall, which broadly relates to memory.”
Both studies looked at the target variable – chocolate or sex – in binary fashion. Either you ate chocolate or you didn’t; either you had sex – in the last 12 months – or you didn’t.
The results of the sex test differed by gender. Men who were sexually active had higher scores on both number sequencing and word recall tests. Sexually active women had higher scores on word recall but not number sequencing. Though the differences were statistically significant, the “…magnitude of the differences in scores was small, although this is in line with general findings in the literature.”
As with the chocolate study, the sex study establishes an association but not a cause-and-effect relationship. The researchers, led by Hayley Wright, note that the association between sex and improved cognition holds, even “… after adjusting for confounding variables such as quality of life, loneliness, depression, and physical activity.”
So the association is real, but we haven’t established what causes what. Perhaps sexual activity in older people improves cognition. Or maybe older people with good cognition are more inclined to have sex. Indeed, two other research papers cited by Wright et al., which studied attitudes toward sex among older people in Padua, Italy, seemed to suggest that good cognition increases sexual interest rather than vice versa. (Click here and here). Still, Wright and her colleagues might borrow a statistical tool from the chocolate study: if cognition leads to sex (as opposed to the other way round), people having more sex today should have had higher cognition scores in earlier waves of the longitudinal study than people who aren’t having as much sex today.
So, we need more research. I’m especially interested in establishing whether there are any interactive effects. Let’s assume for a moment that sexual activity improves cognition. Let’s assume the same for chocolate consumption. Does that imply that combining sex and chocolate leads to even better cognition? Could this be a situation in which 1 + 1 = 3? Raise your hand if you’d like to volunteer for the study.
Close readers of this website will remember that my sister, Shelley, is addicted to chocolate. Perhaps it’s because of the bacteria in her microbiome. Perhaps it’s due to some weakness in her personality. Perhaps it’s not her fault; perhaps it is her fault. Mostly, I’ve written about the origins of her addiction. How did she come to be this way? (It’s a question that weighs heavily on a younger brother).
There’s another dimension that I’d like to focus on today: the outcome of her addiction. What are the results of being addicted to chocolate? As it happens, my sister is very smart. She’s also very focused and task oriented. She earned her Ph.D. in entomology when she was 25 and pregnant with her second child. Could chocolate be the cause?
I thought about this the other day when I browsed through the May issue of Appetite, a scientific journal reporting on the relationship between food and health. The title of the article pretty much tells the story: “Chocolate intake is associated with better cognitive function: The Maine-Syracuse Longitudinal Study”.
The Maine-Syracuse Longitudinal Study (MSLS) started in 1974 with more than 1,000 participants. Initially, the participants all resided near Syracuse, New York. The study tracks participants over time, taking detailed measurements of cardiovascular and cognitive health in “waves” usually at five-year intervals.
The initial waves of the study had little to do with diet and nothing to do with chocolate. In the sixth wave, researchers led by Georgina Crichton decided to look more closely at dietary variables. The researchers focused on chocolate because it’s rich in flavonoids and “The ability of flavonoid-rich foods to improve cognitive function has been demonstrated in both epidemiological studies … and clinical trials.” But the research record is mixed. As the authors point out, studies of “chronic” use of chocolate “…have failed to find any positive effects on cognition.”
So, does chocolate have long-term positive effects on cognition? The researchers gathered data on MSLS participants, aged 23 to 98. The selection process removed participants who suffered from dementia or had had severe strokes. The result was 968 participants who could be considered cognitively normal.
Using a questionnaire, the researchers asked participants about their dietary habits, covering foods ranging from fish to vegetables to dairy to chocolate. The questionnaire didn’t measure the quantity of food that participants consumed. Rather, it measured how often the participant ate the food – the number of times per week. The researchers used a variety of tests to measure cognitive function.
And the results? In short, participants who ate chocolate more frequently scored better on the cognitive tests.
Seems pretty clear, eh? But this isn’t an experiment, so it’s difficult to say that chocolate caused the improved function. It could be that participants with better cognition simply chose to eat more chocolate. (Seems reasonable, doesn’t it?).
So the researchers delved a little deeper. They studied the cognitive assessments of participants who had taken part in earlier waves of the study. If cognition caused chocolate consumption (rather than the other way round), then people who eat more chocolate today should have had better cognitive scores in earlier waves of the study. That was not the case. This doesn’t necessarily prove that chocolate consumption causes better cognition. But we can probably reject the hypothesis that smarter people choose to eat more chocolate.
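To make that check concrete, here is a minimal sketch in Python (using pandas and SciPy). Everything in it is hypothetical: the column names, the scores, and the tiny data frame are stand-ins rather than the MSLS data, and the researchers’ actual analysis was more elaborate. The point is only the shape of the comparison: split participants by their current chocolate habit and compare their cognition scores from an earlier wave.

# Hypothetical illustration of the reverse-causality check described above.
# The DataFrame and its column names are invented for the example.
import pandas as pd
from scipy import stats

# One row per participant: current chocolate habit (wave 6) and a
# cognition score recorded in an earlier wave of the study.
df = pd.DataFrame({
    "eats_chocolate_now": [True, True, False, True, False, False, True, False],
    "earlier_wave_cognition": [52.0, 48.5, 51.0, 49.0, 50.5, 47.5, 53.0, 49.5],
})

# If better cognition drives chocolate eating (rather than the reverse),
# today's chocolate eaters should already have scored higher back then.
eaters = df.loc[df["eats_chocolate_now"], "earlier_wave_cognition"]
non_eaters = df.loc[~df["eats_chocolate_now"], "earlier_wave_cognition"]

t_stat, p_value = stats.ttest_ind(eaters, non_eaters, equal_var=False)
print(f"Earlier-wave mean, eaters:     {eaters.mean():.2f}")
print(f"Earlier-wave mean, non-eaters: {non_eaters.mean():.2f}")
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")

# Finding no meaningful difference in the earlier-wave scores is what would
# weaken the "smarter people choose more chocolate" explanation.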
So what does this say about my sister? She’s still a pretty smart cookie. But she might be even smarter if she ate more chocolate. That’s a scary thought.

We’ll fill it when you’re born.
In Neuromancer, his 1984 novel that kicked off the cyberpunk wave, William Gibson wrote about a new type of police force. Dubbed the Turing Police, the force was composed of humans charged with the task of controlling non-human intelligence.
Humans had concluded that artificial intelligence – A.I. – would always seek to make itself more intelligent. Starting with advanced intelligence, an A.I. implementation could add new intelligence with startling speed. The more intelligence it added, the faster the pace. The growth of pure intelligence could only accelerate. Humans were no match. A.I. was a mortal threat. The Turing Police had to keep it under control.
Alas, the Turing Police were no match for gangsters, drug runners, body parts dealers, and national militaries. The most threatening A.I. in the novel was “military-grade ice” developed by the Chinese Army. Was Gibson prescient?
If the Turing Police couldn’t control A.I., I wonder if we can. Three years ago, I wrote a brief essay expressing surprise that a computer could grade a college essay better than I could. I thought of grading papers as a messy, fuzzy, subtle task and assumed that no machine could match my superior wit. I was wrong.
But I’m a teacher at heart and I assumed that the future would still need people like me to teach the machines. Again, I was wrong. Here’s a recent article from MIT Technology Review that describes how robots are teaching robots. Indeed, they’re even pooling their knowledge in “robot wikipedias” so they can learn even more quickly. Soon, robots will be able to tune in, turn on, and take over.
So, is there any future for me … or any other knowledge worker? Well, I still think I’m funnier than a robot. But if my new career in standup comedy doesn’t work out, I’m not sure that there’s any real need for me. Or you, for that matter.
That raises an existential question: are humans needed? We’ve traditionally defined “need” based on our ability to produce something. We produced goods and services that made our lives better and, therefore, we were needed. But if machines can produce goods and services more effectively than we can, are we still needed? Perhaps it’s time to re-define why we’re here.
Existential questions are messy and difficult to resolve. (Indeed, maybe it will take A.I. to figure out why we’re here). While we’re debating the issue, we have a narrower problem to solve: the issue of wealth distribution. Traditionally, we’ve used productivity as a rough guide for distributing wealth. The more you produce, the more wealth flows your way. But what if nobody produces anything? How will we parcel out the wealth?
This question has led to the development of a concept that’s now generally known as Universal Basic Income or U.B.I. The idea is simple – the government gives everybody money. It doesn’t depend on need or productivity or performance or fairness or justice. There’s no concept of receiving only what you deserve or what you’ve earned. The government just gives you money.
Is it fair? It depends on how you define fairness. Is it workable? It may be the only workable scheme in an age of abundance driven by intelligent machines. Could a worldwide government administer the scheme evenhandedly? If the government is composed of humans, then I doubt that the scheme would be fair and balanced. On the other hand, if the government were composed of A.I.s, then it might work just fine.