Strategy. Innovation. Brand.

Featured

This week’s featured posts.

Chocolate or Sex?

Such difficult choices.

A few days ago I published a brief article, Chocolate Brain, which discussed the cognitive benefits of eating chocolate. Bottom line: people who eat chocolate (like my sister) have better cognition than people who don’t. As always, there are some caveats, but it seems that good cognition and chocolate go hand in hand.

I was headed to the chocolate store when I was stopped in my tracks by a newly published article in the journal, Age and Ageing. The title, “Sex on the brain! Associations between sexual activity and cognitive function in older age” pretty much explains it all. (Click here for the full text).

The two studies – chocolate versus sex – are remarkably parallel. Both use data collected over the years through longitudinal studies. The chocolate study looked at almost 1,000 Americans who have been studied since 1975 in the Maine-Syracuse Longitudinal Study. The sex study looked at data from almost 7,000 people who have participated in the English Longitudinal Study of Aging (ELSA).

Both longitudinal studies gather data at periodic intervals; both studies are now on wave 6. The chocolate study included people aged 23 to 98. The sex study looked only at older people, aged 50 to 89.

Both studies also used standard measures of cognition. The chocolate study used six standard measures of cognition. The sex study used two: “…number sequencing, which broadly relates to executive function, and word recall, which broadly relates to memory.”

Both studies looked at the target variable – chocolate or sex – in binary fashion. Either you ate chocolate or you didn’t; either you had sex – in the last 12 months – or you didn’t.

The results of the sex study differed by gender. Men who were sexually active had higher scores on both number sequencing and word recall tests. Sexually active women had higher scores on word recall but not number sequencing. Though the differences were statistically significant, the “…magnitude of the differences in scores was small, although this is in line with general findings in the literature.”

As with the chocolate study, the sex study establishes an association but not a cause-and-effect relationship. The researchers, led by Hayley Wright, note that the association between sex and improved cognition holds, even “… after adjusting for confounding variables such as quality of life, loneliness, depression, and physical activity.”

So the association is real but we haven’t established what causes what. Perhaps sexual activity in older people improves cognition. Or maybe older people with good cognition are more inclined to have sex. Indeed, two other research papers cited by Wright et al. studied attitudes toward sex among older people in Padua, Italy and seemed to suggest that good cognition increased sexual interest rather than vice versa. (Click here and here). Still, Wright and her colleagues might use a statistical tool from the chocolate study. If cognition leads to sex (as opposed to the other way round), people having more sex today should have had higher cognition scores in earlier waves of the longitudinal study than did people who aren’t having as much sex today.
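
Here’s a rough sketch, in Python, of the kind of check that last sentence implies: compare earlier-wave cognition scores for people who do and don’t report sexual activity in the current wave. The file and column names are invented for illustration; the actual ELSA variables are organized differently.

```python
# Hypothetical sketch: do people who report sexual activity in the current wave
# already show higher cognition scores in an earlier wave?  Column names are
# invented for illustration, not ELSA's real variable names.
import pandas as pd
from scipy import stats

df = pd.read_csv("elsa_waves.csv")  # hypothetical one-row-per-participant file

active = df[df["sexually_active_wave6"] == 1]["cognition_score_wave3"]
inactive = df[df["sexually_active_wave6"] == 0]["cognition_score_wave3"]

# If cognition drives sexual activity (rather than the reverse), the "active"
# group should already have scored higher years earlier.
t_stat, p_value = stats.ttest_ind(active, inactive, equal_var=False)
print(f"earlier-wave cognition gap: t = {t_stat:.2f}, p = {p_value:.3f}")
```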

So, we need more research. I’m especially interested in establishing whether there are any interactive effects. Let’s assume for a moment that sexual activity improves cognition. Let’s assume the same for chocolate consumption. Does that imply that combining sex and chocolate leads to even better cognition? Could this be a situation in which 1 + 1 = 3? Raise your hand if you’d like to volunteer for the study.

Blockchain Beyond Bitcoin

Blockchain – It’s not just for Bitcoin anymore.

In 1979, Paper Mate introduced the world’s first ballpoint pen with erasable ink. Technology analysts considered it an important breakthrough and the news made headlines around the country. Many of us thought, “Wow! Finally I can write in ink and then erase it. How cool is that?” After a few moments of reflection, we had a second thought, “Why would I ever want to do that?”

Before erasable ink, we thought of ink’s permanence as a drawback and a disadvantage. After erasable ink appeared, we realized that ink’s permanence was actually its primary benefit. Write it once and you know it will never go away. If you might want to erase something, use a pencil.

In an odd way, permanence may also be the primary benefit of the blockchain technology that underlies Bitcoin. We think of databases as interactive, up-to-date records of the world as it is. The closer to real-time, the better. If you want to know what’s happening right this millisecond, high-speed databases will tell you.

But what if you want to know what happened some time ago? And what if you want assurances that the information you retrieve is tamper-proof and immutable? In other words, what if you want the electronic equivalent of permanent ink?

That’s exactly what blockchains on distributed ledgers give you. You can’t change the blockchain unless you can decrypt it – and that’s very difficult. Even if you can decrypt it on one network node, many original copies exist on other nodes. It’s fairly easy to restore the status quo ante. You can be very confident that the information you retrieve is unchanged from the original. It’s an immutable, permanent record.
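
To make the “permanent ink” idea concrete, here is a toy hash-chained ledger in Python. It is only a sketch of the principle, not Bitcoin’s actual implementation (no mining, signing, or networking): each block records the hash of the previous block, so quietly rewriting an old entry breaks every later link and the tampering is detectable.

```python
# Toy hash chain: each block stores the SHA-256 hash of the previous block.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def verify(chain):
    # Every block must point at the current hash of its predecessor.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, "Alice pays Bob 5")
append_block(chain, "Bob pays Carol 2")
print(verify(chain))                        # True

chain[0]["record"] = "Alice pays Bob 500"   # try to rewrite history
print(verify(chain))                        # False -- the edit is detectable
```

On a distributed ledger, many nodes hold copies of the same chain, which is why a single altered copy is easy to spot and restore.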

The blockchain/ledger technology allows Bitcoin to keep a permanent record of all transactions. That’s important if you want to create a trusted financial system. But why stop at financial transactions? Are there other transactions that might benefit from permanent, tamper-proof records?

Indeed, there are. Here are a few that are in production or beta today:

  • Ascribe – allows artists to “…lock in attribution [and] securely share and trace where your digital work spreads.”
  • Storj – a potential weak point of cloud storage is that, ultimately, your data is assigned to one server. What if that server fails or is corrupted or hacked? To improve security and privacy, Storj encrypts your data, breaks it into pieces, and distributes the pieces across multiple servers, with a blockchain keeping track of where everything lives.
  • BitHealth – while Storj can store most any kind of data, BitHealth focuses on healthcare data. It claims to provide highly secure, uninterruptible, tamper-proof health data around the world.
  • Everledger – where did your fancy diamond come from? How did it get here? Where is it insured? For how much? Everledger keeps a permanent, immutable “ledger for diamond certification and related transaction history.”
  • Proof Of Existence or Bitproof — you want to prove that you had an idea at a certain date (preferably before anyone else). You could file a patent application. But that’s expensive, time-consuming, and public. Or you could register your document in the Proof of Existence or Bitproof blockchain databases. (A minimal sketch of the idea appears just after this list.)
  • Warranteer – you buy a product that comes with a warranty, which is described in a document. The product goes bad at approximately the same time that the document goes missing. Why not save the warranty in Warranteer’s blockchain, cloud-based database?
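
As promised in the Proof of Existence item above, the idea boils down to registering a fingerprint of a document rather than the document itself. Here is a minimal sketch with an invented example document; the real services anchor digests like this one in Bitcoin transactions.

```python
import hashlib

def fingerprint(document_bytes):
    # The 64-character SHA-256 digest is what gets registered on the chain,
    # along with a timestamp; the document itself never leaves your machine.
    return hashlib.sha256(document_bytes).hexdigest()

draft = b"My big idea, written down before anyone else had it."  # stand-in document
registered = fingerprint(draft)   # conceptually, this value goes into the ledger
print(registered)

# Later, anyone holding the identical file can recompute and match the digest.
print(fingerprint(draft) == registered)   # True; any edit breaks the match
```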

I could go on and on. (If you want to dig deeper, click here, here, and here). While Bitcoin popularized the technology, blockchain extends far beyond the financial world. Indeed blockchain may disintermediate and disrupt supply chains around the world. If so, the world will get much more efficient. Is that what we want?

The Mother Of All Fallacies

An old script, it is.

How are Fox News and Michael Moore alike?

They both use the same script.

Michael Moore comes at issues from the left. Fox News comes from the right. Though they come from different points on the political spectrum, they tell the same story.

In rhetoric, it’s called the Good versus Evil narrative. It’s very simple. On one side we have good people. On the other side, we have evil people. There’s nothing in between. The evil people are cheating or robbing or killing or screwing the good people. The world would be a better place if we could only eliminate or neuter or negate or kill the evil people.

We’ve been using the Good versus Evil narrative since we began telling stories. Egyptian and Mayan hieroglyphics follow the script. So do many of the stories in the Bible. So do Republicans. So do Democrats. So do I, for that matter. It’s the mother of all fallacies.

The narrative inflames the passions and dulls the senses. It makes us angry. It makes us feel that outrage is righteous and proper. The narrative clouds our thinking. Indeed, it aims to stop us from thinking altogether. How can we think when evil is abroad? We need to act. We can think later.

I became sensitized to the Good versus Evil narrative when I lived in Latin America. I met more than a few people who are convinced that America is the embodiment of evil. They see it as a country filled with greedy, immoral thieves and murderers who are sucking the blood of the innocent and good people of Latin America. I had a difficult time squaring this with my own experiences. Perhaps the narrative is wrong.

Rhetoric teaches us to be suspicious when we hear Good versus Evil stories. The world is a messy, chaotic, and random place. Actions are nuanced and ambiguous. People do good things for bad reasons and bad things for good reasons. A simple narrative can’t possibly capture all the nuances and uncertainties of the real world. Indeed, the Good versus Evil narrative doesn’t even try. It aims to tell us what to think and ensure that we never, ever think for ourselves.

When Jimmy Carter was elected president, John Wayne attended his inaugural even though he had supported Carter’s opponent. Wayne gave a gracious speech. “Mr. President”, he said, “you know that I’m a member of the loyal opposition. But let’s remember that the accent is on ‘loyal’”. How I would love to hear anyone say that today. It’s the antithesis of Good versus Evil.

Voltaire wrote that, “Anyone who has the power to make you believe absurdities has the power to make you commit injustices.” The Good versus Evil narrative is absurd. It doesn’t explain the world; it inflames the world. Ultimately, it can make injustices seem acceptable.

The next time you hear a Good versus Evil story, grab your thinking cap. You’re going to need it.

(By the way, Tyler Cowen has a terrific TED talk on this topic that helped crystallize my thinking. You can find it here.)

Ebola and Availability Cascades

We can’t see it so it must be everywhere!

Which causes more deaths: strokes or accidents?

The way you consider this question speaks volumes about how humans think. When we don’t have data at our fingertips (i.e., most of the time), we make estimates. We do so by answering a question – but not the question we’re asked. Instead, we answer an easier question.

In fact, we make it personal and ask a question like this:

How easy is it for me to retrieve memories of people who died of strokes compared to memories of people who died by accidents?

Our logic is simple: if it’s easy to remember, there must be a lot of it. If it’s hard to remember, there must be less of it.

So, most people say that accidents cause more deaths than strokes. Actually, that’s dead wrong. As Daniel Kahneman points out, strokes cause twice as many deaths as all accidents combined.

Why would we guess wrong? Because accidents are more memorable than strokes. If you read this morning’s paper, you probably read about several accidental deaths. Can you recall reading about any deaths by stroke? Even if you read all the obituaries, it’s unlikely.

This is commonly known as the availability bias – the memories are easily available to you. You can retrieve them easily and, therefore, you overestimate their frequency. Thus, we overestimate the frequency of violent crime, terrorist attacks, and government stupidity. We read about these things regularly so we assume that they’re common, everyday occurrences.
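
A toy simulation (my illustration, not Kahneman’s) shows how lopsided coverage flips the perceived ranking. The death counts and reporting rates below are invented for illustration only.

```python
# Availability bias, simulated: strokes are (stylized) twice as common as
# accidental deaths, but accidents get reported far more often.  Estimating
# frequencies from the stories we "remember" gets the ranking backwards.
import random

random.seed(1)
TRUE_DEATHS = {"stroke": 200, "accident": 100}      # stylized 2:1 ratio
REPORT_PROB = {"stroke": 0.05, "accident": 0.60}    # accidents make the news

remembered = []
for cause, count in TRUE_DEATHS.items():
    remembered += [cause for _ in range(count) if random.random() < REPORT_PROB[cause]]

total = sum(TRUE_DEATHS.values())
for cause in TRUE_DEATHS:
    perceived = remembered.count(cause) / len(remembered)
    actual = TRUE_DEATHS[cause] / total
    print(f"{cause}: perceived share {perceived:.0%}, true share {actual:.0%}")
```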

We all suffer from the availability bias. But when many of us suffer from it at the same time, it can become an availability cascade – a form of mass hysteria. Here’s how it works. (Timur Kuran and Cass Sunstein coined the term availability cascade. I’m using Daniel Kahneman’s summary).

As Kahneman writes, an “… availability cascade is a self-sustaining chain of events, which may start from media reports of a relatively minor incident and lead up to public panic and large-scale government action.” Something goes wrong and the media reports it. It’s not an isolated incident; it could happen again. Perhaps it could affect a lot of people. Perhaps it’s an invisible killer whose effects are not evident for years. Perhaps you already have it. How would one know? Or perhaps it’s a gruesome killer that causes great suffering. Perhaps it’s not clear how one gets it. How can we protect ourselves?

Initially, the story is about the incident. But then it morphs into a meta-story. It’s about angry people who are demanding action; they’re marching in the streets and protesting in front of the White House. It’s about fear and loathing. Then experts get involved. But, of course, multiple experts never agree on anything. There are discrepancies in the stories they tell. Perhaps they don’t know what’s really going on. Perhaps they’re hiding something. Perhaps it’s a conspiracy. Perhaps we’re all going to die.

A story like this can spin out of control in a hurry. It goes viral. Since we hear about it every day, it’s easily available to our memories. Since it’s available, we assume that it’s very probable. As Kahneman points out, “…the response of the political system is guided by the intensity of public sentiment.”

Think it can’t happen in our age of instant communications? Go back and read the stories about Ebola in America. It’s a classic availability cascade. Chris Christie, the governor of New Jersey, reacted quickly — not because he needed to but because of the intensity of public sentiment. Our 24-hour news cycle needs something awful to happen at least once a day. So availability cascades aren’t going to go away. They’ll just happen faster.

Jill Disrupts Clayton (Sort Of)

I’ve been disrupted!

My career has been a steady diet of disruption.

Three times, disruptive innovations rocked the companies I worked for. First, the PC destroyed the word processing industry (which had destroyed the typewriter industry). Second, client/server applications disrupted host-centric applications. Third, cloud-based applications disrupted client/server applications.

Twice, my companies disrupted other companies. First, RISC processors disrupted CISC processors. Second, voice applications disrupted traditional call centers.

In 1997, a Harvard professor named Clayton Christensen (pictured) took examples like these and fashioned a theory of disruptive innovation. In The Innovator’s Dilemma, he explained how it works: Your company is doing just fine and understands exactly what customers need. You focus on offering customers more of what they want. Then an alternative comes along that offers less of what customers want but is easier to use, more convenient, and less costly, etc. You dismiss it as a toy. It eats your lunch.

The disruptive innovation typically offers less functionality than the product it disrupts. Early mobile phones offered worse voice quality than landlines. Early digital cameras produced worse pictures than film. But, they all offered something else that appealed to consumers: mobility, simplicity, immediate gratification, multi-functionality, lower cost, and so on. They were good enough on the traditional metrics and offered something new and appealing that tipped the balance.

My early experiences with disruption – before Christensen wrote his book — were especially painful. We didn’t understand what was happening to us. Why would customers choose an inferior product? We read books like Extraordinary Popular Delusions and The Madness of Crowds to try to understand. Was modern technology really nothing more than an updated version of tulip mania?

After Christensen’s book came out, we wised up a bit and learned how to defend against disruptions. It’s not easy but, at the very least, we have a theory. Still, disruptions show no sign of abating. Lyft and Uber are disrupting traditional taxi services. AirBnB is disrupting hotels. And MOOCs may be disrupting higher education (or maybe not).

Such disruption happens often enough that it seems almost intuitive to me. So, I was surprised when another Harvard professor, Jill Lepore, published a “take-down” article on disruptive innovation in a recent edition of The New Yorker.

Lepore’s article, “The Disruption Machine: What The Gospel of Innovation Gets Wrong”, appears to pick apart the foundation of Christensen’s work. Some of the examples from 1997 seem less prescient now. Some companies that were disrupted in the 90s have recovered since. (Perhaps we did get smarter). Disruptive companies, on the other hand, have not necessarily thrived. (Perhaps they, too, were disrupted).

Lepore points out that Christensen started a stock fund based on his theories in March 2000. Less than a year later, it was “quietly liquidated.” Unfortunately, she doesn’t mention that March 2000 was the very moment that the Internet bubble burst. Christensen may have had a good theory but he had terrible timing.

But what really irks Lepore is given away in her subtitle. It’s the idea that Christensen’s work has become “gospel”. People accept it on faith and try to explain everything with it. Consultants have taken Christensen’s ideas to the far corners of the world. (Full disclosure: I do a bit of this myself). In all the fuss, Lepore worries that disruptive innovation has not been properly criticized. It hasn’t been picked at in the same way as, say, Darwinism.

Lepore may be right but it doesn’t mean that Christensen is wrong. In the business world, we sometimes take ideas too literally and extend them too far. As I began my career, Peters and Waterman’s In Search of Excellence was almost gospel. We readers probably fell in love a little too fast. Yet Peters and Waterman had – and still have — some real wisdom to offer. (See The Hype Cycle for how this works).

I’ve read all but one of Christensen’s books and I don’t see any evidence that he promotes his work as a be-all, end-all grand Theory of Everything. He’s made careful observations and identified patterns that occur regularly. Is it a religion? No. But Christensen offers a good explanation of how an important part of the world works. I know. I’ve been there.