Strategy. Innovation. Brand.

Concept Creep and Pessimism

What’s your definition of blue?

My friend Andy once taught in the Semester at Sea program. The program has an ocean-going ship and brings undergraduates together for a semester of sea travel, classes, and port calls. Andy told me that he was fascinated watching these youngsters come together and form in-groups and out-groups. The cliques were fairly stable while the ship was at sea but more fluid when it was in port.

Andy told me, for instance, that some of the women described some of the men as “Ship cute, shore ugly.” The very concept of “cute” was flexible and depended entirely on context. When at sea, a limited supply of men caused the “cute” definition to expand. In port, with more men available, the definition of cute became more stringent.

We usually think of concepts as more-or-less fixed. In that respect, they seem unlike other processes that expand over time. The military, for instance, is familiar with “mission creep” – a mission may start with small, well-defined objectives that grow over time. Similarly, software developers understand “feature creep” – new features are added as the software is developed. But do concepts creep? The Semester at Sea example suggests that they do, depending on prevalence.

This was also the finding of a research paper published in a recent issue of Science magazine. (Click here). Led by David Levari, the researchers showed that “… people often respond to decreases in the prevalence of a stimulus by expanding their concept of it.” In the Semester at Sea example, as the stimulus (men) decreases, the concept of cute expands. According to Levari et al., this is a common phenomenon and not just related to hormonal youngsters isolated on a ship.

The researchers started with a very neutral stimulus – the color of dots. They presented 1,000 dots ranging in color from purple to blue and asked participants to identify the blue ones. They repeated the trial several hundred times. Participants were remarkably consistent from trial to trial: dots identified as blue in the first trials were still identified as blue in the last trials.

The researchers then repeated the trials while reducing the number of blue dots. Would participants in the second set of trials – with decreased stimulus – expand their definition of “blue” and identify dots as blue that they had originally identified as purple? Indeed, they would. In fact, the number of purple-to-blue “crossover” dots was remarkably consistent across numerous trials.

The researchers also varied the instructions for the comparisons. In the first study, participants were told that the number of blue dots “might change” in the second pass. In a second study, participants were told that the number of blue dots would “definitely decrease.” In a third study, participants were instructed to “be consistent” and were offered monetary rewards for doing so. In some studies the number of blue dots declined gradually. In others, the blue dots decreased abruptly. These procedural changes had virtually no impact on the results. In all cases, declining numbers of blue dots resulted in an expanded definition of “blue”.
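The mechanics are easy to simulate. Below is a minimal sketch – my own illustration, not the researchers’ code – of one simple model that produces the effect: an observer who judges “blue” relative to the dots in the current session (say, anything bluer than the session’s median hue) will expand the category automatically as blue dots become scarce.

```python
import random

def run_session(n_dots, blue_fraction):
    """Simulate one block of dot judgments.

    Hue runs from 0.0 (pure purple) to 1.0 (pure blue); 0.5 is the
    'objective' boundary. A prevalence-sensitive observer calls a dot
    blue when it is bluer than the median hue seen in the block,
    rather than using a fixed cutoff.
    """
    n_blue = int(n_dots * blue_fraction)
    hues = ([random.uniform(0.5, 1.0) for _ in range(n_blue)] +
            [random.uniform(0.0, 0.5) for _ in range(n_dots - n_blue)])
    boundary = sorted(hues)[len(hues) // 2]  # the observer's adaptive threshold
    # "Crossover" dots: objectively purple, but judged blue anyway.
    crossovers = sum(1 for h in hues if boundary < h < 0.5)
    return boundary, crossovers

random.seed(42)
for frac in (0.50, 0.25, 0.06):  # blue dots become rarer across blocks
    boundary, crossovers = run_session(1000, frac)
    print(f"blue fraction {frac:.2f}: boundary at hue {boundary:.2f}, "
          f"{crossovers} purple dots judged blue")
```

As the blue fraction falls, the adaptive boundary slides toward purple, and hundreds of dots that a fixed-threshold observer would call purple cross over to “blue” – the same pattern the participants showed.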

Does concept creep extend beyond dots? The researchers did similar trials with 800 images of human faces that had been rated on a continuum from “very threatening” to “not very threatening.” The results were essentially the same as the dot studies. When the researchers reduced the number of threatening faces, participants expanded their definition of “threatening.”

All these tests used visual stimuli. Does concept creep also apply to nonvisual stimuli? To test this, the researchers asked participants to evaluate whether 240 research proposals were ethical or not. The results were essentially the same. When the participants saw many unethical proposals, their definition of ethics was fairly stringent. When they saw fewer unethical proposals, their definition expanded.

It seems then that “prevalence-induced concept change” – as the researchers label it – is probably common in human behavior. Could this help explain some of the pessimism in today’s world? For example, numerous sources verify that the crime rate in the United States has declined over the past two decades. (See here, here, and here, for example). Yet many people believe that the crime rate has soared. Could concept creep be part of the problem? It certainly seems likely.

Yet again, our perception of reality differs from actual reality. Like other cognitive biases, concept creep distorts our perception in predictable ways. As the number of stimuli – from cute to blue to ethical – goes down, we expand our definition of what the concept actually means. As “bad news” decreases, we expand our definition of what is “bad”. No wonder we’re pessimistic.

Bitcoin, Blockchain, and Five Years

What’s next?

I first wrote about Bitcoin on this website five years ago today. (Click here). I decided not to buy any at the time because the price had surged to well over one hundred dollars! Clearly it was a bubble. If only I had known that the price would peak at $18,000 a few years later. (Today, the price is about $6,800).

So what’s happened over the past five years? Let’s look at Bitcoin’s benefits and then investigate some of the ways that it has changed our world.

Bitcoin is based on a blockchain stored in multiple locations. This gives it two major advantages: it can’t be erased and it can’t be tampered with. Simply put, it’s like writing checks in ink rather than in pencil, on paper that can’t be destroyed. A blockchain can record transactions and ensure that they will always be available as a matter of public record. Bitcoin uses this feature to record purchases and sales. Each transaction is recorded permanently, meaning that you can’t spend the same Bitcoin more than once.
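To make the “ink, not pencil” point concrete, here is a minimal Python sketch of the underlying data structure. It shows only the tamper-evidence idea; real Bitcoin adds proof-of-work mining, Merkle trees, and a peer-to-peer network on top.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the hash of the block before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain):
    """Detect tampering: every stored prev_hash must match a freshly
    recomputed hash of the preceding block."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, ["Alice pays Bob 1 BTC"])
add_block(chain, ["Bob pays Carol 0.5 BTC"])
print(is_valid(chain))   # True

chain[0]["transactions"] = ["Alice pays Bob 100 BTC"]   # try to rewrite history
print(is_valid(chain))   # False: the edit is detectable
```

Because each block commits to the hash of the block before it, changing any historical transaction breaks every later link. And since copies of the chain live in many locations, an altered copy is easy to spot and reject.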

Bitcoins can also resist inflation because they can’t be printed at a government’s whim. Instead, they’re “mined” through complex mathematical calculations, a process that grows the supply of coins gradually and predictably. This appeals to anyone who worries that governments will artificially inflate their national currencies.
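That predictability is baked into the protocol: the mining reward began at 50 bitcoins per block and is cut in half every 210,000 blocks – roughly every four years – which caps the total supply at about 21 million coins. A few lines of Python confirm the arithmetic:

```python
def total_supply():
    """Sum Bitcoin's block rewards: 50 BTC per block at launch, halved
    every 210,000 blocks. (Approximate: the real protocol rounds each
    reward down to whole satoshis, i.e. 1e-8 BTC.)"""
    reward, supply = 50.0, 0.0
    while reward >= 1e-8:            # stop once the reward falls below one satoshi
        supply += reward * 210_000
        reward /= 2
    return supply

print(f"{total_supply():,.0f} coins")   # ~21,000,000, fixed in advance
```

The schedule lives in the software, not in a central bank – which is exactly the appeal described above.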

Bitcoin is also anonymous – just like cash. Unlike cash, however, it’s not physical. It can easily be moved around the world as electronic blips. That makes transactions convenient and inexpensive and could conceivably cut out banks as middlemen. This makes Bitcoin attractive to many groups, especially criminals.

So, what’s happened? First, the idea of the blockchain has spread. There’s no reason to limit the blockchain to currency transactions. We can store anything in a blockchain and ensure that it never disappears. In other words, many of us now believe that a blockchain is more trustworthy than governments or financial institutions.

As Tim Wu writes, we are undergoing, “… a monumental transfer of social trust: away from human institutions backed by governments and to systems reliant on well-tested computer code.” Wu notes that we already trust computers to fly airplanes, assist in surgery, and guide us to our destination. Why not financial systems as well? A well-organized cryptocurrency could become the de facto standard global currency and eliminate the need for many banking services.

But we don’t need to limit the blockchain to financial transactions. Any record that must be inviolate can potentially benefit from blockchain technology. Some examples:

  • The Peruvian economist Hernando de Soto has suggested that we can combat poverty by storing land ownership records in blockchain systems. Land ownership disputes in Latin America can last for centuries. Blockchain could simplify the process and ensure that those who hold title to land can’t be cheated out of it.
  • De Soto’s proposal eliminates the government as the arbiter of land titles. This is part of a broader trend to disintermediate governments. Why should governments be information czars? Better to store our records in blockchain. This could include land titles, personal identification, health records (including our DNA), the provenance of art works, stock ownership, international fund transfers, and self-enforcing contracts.
  • Tim Wu suggests that we’re moving our trust from governments to code. But we’re only part way there. In our first step away from governments, we put our trust in giant Internet companies like Facebook and Google. We’re now discovering that these entities are no more trustworthy than governments. Indeed, they may be less trustworthy. What’s the next step? Many suggest that it’s the blockchain.
  • Meanwhile, afraid of being disintermediated, governments are starting to plan cryptocurrencies of their own. Russia has proposed a digital currency with several former Soviet republics. Sweden and China are both interested in their own cryptocurrencies and have established study groups. But the first government out of the gate seems to be Venezuela, driven by a financial crisis. Venezuela’s printed currency, the Bolívar, suffers from inflation rates around 4,000%. So the government has just announced a new cryptocurrency called the Petro, backed by the nation’s oil revenues. The government is essentially saying, “We’re incompetent to print paper money but you can trust the Petro because it’s based on code, not a bumbling bureaucracy. We humans can’t interfere with it.” Will it work? Stay tuned.

Of course, we can also use blockchains for less noble pursuits. The blockchain can store any information, including pornography. That’s a problem but it’s the same problem that was faced by myriad new technologies, including VCRs and the Internet itself. Criminals can also use cryptocurrencies for ransomware attacks, and to traffic in contraband or avoid taxes. We can ameliorate these problems but we probably can’t eliminate them. Still, the advantages of the technology seem much greater than the disadvantages.

So … what happens over the next five years? The New York Times reports that venture capitalists poured more than half a billion dollars into blockchain projects in the first three months of this year. So, I expect we’ll see a shakeout at the platform level over the next five years. Today, there are many ways to implement blockchain. It reminds me of the personal computing market in, say, 1985 – too many vendors selling too many technologies through too many channels. I expect the market will consolidate around two or perhaps three major platforms. Who will win? Perhaps IBM. Perhaps R3. Perhaps Ethereum. Perhaps Multichain. Rather than buying Bitcoin, I’d suggest that you study the platforms and place your bets accordingly.

In the meantime, we need to ask ourselves a simple question: Are we really willing to forgo our trust in traditional institutions and put it all into computer code?

Delayed Intuition – How To Hire Better

On a scale of 1 – 5, is he technically proficient?

Daniel Kahneman is rightly respected for discovering and documenting any number of irrational human behaviors. Prospect theory – developed by Kahneman and his colleague, Amos Tversky – has led to profound new insights into how we think, behave, and spend our money. Indeed, there’s a straight line from Kahneman and Tversky to the new discipline called Behavioral Economics.

In my humble opinion, however, one of Kahneman’s innovations has been overlooked. The innovation doesn’t have an agreed-upon name so I’m proposing that we call it the Kahneman Interview Technique or KIT.

The idea behind KIT is fairly simple. We all know about the confirmation bias – the tendency to attend to information that confirms what we already believe and to ignore information that doesn’t. Kahneman’s insight is that confirmation bias distorts job interviews.

Here’s how it works. When we meet a candidate for a job, we immediately form an impression. The distortion occurs because this first impression colors the rest of the interview. Our intuition might tell us, for instance, that the candidate is action-oriented. For the rest of the interview, we attend to clues that confirm this intuition and ignore those that don’t. Ultimately, we base our evaluation on our initial impressions and intuition, which may be sketchy at best. The result – as Google found – is that there is no relationship between an interviewer’s evaluation and a candidate’s actual performance.

To remove the distortion of our confirmation bias, KIT asks us to delay our intuition. How can we delay intuition? By focusing first on facts and figures. For any job, there are prerequisites for success that we can measure by asking factual questions. For instance, a salesperson might need to be: 1) well-spoken; 2) observant; 3) technically proficient; and so on. An executive might need to be: 1) a critical thinker; 2) a good strategist; 3) a good talent finder; etc.

Before the interview, we prepare factual questions that probe these prerequisites. We begin the interview with facts and develop a score for each prerequisite – typically on a simple scale like 1 – 5. The idea is not to record what the interviewer thinks but rather to record what the candidate has actually done. This portion of the interview is based on facts, not perceptions.

Once we have a score for each dimension, we can take the interview in more qualitative directions. We can ask broader questions about the candidate’s worldview and philosophy. We can invite our intuition to enter the process. At the end of the process, Kahneman suggests that the interviewer close her eyes, reflect for a moment, and answer the question, How well would this candidate do in this particular job?
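To see how mechanical the first phase is, here is a minimal sketch of a KIT-style score sheet in Python. The structure – factual scores first, one 1–5 score per prerequisite, global judgment recorded only at the end – follows the description above; the role, dimensions, and scores are purely illustrative.

```python
from statistics import mean

# Illustrative prerequisites for a sales role (from the examples above).
PREREQUISITES = ["well-spoken", "observant", "technically proficient"]

def score_candidate(fact_scores, delayed_global_judgment):
    """Average the per-dimension factual scores (1-5), recorded before
    any global impression; the eyes-closed judgment comes only at the end."""
    assert set(fact_scores) == set(PREREQUISITES), "score every dimension first"
    return {
        "factual_average": round(mean(fact_scores.values()), 2),
        "delayed_intuition": delayed_global_judgment,
    }

# Hypothetical candidate: factual answers scored first, intuition last.
facts = {"well-spoken": 4, "observant": 3, "technically proficient": 5}
print(score_candidate(facts, delayed_global_judgment=4))
# {'factual_average': 4.0, 'delayed_intuition': 4}
```

The assert enforces the discipline: no overall judgment gets recorded until every factual dimension has a score.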

Kahneman and other researchers have found that the factual scores are much better predictors of success than traditional interviews. Interestingly, the concluding global evaluation is also a strong predictor, especially when compared with “first impression” predictions. In other words, delayed intuition is better at predicting job success than immediate intuition. It’s a good idea to keep in mind the next time you hire someone.

I first learned about the Kahneman Interview Technique several years ago when I read Kahneman’s book, Thinking, Fast and Slow. But the book is filled with so many good ideas that I forgot about the interviews. I was reminded of them recently when I listened to the 100th episode of the podcast Hidden Brain, which features an interview with Kahneman. This article draws on both sources.

Why Study Critical Thinking?

Friend or foe?

People often ask me why they should take a class in critical thinking. Their typical refrain is, “I already know how to think.” I find that the best answer is a story about the mistakes we often make.

So I offer up the following example, drawn from recent news, about very smart people who missed a critical clue because they were not thinking critically.

The story is about the conventional wisdom surrounding Alzheimer’s. We’ve known for years that people who have Alzheimer’s also have higher than normal deposits of beta amyloid plaques in their brains. These plaques build up over time and interfere with memory and cognitive processes.

The conventional wisdom holds that beta amyloid plaques are an aberration. The brain has essentially gone haywire and starts to attack itself. It’s a mistake. A key research question has been: how do we prevent this mistake from happening? It’s a difficult question to answer because we have no idea what triggered the mistake.

But recent research, led by Rudolph Tanzi and Robert Moir, considers the opposite question. What if the buildup of beta amyloid plaques is not a mistake? What if it serves some useful purpose? (Click here and here for background articles).

Pursuing this line of reasoning, Tanzi and Moir discovered that beta amyloid is actually an antimicrobial substance. It has a beneficial purpose: to attack bacteria and viruses and smother them. It’s not a mistake; it’s a defense mechanism.

Other Alzheimer’s researchers have described themselves as “gobsmacked” and “surprised” by the discovery. One said, “I never thought about it as a possibility.”

A student of critical thinking might ask, Why didn’t they think about this sooner? A key tenet of critical thinking is that one should always ask the opposite question. If conventional wisdom holds that X is true, a critical thinker would automatically ask, Is it possible that the opposite of X is true in some way?

Asking the opposite question is a simple way to identify, clarify, and check our assumptions. When the conventional wisdom is correct, it leads to a dead end. But, occasionally, asking the opposite question can lead to a Nobel Prize. Consider the case of Barry Marshall.

A doctor in Perth, Australia, Marshall was concerned about his patients’ stomach ulcers. Conventional wisdom held that bacteria couldn’t possibly live in the gastric juices of the human gut. So bacteria couldn’t possibly cause ulcers. More likely, stress and anxiety were the culprits. But Marshall asked the opposite question and discovered the bacterium now known as H. pylori. Stress doesn’t cause ulcers; bacteria do. For asking the opposite question – and answering it – Marshall won the Nobel Prize in Medicine in 2005.

The discipline of critical thinking gives us a structure and method – almost a checklist – for how to think through complex problems. We should always ask the opposite question. We should be aware of common fallacies and cognitive biases. We should understand the basics of logic and argumentation. We should ask simple, blunt questions. We should check our egos at the door. If we do all this – and more – we tilt the odds in our favor. We prepare our minds systematically and open them to new possibilities – perhaps even the possibility of curing Alzheimer’s. That’s a good reason to study critical thinking.

RAND’s Truth Decay

Truth Decay in action.

A few days ago, the RAND Corporation — one of America’s oldest think tanks — published a report titled, Truth Decay: A Threat To Policymaking and Democracy. Though I’ve read it only once and probably don’t yet grasp all its nuances, I think it’s very relevant to our world today and want to bring it to your attention.

You can find the full report here. Here are some of the highlights. The items in quotation marks are direct quotes from the report; the comments that follow are my own opinions.

What Is Truth Decay?

“Heightened disagreement about facts and analytical interpretations of data” — we used to disagree about opinions. Now we increasingly disagree about facts.

“The blurred line between opinion and fact” — we used to separate “news” from “editorial”. It was easy to tell which was which. Today, the two are commonly mixed together.

“Increased volume and influence of opinion and personal experience across the communication landscape” — our channels of mass communication used to be dominated by factual reporting with some clearly labeled opinion pieces mixed in. Today the reverse seems to be true.

“Diminished trust in formerly respected institutions as sources of factual information” — we used to have Walter Cronkite. Now we don’t.

Why Has The Truth Decayed?

“Characteristics of human information processing, such as cognitive biases” — these are the same biases that we’ve been studying this quarter.

“Changes in the information system, such as the rise of 24-hour news coverage, social media, and dissemination of disinformation and misleading or biased information” — we used to sip information slowly. Now we’re swamped by it.

“Competing demands on the educational system that challenge its ability to keep pace with information system changes” — is our educational system teaching people what to think or how to think?

“Polarization in politics, society, and the economy” — we’ve sorted ourselves out so that we only have to meet and interact with people — and ideas — that we agree with.

It’s a bracing read and I recommend it to you.
