RAND’s Truth Decay

Truth Decay in action.

A few days ago, the RAND Corporation — one of America’s oldest think tanks — published a report titled Truth Decay: A Threat To Policymaking and Democracy. Though I’ve read it only once and probably don’t yet grasp all its nuances, I think it’s very relevant to our world today and want to bring it to your attention.

You can find the full report here. Here are some of the highlights. The items in italics are direct quotes. The items not in italics are my comments and opinions.

What Is Truth Decay?

  • Heightened disagreement about facts and analytical interpretations of data — we used to disagree about opinions. Now we increasingly disagree about facts.
  • The blurred line between opinion and fact — we used to separate “news” from “editorial”. It was easy to tell which was which. Today, the two are commonly mixed together.
  • Increased volume and influence of opinion and personal experience across the communication landscape — our channels of mass communication used to be dominated by factual reporting with some clearly labeled opinion pieces mixed in. Today the reverse seems to be true.
  • Diminished trust in formerly respected institutions as sources of factual information — we used to have Walter Cronkite. Now we don’t.

Why Has The Truth Decayed?

  • Characteristics of human information processing, such as cognitive biases — these are the same biases that we’ve been studying this quarter.
  • Changes in the information system, such as the rise of 24-hour news coverage, social media, and dissemination of disinformation and misleading or biased information — we used to sip information slowly. Now we’re swamped by it.
  • Competing demands on the educational system that challenge its ability to keep pace with information system changes — is our educational system teaching people what to think or how to think?
  • Polarization in politics, society, and the economy — we’ve sorted ourselves out so that we only have to meet and interact with people — and ideas — that we agree with.

It’s a bracing read and I recommend it to you.

Egotism and The Awe Drought

Awesome.

When did you last get goose bumps as you contemplated something magnificent? When did you last feel like a small thread in an eternal fabric? When were you last awestruck?

I ask my students these questions and most everyone can remember feeling awestruck. My students get a bit dreamy when they describe the event: the vastness of a starry night or the power of a great thunderstorm. It makes them feel small. It fills them with wonder. They’re awestruck.

But not recently. The events they describe took place long ago. My students (who are mainly in their mid-30s) can reach back years to recall an event. But I can’t think of a single example of a student who was awestruck just last week. It was always the distant past.

I’m starting to believe that we’re in an awe drought. Though we say “awesome” frequently, we don’t experience true skin-tingling awe very often. Perhaps we’ve explained the world too thoroughly. There aren’t many mysteries left. Or perhaps we’re just too busy. We don’t spend much time contemplating the infinite. We’d rather do e-mail.

My subjective experience has some academic backing as well. Paul Piff and Dacher Keltner make the case that “our culture today is awe-deprived.” (Click here). They also point out that people who experience awe are more generous to strangers and more willing to sacrifice for others. An awe drought has consequences.

An awe drought might also explain the growing egotism in today’s world. Awe is the natural enemy of egotism. When you’re awestruck, you don’t feel like the center of the universe. Quite the opposite – you feel like a tiny speck of dust in a vast enterprise.

Awe holds egotism in check. If awe is declining, then egotism should be booming. And indeed, it is. A number of academic studies that trace everything from song lyrics to State-of-the-Union addresses suggest that egotism is growing – at least in America and probably elsewhere as well. (Click here, here, here, and here for examples).

What causes what? Does a lack of awe spur greater egotism? Or does growing egotism stifle awe? Or is there some third variable in play? It’s hard to sort out and the answer may not be clear-cut one way or the other. As a practical matter, however, awe is easier to experiment with than is egotism. It’s hard to imagine that we could just tell people to stop being egotistic and get any meaningful results. On the other hand, a campaign to stimulate awe-inspiring experiences might just work. If we can put a dent in the awe drought, we might be able to sort out the impact on egotism.

So, let’s seek out awe-inspiring experiences and let’s encourage our friends to do the same. Let’s see what happens. I know that I, for one, would love to say “awesome” and actually mean it.

Digital Taylorism and Dumb Humans

I’m your new manager.

Years ago, I heard Jaron Lanier give a lecture that included a brief summary of the Turing Test. Lanier suggested that there are two ways that machines might pass Turing’s test of artificial intelligence. On the one hand, machines could get smarter. On the other hand, humans could get dumber.

I wonder if humans-getting-dumber is where we’re headed with digital Taylorism.

Frederick Taylor, who died just over 100 years ago, was the father of scientific management or what we would now call industrial engineering. Working in various machine shops in Philadelphia in the late 19th century, Taylor studied the problems of both human and machine productivity. In Peter Drucker’s words, Taylor “was the first man in recorded history who deemed work deserving of systematic observation and study.” His followers included both Henry Ford and Vladimir Lenin.

The promise of the original Taylorism was increased productivity and lower unit costs. The gains resulted from fundamental changes in human work habits. Taylor-trained managers, for instance, broke complex tasks into much simpler sub-tasks that could more easily be taught, measured, and monitored. As a result, productivity rose dramatically but work was also dehumanized.

According to numerous commentators, we are today seeing a resurgence of Taylorism in the digital workplace. With digital tools and the Internet of Things, we can more carefully and closely monitor individual workers. In some cases, we no longer need humans to manage humans. Machines can apply scientific management to workers better than humans can. (Click here and here for more detail).

Digital Taylorism has spawned an array of devices to measure work in ever-more-granular detail. Sociometric badges are “…wearable sensing devices designed to collect data on face-to-face communication and interaction in real time.” They could deliver “…a dramatic improvement in our understanding of human behavior at unprecedented levels of granularity.”

More recently, Amazon patented a wristband that can monitor a warehouse worker’s every movement. The wristband can track where a worker’s hands are in relation to warehouse bins to monitor productivity. It can also use haptic feedback – basically buzzes and vibrations – to alert workers when they make a mistake. (Click here, here, and here for more detail).
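
To make that concrete, here is a minimal sketch of the kind of monitoring logic the coverage describes, written in Python. The coordinates, the distance threshold, and the haptic_buzz callback are all hypothetical; the articles describe the behavior (track hands relative to bins, buzz on a mistake), not an actual implementation.

    import math

    # Hypothetical sketch of the monitoring logic described in the wristband
    # coverage: compare the worker's hand position to the assigned bin and
    # trigger haptic feedback when the hand strays too far. Every name, number,
    # and coordinate below is invented for illustration.

    WRONG_BIN_THRESHOLD_CM = 20.0  # assumed tolerance before the band vibrates

    def check_pick(hand_pos, assigned_bin_pos, haptic_buzz):
        """Return True if the hand is at the assigned bin; otherwise buzz."""
        if math.dist(hand_pos, assigned_bin_pos) > WRONG_BIN_THRESHOLD_CM:
            haptic_buzz()  # nudge the worker toward the correct bin
            return False
        return True

    # Example: the hand is about 61 cm from the assigned bin, so the band buzzes.
    check_pick(hand_pos=(120, 40, 95),
               assigned_bin_pos=(60, 30, 100),
               haptic_buzz=lambda: print("bzzzt"))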

Could digital Taylorism fulfill Lanier’s suggestion that machines will match human intelligence not because they get smarter but because humans get dumber? Could it make humans dumber?

It’s hard to say but there is some evidence that we did indeed get dumber the last time we fundamentally altered our work habits. Roughly 10,000 years ago, human brains began shrinking. Prior to that time, the average human brain was roughly 1,500 cubic centimeters. Since then, our brains have shrunk to about 1,350 cubic centimeters. As one observer points out, the amount of brain matter we’ve lost is roughly the size of a tennis ball.
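
As a quick sanity check on that comparison (my own back-of-the-envelope arithmetic, not a figure from the research), the lost volume and a tennis ball really are in the same range:

    import math

    # Reported average human brain volumes, in cubic centimeters
    ancient_brain_cc = 1500   # roughly 10,000+ years ago
    modern_brain_cc = 1350    # today
    lost_cc = ancient_brain_cc - modern_brain_cc          # 150 cc

    # A regulation tennis ball is about 6.7 cm in diameter
    radius_cm = 6.7 / 2
    tennis_ball_cc = (4 / 3) * math.pi * radius_cm ** 3   # about 157 cc

    print(f"Volume lost: {lost_cc} cc; tennis ball: {tennis_ball_cc:.0f} cc")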

What happened? A leading hypothesis suggests that our brains began shrinking when we transitioned from hunter-gatherer societies to agricultural societies. Hunter-gatherers live by their wits and need big brains. Farmers don’t. As our work changed, so did our brains.

Could digital Taylorism lead to a new wave of brain shrinkage? It’s possible. In a previous article, I asked: what should we do when robots replace us? Perhaps a better question is: what should we do when robots manage us?

What Happens When You Anesthetize A Plant?

Think I could do this if I weren’t conscious?

Some of the best questions about the world around us are those that seem obvious once they’re asked … but are anything but obvious until they’re asked.

I recently saw a spate of articles about the use of anesthetics on plants. We’ve known – at least since 1846 – that compounds like ether can anesthetize humans. Breathe a little in and you lose your awareness of the world around you while maintaining core functions like breathing, heartbeat, and so forth. Simply put, you lose consciousness.

But what would happen if you applied anesthetics to plants, which don’t have hearts or lungs or spinal cords? Would plants also lose consciousness? That would, of course, imply that they are conscious.

It’s an interesting question and — being interested in the history of questions — I wondered if it had ever been asked before. As it happens, a French scientist named Claude Bernard was one of the first to experiment with anesthetics and plants. In 1878, he published Leçons sur les phénomènes de la vie, communs aux animaux et aux végétaux. (You can find the original text here. I discovered it via a 2014 article found here).

Through a series of ingenious experiments, Bernard found that anesthetics such as ether affect plants in very specific ways. For instance:

  • Movement – some plants will recoil when their leaves are touched. Under anesthesia, plants lose this ability to move but regain it when the anesthesia is removed. Other movements, however, such as those triggered by light, are not affected. The plant can still move but it doesn’t respond to physical stimuli.
  • Germination – under anesthesia, the germination process is interrupted but restarts when the anesthesia is removed.
  • Photosynthesis – anesthesia interrupts the photosynthesis process without interrupting the respiration process. Again, photosynthesis restarts when the anesthesia is removed.

In December 2017, researchers published an article in the Annals of Botany that effectively updates Bernard’s experiments. The researchers came to similar conclusions as did Bernard while using a variety of different anesthetics “that have no structural similarities.” They conclude that plants can be an effective research model “to study general questions related to anaesthesia, as well as to serve as a suitable test system for human anaesthesia.”

I find all this fascinating but I also wonder why it has taken almost 140 years to update Bernard’s original research. I think it has to do with our assumptions. We assume that animals – especially animals with spinal cords – are so fundamentally different from plants that we don’t think about comparing them. Carl Linnaeus laid down the rules: there are two kingdoms – the animal kingdom and the vegetable kingdom. Never the twain shall meet. Because of the strict delineation between the two, we haven’t been asking obvious comparative questions, perhaps to our detriment.

It occurs to me that the delineation between animal and vegetable is similar to Descartes’s delineation between mind and body. For Descartes, mind and body were two different universes. Only recently have we discovered how entwined they are. We’re now asking useful and insightful questions about how one influences the other.

I wonder what other useful questions we’re not asking because of our assumptions. Perhaps it’s time to create an encyclopedia of assumptions and begin testing them one by one.

I also think we need to start asking different questions. So many of our questions are about the differences between things. What’s the difference between mind and body? Between men and women? Between different ethnic groups? Claude Bernard, on the other hand, asked about the similarities between animals and plants. Perhaps we should follow his lead. If we asked about our similarities, we might discover that we’re connected in much more profound ways than we imagine.

(The New York Times has a good article on the recent plant experiments, which includes time-lapse photography of plants under the effect of anesthetics. You can find it here.)

Jewelry and Perverse Incentives

Careful! It’s a perverse incentive!

With a perverse incentive, a company incents its employees to behave in ways that are contrary to the company’s interests. The company, in other words, pays employees to do things that reward the employee but prevent the company from reaching its stated goals. (See here and here for more detail).

Why would a company do that? Sometimes the stated goals of the company are not its actual goals. For instance, the company may say that it aims to increase customer satisfaction. That’s nice window dressing but the real goal may be to “make the numbers”. So, the company may incent its sales force to act in ways that make the numbers even if such behavior also reduces customer satisfaction. In this example, studying the perverse incentive can help us understand what the company’s real goals are. This seemed to be the case at Wells Fargo, for instance.

In other cases, one business process conflicts with another. Perhaps each process is perfectly fine when running in isolation. When they run in tandem, however, they create perverse incentives. A good example comes from Signet Jewelers, the owner of several retail jewelry chains, including Jared’s, Kay Jewelers, and Zales. (I discovered this case in the business pages of the New York Times. Click here for the original article.)

The Signet situation involves two different business processes: sales and financial credit. By combining the two, Signet created a perverse incentive. Each business process works fine in and of itself. It’s the combination that misaligns the incentives. Here are the two processes:

  1. You’re a banker who makes loans to individuals and companies. Your goal is to make profitable loans that are repaid in a timely manner. Your compensation is based on this. If you make a lot of good loans, your compensation goes up. If you make risky loans that aren’t paid back, your compensation goes down. Your incentives line up nicely with the bank’s goals.
  2. You’re a manager at a retail jewelry chain. You aim to sell more jewelry than you did last month or quarter or year. If you do, your compensation goes up. If you don’t, it goes down. Again, your incentives line up nicely with your company’s goals: to sell more jewelry.

Now let’s change the scenario. You’re now the manager of a retail jewelry store that also offers loans to its customers to enable them to buy more jewelry. Your compensation is based on how much jewelry you sell.

It sounds like a good idea. So, what’s wrong with this picture? To sell more jewelry, you have a strong incentive to give loans to non-credit-worthy individuals. You make the sale, but a relatively high proportion of the loans you make go bad and are not repaid. The company either writes off the loans or spends a lot of money on debt collectors trying to recover them. The net result is often a negative: you sell more but also lose more.
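
To see the mismatch in numbers, here is a toy model in Python. Every figure is invented purely to illustrate the mechanism – these are not Signet’s actual margins, commissions, or default rates. The point is that the manager’s payoff never moves, no matter how risky the loans become, while the company’s expected profit can swing negative.

    # Toy model of a sales-plus-credit perverse incentive.
    # Every number here is hypothetical, chosen only to illustrate the mechanism.

    SALE_PRICE = 1000        # price of the jewelry sold on store credit
    GROSS_MARGIN = 0.30      # company's margin on the sale
    COMMISSION_RATE = 0.05   # manager is paid on sales volume alone

    def outcomes(default_rate):
        """Return (manager's commission, company's expected profit) for one credit sale."""
        commission = COMMISSION_RATE * SALE_PRICE
        expected_loss = default_rate * SALE_PRICE        # loans that are never repaid
        company_profit = GROSS_MARGIN * SALE_PRICE - expected_loss - commission
        return commission, company_profit

    for rate in (0.05, 0.15, 0.35):
        commission, profit = outcomes(rate)
        print(f"default rate {rate:.0%}: manager earns {commission:.0f}, "
              f"company expects {profit:+.0f}")

The manager earns the same commission in every row; only the company’s expected profit changes, turning negative once defaults climb high enough. That is the perverse incentive in miniature: the compensation scheme rewards the sale regardless of whether the loan behind it is ever repaid.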

The Signet example is just one of many. Once you’re familiar with the concept of perverse incentives, you can find them most everywhere, including the morning paper.

 
