Mashup thinking is an excellent way to develop new ideas and products. Rather than thinking outside the box (always difficult), you select ideas from multiple boxes and mash them together. Sometimes, nothing special happens. Sometimes, you get a genius idea.
Let’s mash up self-driving vehicles and drones to see what we get. First, let’s look at the current paradigms:
Self-driving vehicles (SDVs) include cars and trucks equipped with special sensors that can use existing public roadways to navigate autonomously to a given destination. The vehicles navigate a two-dimensional surface and should be able to get humans or packages from Point A to Point B more safely than human-driven vehicles. Individuals may not buy SDVs the way we have traditionally bought cars and trucks. We may simply call them when needed. Though the technology is rapidly improving, the legal and ethical systems still require a great deal of work.
Drones navigate three-dimensional space and are not autonomous. Rather, specially trained pilots fly them remotely. (They are often referred to as Remotely Piloted Aircraft, or RPAs). The military uses drones for several missions, including surveillance, intelligence gathering, and attacks on ground targets. To date, we haven’t heard of drones attacking airborne targets, but it’s certainly possible. Increasingly, businesses are considering drones for package delivery. The general paradigm is that a small drone will pick up a package from a warehouse (perhaps an airborne warehouse) and deliver it to a home or office or to troops in the field.
So, what do we get if we mash up self-driving vehicles and drones?
The first idea that comes to mind is an autonomous drone. Navigating 3D space is actually simpler than navigating 2D space – you can fly over or under an approaching object. (As a result, train traffic controllers have a more difficult job than air traffic controllers). Why would we want self-flying drones? Conceivably they would be more efficient, less costly, and safer than the human-driven equivalents. They also have a lot more space to operate in and don’t require a lot of asphalt.
We could also change the paradigm for what drones carry. Today, we think of them as carrying packages. Why not people, just like SDVs? It shouldn’t be terribly hard to design a drone that could comfortably carry a couple from their house to the theater and back. We’ll be able to whip out our smart phones, call Uber or Lyft, and have a drone pick us up. (I hope Lyft has trademarked the term Air Lyft).
What else? How about combining self-flying drones with self-driving vehicles? Today’s paradigm for drone deliveries is that an individual drone goes to a warehouse, picks up a package, and delivers it to an individual address. Even if the warehouse is airborne and mobile, that’s horribly inefficient. Instead, let’s try this: a self-driving truck picks up hundreds of packages to be delivered along a given route. The truck also has dozens of drones on it. As the truck passes near an address, a drone picks up the right package, and flies it to the doorstep. We could only do this, of course, if drones are autonomous. The task is too complicated for a human operator.
I could go on … but let’s also investigate the knock-on effects. If what I’ve described comes to pass, what else will happen? Here are some challenges that will probably come up:
These are intriguing predictions as well as troublesome challenges. But the thought process for generating these ideas is quite simple – you simply mash up good ideas from multiple boxes. You, too, can predict the future.
How can you tell when humans are lying? Their lips move.
It’s not necessarily the case that we lie with the intent to deceive or defraud. It’s just that many of the stories that come out of our mouths simply aren’t true. You can call it non-malicious fabricated storytelling. More generally, it’s called confabulation.
Neurologists originally thought confabulation resulted from mental deficits caused by injuries or strokes or dementia. People with such deficits might tell entirely cohesive stories that were simply not true. Some people might recall old memories and assume that they were fresh and current. Others might invent stories to explain their physical limitations like blindness or paralysis. In Oliver Sacks’s well-known book, The Man Who Mistook His Wife for a Hat, the man in question misidentified not only his wife but almost everyone he met.
The more we study confabulation, the more we recognize that “normal” people do it as well. We all have an innate desire to connect the dots. We want to explain how things happen and why. We want to be able to say that X caused Y and – if it was true in the past – it should also be true in the future.
The more we can construct effective stories about the past, the more we believe we can control the future. This gives us a sense of confidence and security. But, of course, we can’t predict the future. (Experts are especially bad at it). I wonder if our inability to predict the future doesn’t result from confabulation. We confabulate the past and, therefore, the future.
Here’s a little thought experiment. If you see five similar objects arrayed left to right, which one do you prefer? In the absence of distinguishing information, people tend to pick the object on the right. Richard Nisbett and Timothy Wilson used this bias in an early study of “normal” confabulation. The study simulated a consumer survey and asked subjects to pick an item of apparel from a left-to-right array of four items that were essentially the same.
Nisbett and Wilson noted that, “… the right-most object in the array was heavily over chosen.” This was expected; it’s normal behavior. However, when the researchers asked people why they chose a particular object, the subjects gave all kinds of answers that had nothing to do with position. In other words, they were confabulating even under perfectly normal conditions.
Similarly, I have a story that explains my career. I have an explanation for why I was promoted in a certain case and not in another. I can explain how I got from Job A to Job G in a very linear, logical fashion. But do I really know these things? Am I really sure what caused what? Do I really know why the boss made a given decision? No, I don’t. But I can make up a good story.
The only way to prove cause-and-effect is through an experiment. I would have to replicate myself and run the two versions of me in parallel. I obviously can’t do that, so I’ve made up a convenient story. It seems plausible; it works for me. But is it true? Even I don’t know.
Confabulation happens before and beneath our consciousness. Nisbett and Wilson cite George Miller: “It is the result of thinking, not the process of thinking, that appears spontaneously in our consciousness.” We can’t readily control confabulation because we don’t know it’s happening. We only see the results.
When you ask someone a question like, Why did you choose your career? (or your spouse, or your suit, etc.), you’ll likely get a plausible answer. But is it true? Even the speaker can’t know for sure. Can it help us understand the past and predict the future? Probably not.
For a good overview of confabulation, see Helen Phillips’ article in New Scientist.
I like to think about the future. So, in the past, I’ve written about scenario planning, prediction markets, resilience, and expert predictors. What have I learned in all this? Mainly, that experts regularly get it wrong. Also, that experts move in herds — one expert influences another and they begin to mutually reinforce each other. In the worst cases, we get manias, whether it’s tulip mania in 17th century Holland or mortgage mania in 21st century America. Paying ten times your annual income for a tulip bulb in 1637 is really not that different from Bank of America paying $4 billion for Countrywide.
I’ve also learned that you can (sometimes) make a lot of money by betting against the experts. The clearest description of “shorting” the experts is probably The Big Short by Michael Lewis.
I’m also forming the opinion that the reason we call people “experts” is that they study problems closely. They’re analysts; they study the details. Like college professors, they know a lot about a little. That may make them interesting dinner partners (or not) but does it make them better predictors of the future?
I’m thinking that the experts’ “close read” makes them worse predictors of the future, not better. Why? Because they go inside the frame of the problems. They pursue the internal logic of the story. Studying the internal logic of a situation can be useful but, as I pointed out in a recent article, it can also lead you astray. In addition to the internal logic, you need to step outside the frame and study the structure of the problem. If you stay inside the frame, you may well understand the internal dynamics of the issue. But, in many cases, the external dynamics are more important.
The case that I’ve been following is the cost of healthcare in the United States. The experts all seem to be pointing in the same direction: healthcare costs will continue to skyrocket and ultimately bankrupt the country. The experts are pointing in one direction so, as in the past, I think it’s useful to look in the other direction and predict that healthcare costs won’t climb as rapidly as in the past or may even go down.
Here are two interesting pieces of evidence that suggest that the experts may be wrong. The first is a report from the Altarum Institute which notes that 2012 represented the “…fourth consecutive year of record-low growth [in healthcare spending] compared to all previous years in the 50-plus years of official health spending data.” Granted, there’s a debate as to whether the slowing growth is caused by the recession or by structural changes but the experts (yikes!) suggest that at least some of the shift is structural.
The second piece of evidence is a report by Matthew Yglesias in Slate that documents the dramatic decline in spending for healthcare construction. Spending to construct new hospitals dropped precipitously in 2008 and has stayed low, even during the recovery. As Yglesias points out, construction spending is “the closest thing we have to a real-time forecast of what the future is going to look like.”
So, are the experts wrong? As Zhou Enlai liked to say, it’s too soon to tell. But let’s keep an eye on them. Otherwise, we could be framed.
In the early 1990s, call centers were popping up around the United States like mushrooms on a dewy morning. Companies invested millions of dollars to improve customer service via well-trained, professional operators in automated centers. Several prognosticators suggested that the segment was growing so quickly that every man, woman, and child in the United States would be working in a call center by, oh say, 2010.
Of course, it didn’t happen. The Internet arrived and millions of customers chose to serve themselves. Telecommunication costs plummeted and many companies moved their call centers offshore. Call centers are still important but not nearly as pervasive in the United States as they were projected to be.
Now we’re faced with similar projections for health care costs. If current trends continue, prognosticators say, health care will consume an ever-increasing portion of the American budget until everything simply falls apart. Given our experience with other “obvious trends,” I think it behooves us to ask the opposite question: what if health care costs go down?
Why would health care costs go down? Simply put — we may just cure a few diseases.
Why am I optimistic about potential cures? Because we’re making progress on many different fronts. For instance, what if obesity isn’t a social/cultural issue but a bacteriological issue? That’s the upshot of a recent article published in The ISME Journal. To quote: “Gram-negative opportunistic pathogens in the gut may be pivotal in obesity…” (For the original article, click here. For a summary in layman’s terms, click here). In other words, having the wrong bacteria in your gut could make you fat. Neutralizing those bacteria could slim down the whole country and reduce our health care costs dramatically.
And what about cancer? Apparently, we’re learning how to “persuade” cancer cells to kill themselves. I’ve spotted several articles on this — click here, here, here, here, and here for samples. Researchers hope that training cancer cells to commit suicide could cure many cancers in one fell swoop rather than trying to knock them off one at a time.
Of course, I’m not a medical doctor and it’s exceedingly hard to predict whether or when these findings might be transformed into real solutions. But I am old enough to know that “obvious predictions” often turn out to be dead wrong. In the late 1980s, experts predicted that our crime rate would spike to new highs in the 1990s. Instead, it did exactly the opposite. Similarly, we expected Japan to dominate the world economy. That didn’t happen either. We expected call centers to dominate the labor market. Instead, demand shifted to the Internet.
In the case of health care, it’s hard to make specific predictions. But a good strategist will always ask the “opposite” question. If the whole world is predicting that X will grow in significance, the strategist will always ask, “what if the reverse is true?” You may not be able to predict the future but you can certainly prepare for it.
In one of my classes at the University of Denver, I try to teach my students how to manage technologies that constantly morph and change. They’re unpredictable, they’re slippery, and managing them effectively can make the difference between success and failure.
The students, of course, want to predict the future so they can prepare for it. I try to convince them that predicting the future is impossible. But they’re young. They can explain the past, so why can’t they predict the future?
To help them prepare for the future — though not predict it — I often teach the techniques of scenario planning. You tell structured stories about the future and then work through them logically to understand which way the world might tilt. The technique has common building blocks, often referred to by the acronym PESTLE: your stories need to incorporate political, economic, societal, technical, legal, and environmental factors. This helps ensure that you don’t overlook anything.
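If you like to keep your scenario notes on a computer, the PESTLE checklist is easy to mechanize. Here’s a minimal sketch (my own illustration, not part of any standard scenario-planning tool — the scenario draft and function names are hypothetical): each scenario is a set of notes keyed by PESTLE dimension, and a small function flags the dimensions a draft story has overlooked.

```python
# The six PESTLE building blocks, in their conventional order.
PESTLE = ("political", "economic", "societal", "technical", "legal", "environmental")

def missing_dimensions(scenario_notes):
    """Return the PESTLE dimensions a draft scenario hasn't addressed yet."""
    return [d for d in PESTLE if not scenario_notes.get(d)]

# A hypothetical draft scenario with notes for only three dimensions.
draft = {
    "political": "trade policy tightens",
    "economic": "slow growth, cheap capital",
    "technical": "inexpensive sensors everywhere",
}

print(missing_dimensions(draft))  # → ['societal', 'legal', 'environmental']
```

The point isn’t the code, of course — it’s the discipline: a story only counts as a scenario once every dimension has at least been considered, even if the note says “no significant effect.”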
I’ve used scenario planning a number of times and it has always helped me think through situations in creative ways – so it seems reasonable to teach it. To prepare for a recent class, I re-read The Art of the Long View by Peter Schwartz. I found it on one of my dustier bookshelves and discovered it was the 1991 edition. While I remembered many of the main points, I was surprised to find a long chapter titled, “The World in 2005: Three Scenarios”. Here was a chance to see how well one of the pioneers of scenario planning could prepare us for the future.
In sum, I was quite disappointed. The main error was that each scenario vastly overestimated the importance of Japan on the world stage in 2005. In a way, it all makes sense. The author was writing in 1991, when we all believed that Japan might just surpass every other economy on earth. Of course, he would assume that Japan would still dominate in 2005. Of course, he was wrong.
So what can we learn from this? Two things, I think:
I’ll continue to teach scenario planning in the future. After all, it’s a good template for thinking and planning. I’ll also be able to provide a very good example of how it can all go wrong.