Thirty years ago, I was a product manager for a startup company that created high-performance, multiprocessing minicomputers. Powerful, scalable, and based on open standards, they offered exceptional price/performance. In 1988, Electronics magazine gave its Computer of the Year award to the flagship model.
As we introduced the system, we described the “ideal” customer: a medium-to-large organization that used Unix-based systems and ran large database applications, especially Oracle applications. We trained the sales force, produced some modest direct mail campaigns, and launched.
Then reality set in. In the first three months, we sold about 45 machines to some 30 different organizations. We gathered data about our new customers and looked for correlations that would help us target prospective customers more precisely. We found nothing — no patterns in terms of size, SIC code, geography, application, and so on. The data were almost random.
We were stumped. So, we decided to interview the key decision maker in each account. We created an interview guide and fanned out to visit customers.
After our visits, we dug into our findings. Again, we found no useful patterns in the demographic data. Then we started describing the key decision makers. Who were they? Why did they decide on us?
Most of the decision makers were men in their early thirties who had recently been promoted to a position typically described as VP, Data Processing. They replaced an older person who had held the same position for more than ten years. One of our marketers had a flash of insight: “It’s almost like the decision maker is saying, ‘I’m the new sheriff in town. We’re going to do things my way. This is one of my first big decisions … and we’re going to buy a hot new machine from a startup company. I’m going to make my mark.’”
It turned out to be a very accurate description. Nominally, our customers were buying our machines to run large applications. But psychology was perhaps more important. We estimated that roughly 60% of our customers fit the “new sheriff” profile.
We decided to market specifically to new sheriffs. We trawled through organization profiles and identified those that had a new VP of Data Processing. We sent each new sheriff a fairly intense mail campaign coupled with calls by our local sales rep. The campaign succeeded rather well. From the time of the launch, we grew to $300 million in revenue in about two years.
I didn’t know it at the time, but we were practicing an art that today is called the “jobs to be done” theory of innovation. (Click here for a good introduction). Developed by Clayton Christensen and his colleagues, the theory holds that demographic information doesn’t reveal why a person chooses to purchase a new product or service. If we misunderstand the job to be done, our innovations will miss the mark.
Our startup company, for instance, positioned itself around big machines for big databases. We wanted to offer ever bigger, pricier machines. The new sheriff profile, however, changed our thinking. To get in the door, we needed to make it easy for the new sheriff to buy something on his own authority. So, we introduced an entry-level machine priced just below a typical VP signature limit.
Similarly, think about why men buy pajamas. We might think they simply want to stay warm. But men in America typically don’t buy pajamas until they have a daughter who is three years old. Their motivation is not to stay warm but to preserve their modesty. If we misunderstand that, we’ll produce far too many cozy, warm, flannel pajamas that men will never buy.
In my experience, good marketers and salespeople use the jobs-to-be-done method naturally and intuitively. They’re good observers and naturally ask a basic question: why do people buy these products? They dig into the data but, more importantly, they observe how people behave and ask insightful questions. The management guru Ted Levitt was a natural at this. He noted that people don’t buy gasoline for their cars. Rather, they buy the right to continue driving.
The jobs-to-be-done theory suggests that the key to innovation is sociology, not technology. Do you want your company to be more innovative? It’s time to add more marketers and salespeople – and maybe a sociologist and anthropologist – to your development team.
When I started this website, I didn’t need to ask anyone’s permission. An enabling platform was already in place. The platform consisted of the Internet, the World Wide Web, and many pieces of open source software. All I needed to do was download the latest version of WordPress, rent some space on a web server, and I was off to the races.
The key element, of course, is an open, accessible platform that facilitates innovation. In my case, the platform is a collection of Internet-based technologies. Other permissionless innovation platforms include the interstate highway system, the human genome project, and the public school system. The trick is to provide a platform that anyone can use without prior permission.
I thought about this as I was reading up on Bitcoin. I’ve written about Bitcoin as a currency (here and here). One of my intrepid readers, John Ball, suggested that I’m probably missing the essence of what Bitcoin is all about. John, who understands the technology better than I do, suggests that Bitcoin is a permissionless innovation platform. Here’s what he has to say:
“… Bitcoin has little to do with currency, and everything to do with a protocol and trusted ledger. I suspect we may see multiple ‘digital currencies’ just as we see multiple email systems. However, the concept of peer to peer transactions freed from the tolls of large intermediaries like Visa, Western Union, and First Data, is here to stay and will continue to grow.” (John’s entire comment is here. Just scroll down.)
With Visa, Western Union, or First Data, we have to ask permission and pay fees to use their proprietary system. Similarly, in the early days of software, developers had to ask permission and pay fees to run their programs on proprietary computers. With the advent of open operating systems like Unix, software developers no longer had to ask permission. They simply wrote to the open software platform. As a result, software blossomed and hardware became a commodity.
As John suggests, the same thing seems to be happening in the world of crypto-currencies. John predicts that we’ll see multiple digital currencies. He’s right. In fact, it’s already happening. You can use another permissionless innovation platform called Google to find: Peercoin, Dogecoin, Namecoin, and others. As far as I can tell, all of these are built on the Bitcoin platform. If I had better programming skills, I could create my own currency. Let’s call it Travis Tender.
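John’s phrase “a protocol and trusted ledger” is the heart of the matter, and the core idea can be sketched in a few lines of Python. To be clear, this is a toy illustration, not the actual Bitcoin protocol: it omits mining, digital signatures, and the peer-to-peer network, and every name in it (including my imaginary Travis Tender transactions) is hypothetical. Each block records the hash of the block before it, so any tampering with history becomes detectable:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents (excluding its own hash)."""
    payload = {"transactions": block["transactions"], "prev_hash": block["prev_hash"]}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    """Bundle transactions with a link (hash) to the previous block."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    """A chain is valid if every block's contents match its hash
    and every block points at the true hash of its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False  # block contents were altered after hashing
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # link to the previous block is broken
    return True

# A tiny two-block ledger for the imaginary "Travis Tender" currency.
genesis = make_block([{"from": "mint", "to": "alice", "amount": 50}], "0" * 64)
block2 = make_block([{"from": "alice", "to": "bob", "amount": 20}], genesis["hash"])

print(chain_is_valid([genesis, block2]))  # True

# Rewriting history breaks the chain: the altered block no longer
# matches its recorded hash.
genesis["transactions"][0]["amount"] = 500
print(chain_is_valid([genesis, block2]))  # False
```

This hash chaining is what lets strangers trust the ledger without trusting each other, or an intermediary like Visa: changing any past transaction invalidates every block that follows it.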
While Bitcoin (the currency) has certainly had some PR mishaps lately, Bitcoin (the platform) is just starting to blossom. In fact, we might say that Bitcoin is to currency as Unix is to proprietary computing. If so, we’re about to see a wave of innovation that will make the original Bitcoin seem quaint.
I’ve enjoyed and admired Apple products since I got my first Macintosh in the mid-1980s. Apple products are intuitive; they’re designed for people rather than technologists. I think of the company as innovative and dynamic.
On the other hand, I often hear that our technology and pharmaceutical companies would be much more innovative if the government would just get out of the way. Critics claim that governments are meddlesome nuisances.
Not so, argues Mariana Mazzucato in her new book, The Entrepreneurial State. A professor at the University of Sussex, Mazzucato documents the government-funded research that enabled many of the great leaps forward in information technology and pharmaceuticals.
Mazzucato argues that the state is the true innovator, willing to invest in high-risk endeavors that can affect all aspects of society. By contrast, private companies are relatively non-innovative; they simply take the results of governmental research and commercialize them. In Mazzucato’s view, the government bears the risk while private companies take the profits.
In an extended example, Mazzucato analyzes Apple’s iPod, iPad, and iPhone and the technologies they incorporate. She identifies a dozen embedded technologies and traces the origin of each. In each case, the technology originated in government (or government-funded) projects.
Mazzucato documents government investments from around the world. For instance, we wouldn’t have the iPod if not for German and French investments in giant magnetoresistance (GMR) that enables tiny disk drives. In the United States, the multi-touch screen was developed at the University of Delaware (my alma mater) with funding from the NSF and the CIA.
Mazzucato argues that we do ourselves a disservice by denigrating governments as bumbling meddlers. Private companies invest for the short-term and are relatively risk averse. Governments can look much farther into the future and can accept much less favorable risk/reward ratios. As I’ve argued before, governments can create fundamental platforms that many entrepreneurs can capitalize on.
Mazzucato struggles with but doesn’t quite resolve the fundamental issue of fairness. Should Apple pay the government back for all the technologies it has capitalized on? One view is that Apple already reimburses the government through taxes. However, the recent ruckus about Apple’s ability to avoid taxes suggests that the reimbursement may not be full or fair. Perhaps Mazzucato can develop a mechanism that will help reimburse governments adequately for fundamental breakthroughs.
As Mazzucato points out, we tend to tell only half the story. We point to the successes of private industry and the failures of the government. If half the story becomes the whole story, we will underfund government research and drive away talented researchers. We won’t take the big risks but only the incremental, short-term risks that private capital can afford. For all of us who love the iPhone, that would be a shame.
(Click here to watch Professor Mazzucato give a TEDx talk).
My father, who was the first in our family to go to college, went to a land-grant university (Texas A&M). My sister went to a land-grant university (Clemson). I went to a land-grant university (Delaware). My wife went to a land-grant university (Purdue). My wife’s parents went to a land-grant university (Wisconsin).
Abraham Lincoln set up the land-grant system through the Morrill Act of 1862. The federal government granted land to each state. The state used the land to set up a college to teach the practical arts, including agriculture, engineering, and military science.
The system worked. Land-grant colleges became social elevators that allowed lower- and middle-class kids to pursue higher education affordably. They also became engines of innovation, fueling an innovation boom that catapulted the United States to leadership positions in multiple industries in the late 19th century. We’re still riding the echo of that boom. I’ve often wondered about the return on the land-grant investment. The economic value created by the system must be orders of magnitude higher than the original cost.
The genius of the system is that it’s a platform, not a solution. For instance, Lincoln didn’t identify the inefficient processing of cotton as a national problem and jump to the conclusion that the government should invest in the cotton gin. Instead, he created a platform that allowed many people to pursue an education, investigate problems, and develop solutions on their own.
I bring this up because we seem confused about what role the government should play in stimulating innovation. I hear it in my IT/innovation classes all the time. Some students argue that government should get out of the way and let private industry solve every problem “efficiently”. Others argue that government should have a role but they have a difficult time describing it.
Ultimately, I think it’s fairly simple. The government should invest in platforms, not solutions. The land-grant system allowed millions of people — including me — to take something from America and then turn around and make something for America. (It’s not true that we’re either makers or takers. We’re usually both.)
In the recent past, the best examples of platforms that stimulate innovation are probably the Internet and the human genome project. The massive brain mapping project — the Human Connectome — that President Obama recently announced could become the next great platform. On the other hand, the government investment in the solar panel manufacturer, Solyndra, was solution picking rather than platform building. It didn’t work so well.
So, I’m all for government investment in platforms that can stimulate innovation. By the way, I don’t claim that this is an original idea of mine. Steven Johnson makes much the same point in his book, Where Good Ideas Come From. But I do think it’s an idea that needs to be popularized. That’s why I’m writing about it. I hope you will, too. In the meantime, I’ll give credit where it’s due by saying, “Thank you Mr. Lincoln for helping my family get an education.”