
Disrupting the Lawyers

Filling out unemployment forms.

Last week, I wrote about the process of disintermediation and how it will disrupt banks and bankers. By encrypting transactions and distributing them across a peer-to-peer network, we will no longer need banks to serve as trusted intermediaries in financial transactions. We can eliminate the middleman.

Can we eliminate lawyers as well? You betcha.

We have lawyers for the same reasons that we have bankers: we don’t trust each other. I don’t trust that you’ll pay me; I want your bank to guarantee it. Similarly, I don’t trust that you’ll honor our contract; I want a lawyer to enforce it.

But what if we could create a contract that didn’t need a lawyer to interpret and execute it? We could eliminate the lawyer as an intermediary. That’s exactly the idea behind smart contracts (also known as self-enforcing or self-executing contracts).

First proposed by Nick Szabo back in 1993, smart contracts use software to ensure that agreements are properly executed. Not surprisingly, smart contracts use blockchain technologies spread across peer-to-peer networks. If you think that sounds like Bitcoin, you’re right. Indeed, many people think that Szabo created Bitcoin using the pseudonym Satoshi Nakamoto.

So how do smart contracts work? Here’s how Josh Blatchford explains it:

“… imagine a red-widget factory receives an order from a new customer to produce 100 of a new type of blue widget. This requires the factory to invest in a new machine and they will only recoup this investment if the customer follows through on their order.

Instead of trusting the customer or hiring an expensive lawyer, the company could create a smart property with a self-executing contract. Such a contract might look like this: For every blue widget delivered, transfer price per item from the customer’s bank account to the factory’s bank account. Not only does this eliminate the need for a deposit or escrow — which places trust in a third party — the customer is protected from the factory under-delivering.”

Smart contracts, in other words, precisely define the conditions of an agreement — not unlike dumb contracts. They also execute the terms of the contract by automatically (and irrevocably) transferring assets as the contract is fulfilled.
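To make that concrete, here’s a toy sketch of the widget agreement’s logic in ordinary Python. The names and numbers are invented for illustration; a real smart contract would be deployed as code on a blockchain platform, not run as a local script.

# Toy sketch of the blue-widget rule above. All names are hypothetical;
# a real smart contract would run on a blockchain, not as a local script.

class Account:
    def __init__(self, balance):
        self.balance = balance

def widget_contract(customer, factory, price_per_widget, widgets_delivered):
    # Self-executing rule: for every blue widget delivered, transfer
    # the per-item price from the customer to the factory.
    payment = price_per_widget * widgets_delivered
    customer.balance -= payment
    factory.balance += payment

customer = Account(balance=5000)
factory = Account(balance=0)
widget_contract(customer, factory, price_per_widget=10, widgets_delivered=40)
print(factory.balance)  # 400 -- payment follows delivery automatically

The point is that the payment rule is code: once both parties adopt it, nobody has to interpret it and nobody can refuse to honor it.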

Blatchford wrote his description in VentureBeat – an online magazine that helps venture capitalists identify and invest in leading-edge technologies. This suggests that the money to fund smart contract platforms is already flowing.

Indeed, the first smart contract platform – Ethereum – launched in July 2015. Ethereum’s website describes the endeavor as “… a decentralized platform that runs smart contracts: applications that run exactly as programmed without any possibility of downtime, censorship, fraud or third party interference.”

Ethereum seems to be essentially a developer’s platform today. Developers can use the platform to develop applications that eliminate the need for trusted (human) intermediaries. Should lawyers be worried? Not yet. But soon.

Innovation Assimilation

Next?

In 1983, when I was a product manager at NBI, we were second only to Wang in the word processing and office automation market. Then along came the personal computer and disrupted Wang, NBI, CPT and every other vendor of dedicated word processing equipment.

I’ve written about this previously as an example of disruptive innovation. But I could also describe it as assimilative innovation. NBI’s products did one thing – word processing – and did it very well. The PC, on the other hand, was multifunctional. It could do many things, including word processing (although not as well as NBI). The multifunction device assimilated and displaced the single function device.

We’ve seen many examples of assimilative innovation. When I bicycled across America, I bought a near-top-of-the-line digital camera to record my adventures. It took great pictures. It still takes great pictures. But I hardly ever use it. My smartphone does a lot of things, including taking great pictures. Though my smartphone’s pictures are not as good as my camera’s, they’re good enough. Additionally, the smartphone is a lot more convenient.

Years ago, the automotive industry produced an odd example of innovation assimilation. American cars had a big hole in the dashboard (fascia) where you could slot in a radio and cassette player. You were supposed to buy the audio equipment from the car manufacturer but consumers quickly figured out that they could get it cheaper from after-market vendors.

So, did the auto manufacturers lower their prices? No way. Instead, they redesigned their dashboards so that the audio equipment came in several pieces that an after-market vendor couldn’t easily mimic. In other words, the manufacturers tried to assimilate the competition.

In this case, it didn’t work. The after-market vendors sued, claiming illegal restraint of trade. The courts agreed and ordered the manufacturers to go back to the big hole in the dashboard. I suspect this case set a precedent when Nestlé sued to stop third-party vendors from selling coffee capsules for the popular Nespresso coffee maker. Nestlé lost. The courts ruled that Nestlé had created a platform that allowed for permissionless innovation.

What will be assimilated next? I suspect it’s going to be fitness bands. I wear the Jawbone band on my wrist to keep track of my activity and calories. It’s pretty good and seems to compete well with three or four other fitness bands on the market. The new Apple Watch, however, appears to have similar (or even better) functionality built into it. The Apple Watch is, of course, multi-functional. If history is any guide, the multi-functional and convenient device will displace the single purpose device, even if it doesn’t offer better functionality.

What’s the moral? When you buy a single function device, be aware that it’s likely to be assimilated into a multi-function device in the future. That’s not a bad thing as long as you’re aware of the risk.

Jill Disrupts Clayton (Sort Of)

I’ve been disrupted!

My career has been a steady diet of disruption.

Three times, disruptive innovations rocked the companies I worked for. First, the PC destroyed the word processing industry (which had destroyed the typewriter industry). Second, client/server applications disrupted host-centric applications. Third, cloud-based applications disrupted client/server applications.

Twice, my companies disrupted other companies. First, RISC processors disrupted CISC processors. Second, voice applications disrupted traditional call centers.

In 1997, a Harvard professor named Clayton Christensen took examples like these and fashioned a theory of disruptive innovation. In The Innovator’s Dilemma, he explained how it works: Your company is doing just fine and understands exactly what customers need. You focus on offering customers more of what they want. Then an alternative comes along that offers less of what customers want but is easier to use, more convenient, or less costly. You dismiss it as a toy. It eats your lunch.

The disruptive innovation typically offers less functionality than the product it disrupts. Early mobile phones offered worse voice quality than landlines. Early digital cameras produced worse pictures than film. But they all offered something else that appealed to consumers: mobility, simplicity, immediate gratification, multi-functionality, lower cost, and so on. They were good enough on the traditional metrics and offered something new and appealing that tipped the balance.

My early experiences with disruption – before Christensen wrote his book – were especially painful. We didn’t understand what was happening to us. Why would customers choose an inferior product? We read books like Extraordinary Popular Delusions and the Madness of Crowds to try to understand. Was modern technology really nothing more than an updated version of tulip mania?

After Christensen’s book came out, we wised up a bit and learned how to defend against disruptions. It’s not easy but, at the very least, we have a theory. Still, disruptions show no sign of abating. Lyft and Uber are disrupting traditional taxi services. Airbnb is disrupting hotels. And MOOCs may be disrupting higher education (or maybe not).

Such disruption happens often enough that it seems almost intuitive to me. So, I was surprised when another Harvard professor, Jill Lepore, published a “take-down” article on disruptive innovation in a recent edition of The New Yorker.

Lepore’s article, “The Disruption Machine: What The Gospel of Innovation Gets Wrong”, appears to pick apart the foundation of Christensen’s work. Some of the examples from 1997 seem less prescient now. Some companies that were disrupted in the 90s have recovered since. (Perhaps we did get smarter.) Disruptive companies, on the other hand, have not necessarily thrived. (Perhaps they, too, were disrupted.)

Lepore points out that Christensen started a stock fund based on his theories in March 2000. Less than a year later, it was “quietly liquidated.” Unfortunately, she doesn’t mention that March 2000 was the very moment that the Internet bubble burst. Christensen may have had a good theory but he had terrible timing.

But what really irks Lepore is given away in her subtitle. It’s the idea that Christensen’s work has become “gospel”. People accept it on faith and try to explain everything with it. Consultants have taken Christensen’s ideas to the far corners of the world. (Full disclosure: I do a bit of this myself.) In all the fuss, Lepore worries that disruptive innovation has not been properly criticized. It hasn’t been picked at in the same way as, say, Darwinism.

Lepore may be right but it doesn’t mean that Christensen is wrong. In the business world, we sometimes take ideas too literally and extend them too far. As I began my career, Peters and Waterman’s In Search of Excellence was almost gospel. We readers probably fell in love a little too fast. Yet Peters and Waterman had – and still have – some real wisdom to offer. (See The Hype Cycle for how this works.)

I’ve read all but one of Christensen’s books and I don’t see any evidence that he promotes his work as a be-all, end-all grand Theory of Everything. He’s made careful observations and identified patterns that occur regularly. Is it a religion? No. But Christensen offers a good explanation of how an important part of the world works. I know. I’ve been there.

Gray Hair and Innovation

I’m just peaking.

How old are people when they’re at their innovative peak? I worked in the computing industry and we generally agreed that the most innovative contributors were under 30. Indeed, sometimes, they were quite a bit under 30.

Some of this is simply not knowing what can’t be done. I’ve seen this with Elliot. He doesn’t know how a computer is “supposed” to work. So he just tries things … and very often they work. On the other hand, I do know how a computer is supposed to work and I sometimes don’t try things because I “know” they won’t work. Elliot just doesn’t have the same limits on his thinking. That can be a great advantage in a new field.

While youth may be an advantage in software, the same isn’t true in many other fields. In pharmaceuticals, for instance, the most innovative people are in their 50s or even 60s. It takes that long to master the knowledge of biology, chemistry, and statistics needed to make original contributions. Comparatively speaking, it’s easy to master software.

Indeed, as knowledge gets more complicated, it takes longer to master. According to Benjamin F. Jones of the Kellogg School of Management, “The mean age at great achievement for both Nobel Prize winners and great technological inventors rose by about 6 years over the course of the 20th Century.” The average Nobel Prize winner now conducts his or her breakthrough research around the age of 38 – though the prize is typically awarded many years later.

Aside from domain knowledge, why might you want a little gray hair to fuel innovation in your company? According to a recent article by Tom Agan in the New York Times, one reason is the time necessary to commercialize an innovation. As a general rule, the more fundamental an innovation, the longer it takes to commercialize. Ideas need to percolate. People need to be educated. Back-of-the-envelope sketches need to be prototyped. Lab results need to be scaled up. It takes time – perhaps as much as 20 to 30 years.

Who’s best at converting the idea to reality? Typically, it’s the person or persons who created the innovation in the first place. So, let’s say someone makes a breakthrough at the Nobel-average age of 38. You may need to keep them around until age 58 to proselytize, educate, socialize, realize, and monetize the idea. In the meantime, it’s likely that they will also enhance the idea and, just possibly, kick off a new round of innovation.

So, what to do? Once again, diversity pays. Mixing employees of multiple age groups can help stimulate new ways of thinking and better ways of communicating. Ultimately, I like Meredith Fineman’s advice: “Working hard, disruption, and the entrepreneurial spirit knows no age. To judge based upon it would be juvenile.”

Pour Me a House. Print Me a Cookie.

Print this!

When Elliot was in architecture school, he designed a chair in 3D software. Then he printed it. Then he sat in it. It held up pretty well.

As a designer, Elliot was an early adopter of 3D printing, also known as additive manufacturing. Elliot designs an object in 3D virtual space within a computer. (He’s an expert at this). The object exists as a set of mathematics, describing lines, arcs, curves, shapes, and so on.

Elliot then exports the mathematical description of the object to a 3D printer. The printer converts the math into hundreds of very thin layers – essentially 2D slices. The printer head zips back and forth, laying down a slice with each pass to build the product physically. It’s called “additive manufacturing” because the printer adds a new layer with each pass.
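Here’s a minimal sketch of that slicing step in Python, assuming an object simple enough to describe with a single formula (a sphere). Real slicers work on arbitrary 3D meshes, but the principle is the same.

# Slice a sphere of radius R into thin horizontal layers, one layer
# per pass of the print head. The sphere stands in for any 3D model.
import math

R = 30.0             # sphere radius in mm
layer_height = 0.2   # thickness of each printed slice in mm

layers = []
z = -R
while z <= R:
    # At height z, the sphere's cross-section is a circle of radius
    # sqrt(R^2 - z^2), from x^2 + y^2 + z^2 = R^2.
    layers.append((z, math.sqrt(max(R * R - z * z, 0.0))))
    z += layer_height

print(f"{len(layers)} slices to print, bottom to top")

Stack roughly three hundred 0.2 mm slices and you have a 60 mm sphere.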

The earliest such printers might have been called “subtractive manufacturing.” You started with a big block of wax in the “printer”. You then loaded the mathematical description and the printer carved away the unnecessary wax using very precise cutting blades. The result was the object modeled in wax. You used the model to build a mold for manufacturing.

Elliot used a printer equipped with a laser and some very special powder. Based on the mathematical description of the slices, the laser moved back and forth, firing at appropriate points to build each layer. On each pass, the laser converted the powder into a very strong, very hard resin that adhered to the previous layer. At the end, Elliot had the finished product, not just a mold.

Elliot’s chair looked and felt like it was made of plastic. Several companies are now experimenting with metal oxides and similar processes to print metal objects. A British company, Metalysis, is working with a titanium oxide that should allow you to print titanium objects. One benefit: it should dramatically reduce the cost of titanium parts and products.

Newer 3D printers can use a nozzle to extrude material onto each slice. What can you extrude? Well, cement, for instance. Construction companies in Europe are already using robotic arms and cement extruders to build complex walls and structures. It won’t be long before Elliot can design an entire house in virtual space and then have it poured on site. Elliot will be able to create much more imaginative designs and print them at a lower cost than traditional building techniques. What a great time to be an architect!

Not interested in cement? How about extruding some cookie dough instead? In fact, let’s imagine that you have some special dietary needs and restrictions. You submit your dietary data to the printer, which selects a mix of ingredients that meets your needs, and prints you a cookie. You can select ingredients based on your tastes as well as your dietary needs. What a great time to be a chef!
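Here’s a hypothetical sketch of that ingredient-selection step in Python. The pantry, allergens, and calorie counts are all invented; the idea is simply to filter ingredients against the eater’s restrictions and then fill the recipe up to a calorie budget.

# Hypothetical cookie planner. Ingredients, allergens, and calorie
# counts are invented for illustration.

PANTRY = {
    "oat flour":   {"calories": 100, "allergens": set()},
    "wheat flour": {"calories": 110, "allergens": {"gluten"}},
    "peanut bits": {"calories": 160, "allergens": {"peanuts"}},
    "dark chips":  {"calories": 140, "allergens": set()},
}

def plan_cookie(restrictions, calorie_budget):
    # Keep only the ingredients that are safe for this eater.
    safe = {name: info for name, info in PANTRY.items()
            if not (info["allergens"] & restrictions)}
    # Greedily add the cheapest (by calories) safe ingredients
    # until the budget is spent.
    recipe, total = [], 0
    for name, info in sorted(safe.items(), key=lambda kv: kv[1]["calories"]):
        if total + info["calories"] <= calorie_budget:
            recipe.append(name)
            total += info["calories"]
    return recipe, total

print(plan_cookie({"gluten", "peanuts"}, calorie_budget=250))
# (['oat flour', 'dark chips'], 240)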

What’s next? GE recently announced that it would use additive manufacturing to create jet engine parts. Before long, we may be able to print new body parts. (I’m waiting for a new brain.) And 3D printing is coming to your home, where you’ll be able to make almost anything. What a great time to be a nerd!
