If you met somebody from your third grade class, would you recognize her? How about someone you last saw a decade ago at a company where you used to work? How about a person you saw in a mug shot at the Post Office?
If you answered “yes” to any of these, you may be a super-recognizer. Super-recognizers rarely, if ever, forget a face. They may also give us the next great leap forward in law enforcement.
We haven’t known about super-recognizers for very long. Over the past 20 years or so, researchers have learned a great deal about the opposite condition, known as prosopagnosia or face blindness. Some people – perhaps two percent of the population – just can’t remember faces. They’re “blind” to the faces around them. They can interact with you perfectly well while they’re with you, but they won’t recognize you the next time they see you.
Researchers initially thought that this was a binary condition – either you were normal or you were face blind. Then someone had the bright idea that the ability to recognize faces might be distributed along a normal curve. If face blind people are clustered under one tail of the curve then the other tail should include people who are exceptionally good at recognizing faces – the super-recognizers.
It turns out that they were right. In 2009, Richard Russell and his colleagues published the first academic paper on the subject: “Super-recognizers: People with extraordinary face recognition ability”.
It seems like a typical academic topic but the story took an unusual twist when the Metropolitan Police Service in London took up the idea. As detailed in a recent story in The New Yorker, the Met experimented with super-recognizers as detectives. London has more security cameras than any other city in the world but couldn’t turn the images into a crime-fighting advantage. The city had millions of low-resolution images of potential criminals and nobody to interpret them.
The Met tried to change that with an organized team of super-recognizers. The super-recognizers browse through mug shots and then review footage from security cameras that have recorded a crime. In a surprising number of cases, the super-recognizer has an “aha” moment and links a miscreant to a mug shot.
How good are they? The Met calls super-recognizers “the third revolution in forensics” after fingerprints and DNA evidence. The Met solves about 2,000 cases a year with fingerprints and another 2,000 with DNA. By comparison, the super-recognizers solve about 2,500 cases.
At this point, you may be wondering just how good you are at recognizing faces. Here’s how to find out – the Cambridge Face Memory Test. Click here and you can take the same test that the Met uses to screen applicants for the super-recognizer team. If you get a high score, you might just apply for a position with your local police force.
Four years ago, I wrote a somewhat pessimistic article about Jevons paradox. A 19th-century British economist, William Jevons, noted that as energy-efficient innovations are developed and deployed, energy consumption goes up rather than down. The reason: as energy grows cheaper, we use more of it. We find more and more places to apply energy-consuming devices.
Three years ago, I wrote a somewhat pessimistic article about the future of employment. I argued that smart machines would either: 1) augment knowledge workers, making them much more productive; or 2) replace knowledge workers altogether. Either way, we would need far fewer knowledge workers.
What if you combine these two rather pessimistic ideas? Oddly enough, the result is a rather optimistic idea.
Here’s an example drawn from a recent issue of The Economist. The process of discovery is often invoked in legal disputes between companies or between companies and government agencies. Each side has the right to inspect the other side’s documents, including e-mails, correspondence, web content, and so on. In complex cases, each side may need to inspect massive numbers of documents to decide which documents are germane and which are not. The actual inspecting and sorting has traditionally been done by highly trained paralegals – lots of them.
As you can imagine, the process is time-consuming and error-prone. It’s also fairly easy to automate through deep learning. Artificial neural networks (ANNs) can study examples of which documents are germane and which are not, and learn how to distinguish between the two. Just turn suitably trained ANNs loose on boxes and boxes of documents and you’ll have them sorted in no time, with fewer errors than humans would make.
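To make the idea concrete, here is a minimal sketch of that train-then-classify step. It uses a toy single-neuron model (logistic regression) rather than a deep network, and every document text and label below is invented for illustration:

```python
import math

# Toy training set: documents labeled germane (1) or not (0).
# All texts and labels are invented for illustration.
train = [
    ("contract breach damages settlement", 1),
    ("merger agreement liability clause", 1),
    ("office party pizza order friday", 0),
    ("fantasy football league standings", 0),
]

# Fixed vocabulary built from the training texts.
vocab = sorted({w for text, _ in train for w in text.split()})

def featurize(text):
    """Bag-of-words count vector over the fixed vocabulary."""
    words = text.split()
    return [words.count(w) for w in vocab]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train a single-neuron classifier with plain gradient descent.
weights = [0.0] * len(vocab)
bias = 0.0
lr = 0.5
for _ in range(200):
    for text, label in train:
        x = featurize(text)
        pred = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = pred - label
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

def is_germane(text):
    """Classify a new document against the learned weights."""
    x = featurize(text)
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias) > 0.5

print(is_germane("settlement clause in the agreement"))  # → True
print(is_germane("pizza order for the office party"))    # → False
```

A production discovery system would use far richer features and a genuinely deep network, but the shape of the work is the same: learn from labeled examples, then sort the unlabeled boxes automatically.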
In other words, artificial neural networks can do a better job than humans at lower cost and in less time. So this should be bad news for paralegal employment, right? The number of paralegals must be plummeting, correct? Actually no. The Economist tells us that paralegal employment has actually risen since ANNs were first deployed for discovery processes.
Why would that be? Jevons paradox. The use of ANNs has dramatically lowered the obstacles to using the discovery process. Hence, the discovery process is used in many more situations. Each discovery process uses fewer paralegals but there are many more discovery processes. The net effect is greater – not lesser – demand for paralegals.
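The arithmetic behind that net effect is simple. In this toy illustration (every number is invented), automation cuts the paralegal labor per case by four fifths, but cheaper discovery gets used seven times as often:

```python
# All figures invented to illustrate the Jevons-paradox arithmetic.
paralegals_per_case_before, cases_before = 10, 100
paralegals_per_case_after, cases_after = 2, 700  # less labor per case, many more cases

demand_before = paralegals_per_case_before * cases_before
demand_after = paralegals_per_case_after * cases_after

print(demand_before)                 # → 1000
print(demand_after)                  # → 1400
print(demand_after > demand_before)  # → True: total demand for paralegals rises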
I think of this as good news. As the cost of a useful process drops, the process itself – spam filtering, document editing, image identification, quality control, etc. – can be deployed to many more activities. That’s useful in and of itself. It also drives employment. As costs drop, demand rises. We deploy the process more widely. Each human is more productive but more humans are ultimately required because the process is far more widespread.
As a teacher, this concept makes me rather optimistic. Artificial intelligence can augment my skills, make me more productive, and help me reach more students. But that doesn’t mean that we’ll need fewer teachers. Rather, it means that we can educate many, many more students. That’s a good thing – for both students and teachers.
Men’s fashions change very slowly. By and large, the shirts I wore in high school are still in fashion. (Too bad they don’t fit.) In fact, I bet that many of the shirts my Dad wore in high school would still be in fashion. So, there’s not much room for innovation in men’s shirts, is there? If it ain’t broke, don’t fix it.
It’s a good thing that Elliot Gant didn’t get the memo. Elliot, who started Gant Shirtmakers with his brother Martin in 1949, died a few days ago at age 89. The Gant brothers innovated where most other managers never even thought about it. The Gants observed closely and made a series of innovative enhancements. Though each innovation was small, the cumulative effect was huge (as they say in New York).
Here’s how the New York Times described the way the Gant brothers changed and enhanced the traditional button-down shirt.
The Gant brothers perfected the collar’s shape, known as the perfect roll, formed by the front edges of the buttoned collar. They introduced the box pleat in the back to allow more freedom of movement, the extra button in the back of the collar to keep the tie in place, and the patented button tab that connects beneath the necktie to push the knot up and out. (The tab won an award from Esquire magazine.)
They also introduced the hanger loop on the back of the shirt so that it could be hung on a hook — in a locker, say — without wrinkling.
The Times writes that the locker loop became “…a collegiate and high school totem: A young man would remove it from his shirt to signal that he was going steady.” I remember it somewhat differently. If a girl liked you, she would walk up behind you in the high school hallway, pull the loop off the back of your shirt, and keep it as a memento. Sadly, most of my shirts still had their loops.
How did the Gant brothers create a teenage totem from something as unoriginal as a shirt? According to the Gant website, the Gant brothers knew their customer well and focused on a very specific segment: young, preppy men. They never let their gaze wander. They also dedicated themselves to quality as an important differentiator in a largely undifferentiated market.
They also innovated in advertising and marketing communications. They relocated their company from Brooklyn to New Haven, Connecticut, largely because New Haven had a large population of skilled tailors. It also, of course, had Yale University. As the website notes, the Gant brothers used Yale for design inspiration and also for marketing. They created the American East Coast University look, which was “distinctive and debonair”. It was what the cool kids wore.
Gant chose to market in nontraditional ways as well. They started with the Yale Co-op, a campus store “…where [students] went to buy clothing and as the Ivy League Look exploded the Yale Co-op was the nexus of the new style.”
Gant also advertised in nontraditional media – nontraditional for men’s clothing, at least. They started with small one-eighth-page ads in The New Yorker, a magazine for the smart set. They grew upward and outward from there.
When we think of innovation today, we often think of big, audacious game-changers – artificial intelligence, robots, automatons, and so on. But let’s remember that we can innovate on a smaller scale as well. Changing where and how you place a button on a shirt can create a valuable brand, important differentiation, and even a teenage totem. Innovation doesn’t require large-scale genius. It simply requires observation, imagination, and dedication. Thinking small is just as important as thinking big.
One of the most important obstacles to innovation is the cultural rift between technical and non-technical managers. The problem is not the technology per se, but the communication of the technology. Simply put, technologists often baffle non-technical executives and baffled executives won’t support change.
To promote innovation, we need to master the art of speaking between two different cultures: technical and non-technical. We need to find a common language and vocabulary. Most importantly, we need to speak to business needs and opportunities, not to the technology itself.
In my Managing Technology class, my students act as the CIO of a fictional company called Vair. The students study Vair’s operations (in a 12-page case study) and then recommend how technical innovations could improve business operations.
Among other things, they present a technical innovation to a non-technical audience. They always come up with interesting ideas and useful technologies. And they frequently err on the side of being too technical. Their presentations are technically sound but would be baffling to most non-technical executives.
Here are the tips I give to my students on giving a persuasive presentation to a non-technical audience. I thought you might find them useful as well.
Benefits and the “so what” question – we often state intermediate benefits that are meaningful to technologists but not to non-technical executives. Here’s an example: “By moving to the cloud, we can consolidate our applications.” Technologists know what that means and can intuit the benefits. Non-technical managers can’t. To get your message across, run a “so what” dialogue in your head:
Statement: “By moving to the cloud, we can consolidate our applications.”
Question: “So what?”
Statement: “That will allow us to achieve X.”
Question: “So what?”
Statement: “That means we can increase Y and reduce Z.”
Question: “So what?”
Statement: “Our stock price will increase by 12%.”
Asking “so what” three or four times is usually enough to reach a logical end point that both technical and non-technical managers can easily understand.
Give context and comparisons – sometimes we have an idea in mind and present only that idea, with no comparisons. We might, for instance, present J.D. Edwards as if it’s the only choice in ERP software. If you were buying a house, you would probably look at more than one option. You want to make comparisons and judge relative value. The same holds true in a technology presentation. Executives want to believe that they’re making a choice rather than simply rubber-stamping a recommendation. You can certainly guide them toward your preferred solution. By giving them a choice, however, the executives will feel more confident that they’ve chosen wisely and, therefore, will support the recommendation more strongly.
Show, don’t tell – chances are that technologists have coined new jargon and acronyms to describe the innovation. Chances are that non-technical people in the audience won’t understand the jargon – even if they’re nodding their heads. Solution: use stories, analogies, or examples.
Words, words, words – oftentimes we prepare a script for a presentation and then put most of it on our slides. The problem is that the audience will either listen to you or read your slides. They won’t do both. You want them to listen to you – you’re much more important than the slides. You’ll need to simplify your slides. The text on the slide should capture the headline. You should tell the rest of the story.
If you follow these tips, the executives in your audience are much more likely to comprehend the innovation’s benefits. If they comprehend the benefits, they’re much more likely to support the innovation.
(If you’d like a copy of the Vair case study, just send me an e-mail. I’m happy to share it.)
When I worked for business-to-business software vendors, I often met companies that were simply out of date. They hadn’t caught up with the latest trends and buzzwords. They used inefficient processes and outdated business practices.
Why were they so far behind? Because that’s the way their software worked. They had loaded an early version of a software system (perhaps from my company) and never upgraded it. The system became comfortable. It was the way they had always done it. If it ain’t broke, don’t fix it.
I’ve often wondered if we humans don’t do the same thing. Perhaps we load the software called Human 1.0 during childhood and then just forget about it. It works. It gets us through the day. It’s comfortable. Don’t mess with success.
Fixing the problem for companies was easy: just buy my new software. But how do we solve the problem (if it is a problem) for humans? How do we load Human 2.0? What patches do we need? What new processes do we need to learn? What new practices do we need to adopt?
As a teacher of critical thinking, I’d like to think that critical thinking is one element of such an upgrade. When we learn most skills – ice skating, piano playing, cooking, driving, etc. – we seek out a teacher to help us master the craft. We use a teacher – and perhaps a coach – to help us upgrade our skills to a new level.
But not so with thinking. We think we know how to think; we’ve been doing it all our lives. We don’t realize that thinking is a skill like any other. If we want to get better at basketball, we practice. If we want to get better at thinking, ….well, we don’t really want to get better at thinking, do we? We assume that we’re good enough. If the only thinking we know is the thinking that we do, then we don’t see the need to change our thinking.
So how do we help people realize that they can upgrade their thinking? Focusing on fallacies often works. I often start my classes by asking students to think through the way we make mistakes. For instance, we often use shortcuts – more formally known as heuristics – to reach decisions quickly. Most of the time they work – we make good decisions and save time in the process. But when they don’t work, we make very predictable errors. We invade the wrong country, marry the wrong person, or take the wrong job.
When we make big mistakes, we can draw one of two conclusions. On the one hand, we might conclude that we made a mistake and need to rethink our thinking. On the other hand, we might conclude that our thinking was just fine but that our political opponents undermined our noble efforts. If not for them, everything would be peachy. The second conclusion is lazy and popular. We’re not responsible for the mess – someone else is.
But let’s focus for a moment on the first conclusion – we realize that we need to upgrade our thinking. Then what? Well… I suppose that everyone could sign up for my critical thinking class. But what if that’s not enough? As people realize that there are better ways to think, they’ll ask for coaches, and teachers, and gurus.
If you’re an entrepreneur, there’s an opportunity here. I expect that many companies and non-profit organizations will emerge to promote the need and service the demand. The first one I’ve spotted is the Center for Applied Rationality (CFAR). Based in Berkeley (of course), CFAR’s motto is “Turning Cognitive Science Into Cognitive Practice”. I’ve browsed through their web site and read a very interesting article in the New York Times (click here). CFAR seems to touch on many of the same concepts that I use in my critical thinking class – but they do it on a much grander scale.
If I’m right, CFAR is at the leading edge of an interesting new wave. I expect to see many more organizations pop up to promote rationality, cognitive enhancements, behavioral economics, or … to us traditional practitioners, critical thinking. Get ready. Critical thinking is about to be industrialized. Time to put your critical thinking cap on.