
Hedgehogs, Foxes, and The Future

Not Louise.


My friend, Louise, is a world-class forecaster. I’m trying to figure out how she does it.

Louise and I both volunteered for the Good Judgment Project, a crowd-sourced forecasting tournament for world events. Here’s a sample of the questions we’re forecasting:

When will China next conduct naval exercises in the Pacific Ocean beyond the first island chain?

When will SWIFT next restrict any Russian banks from accessing its services?

When will Ethiopia experience an episode of sustained domestic armed conflict?

For each question, we get 100 points and a calendar. We distribute the 100 points based on when we expect an event to happen. If we don’t expect the event to happen, we simply place all 100 points beyond the calendar.
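To make that scoring scheme concrete, here's a rough sketch in Python. The calendar bins and point values are invented for illustration, not taken from any real tournament question; the idea is simply that each of the 100 points works out to one percentage point of probability.

```python
# A hypothetical forecast for "When will X happen?" expressed the way the
# tournament asks for it: 100 points spread across calendar bins, with any
# points placed "beyond the calendar" meaning "I don't expect it to happen
# in this window." Bin labels and point values are illustrative only.
forecast_points = {
    "Jan-Mar": 10,
    "Apr-Jun": 25,
    "Jul-Sep": 15,
    "Oct-Dec": 10,
    "beyond the calendar": 40,  # the event doesn't happen within the window
}

assert sum(forecast_points.values()) == 100  # you get exactly 100 points

# Each point is effectively one percentage point of probability.
probabilities = {bin_: pts / 100 for bin_, pts in forecast_points.items()}
print(probabilities)
# {'Jan-Mar': 0.1, 'Apr-Jun': 0.25, 'Jul-Sep': 0.15, 'Oct-Dec': 0.1,
#  'beyond the calendar': 0.4}
```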

Philip Tetlock, who wrote the landmark study, Expert Political Judgment: How Good Is It? How Can We Know?, started the Good Judgment Project to improve forecasting of political events worldwide. Why would we need to improve our forecasting? Because experts are really lousy at it.

In his book, Tetlock studied 28,000 projections made by 284 experts. The results were little better than chance. Computers could do better. Tetlock surmised that crowds might do even better and started the Good Judgment tournament.

The tournament starts anew every year with several hundred volunteers. Louise and I participate in a tournament that started last December. We’ve been forecasting for close to three months. I looked up the standings last week. Louise was number one worldwide. I was number 48.

How does Louise do it? It could be that she’s a fox as opposed to a hedgehog. According to the Greek poet Archilochus, “The fox knows many things but the hedgehog knows one big thing.” Isaiah Berlin popularized the idea in the 1950s with his study of Tolstoy, titled The Hedgehog and The Fox.

So, which is better: a hedgehog or a fox? As with so many things, it depends on what you’re aiming to do. In Good To Great, Jim Collins argues that you need a hedgehog mentality to build a great company. You need to know one big thing and stick to it.

But what if you’re trying to forecast the future? Tetlock argues persuasively that foxes are better than hedgehogs. Why? Here’s how Stewart Brand explains it: “…hedgehogs have one grand theory (Marxist, Libertarian, whatever) which they are happy to extend into many domains, relishing its parsimony, and expressing their views with great confidence. Foxes, on the other hand are skeptical about grand theories, diffident in their forecasts, and ready to adjust their ideas based on actual events.”

It’s probably fair to guess that Louise is a fox. She doesn’t have one grand theory that explains everything from SWIFT financial transactions to Chinese naval maneuvers. She’s also flexible in her thinking. She’s willing to pick ideas from different sources and change her position if the evidence warrants. She gathers feedback and uses it to adapt and adjust.

In Tetlock’s studies, foxes outperform hedgehogs by a wide margin – in forecasting, though perhaps not in building great companies. Hedgehogs, on the other hand, “…even fare worse than normal attention-paying dilettantes.”

Louise and I – and other volunteers in the Good Judgment Project – are probably normal attention-paying dilettantes. We follow current events but we don’t have grand theories that explain everything. In some cases, I suspect that Louise doesn’t know what to think. But she does know how to think. Like a fox.

My Good Judgment


Who’s the better forecaster?

We’re all familiar with the idea of placing a bet on a football match. You can bet on many different outcomes: which team will win; by how much; how many total goals will be scored; and so on. With a large number of bettors, the aggregate prediction is often remarkably accurate. It’s what James Surowiecki calls The Wisdom of Crowds.
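The crowd effect is easy to see in a toy simulation. Here's a small Python sketch with entirely made-up numbers (no real betting data): many noisy individual guesses, averaged, usually land much closer to the truth than the typical individual does.

```python
import random

random.seed(42)

true_value = 70      # the quantity being guessed (goals, jelly beans, whatever)
num_bettors = 1000

# Each bettor's guess is the true value plus individual noise.
guesses = [true_value + random.gauss(0, 15) for _ in range(num_bettors)]

crowd_estimate = sum(guesses) / len(guesses)
avg_individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)

print(f"Crowd estimate: {crowd_estimate:.1f} "
      f"(error {abs(crowd_estimate - true_value):.1f})")
print(f"Average individual error: {avg_individual_error:.1f}")
# The crowd's error is typically a fraction of the average individual's.
```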

Prediction markets aim to do the same thing but broaden the scope. Instead of betting on sports, they bet on political, economic, or natural events. For instance: What’s the probability that Greece will exit the Euro in 2015? That nuclear weapons will be used in the India/Pakistan conflict before 2018? That Miami will have more than 100 flood days by 2020?

The forecasting questions are quite precise and always bounded by a time limit. There should be no question whether the event happens or not. In other words, we can actually judge how accurate the forecasts are.
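Once a question resolves yes or no, a probabilistic forecast can actually be scored. A standard yardstick in this literature is the Brier score – the mean squared difference between the forecast probability and the outcome – where lower is better and forecasting 0.5 on everything scores 0.25. Here's a small sketch with invented forecasts and outcomes, just to show the arithmetic:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities (0-1) and outcomes (0 or 1).
    0.0 is perfect; always guessing 0.5 scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster who takes positions and is mostly right.
fox = [0.9, 0.2, 0.8, 0.1, 0.7]
# Hypothetical forecaster who hedges everything at 50/50.
coin_flipper = [0.5, 0.5, 0.5, 0.5, 0.5]
# How the five questions actually resolved (1 = happened, 0 = didn't).
outcomes = [1, 0, 1, 0, 1]

print(brier_score(fox, outcomes))          # 0.038 -- much better than chance
print(brier_score(coin_flipper, outcomes)) # 0.25  -- no better than chance
```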

Why is judging so important? As Philip Tetlock pointed out in Expert Political Judgment: How Good Is It? How Can We Know?, we traditionally don’t measure the accuracy of expert political predictions. Pundits make predictions and nobody checks them. Indeed, Tetlock argues that most pundits make predictions as a way to advertise their consulting businesses. The bolder the prediction, the more powerful the ad.

When Tetlock actually measured the accuracy of expert political predictions, he discovered they were essentially useless. Tetlock writes, “The results were startling. The average expert did only slightly better than random guessing.” Remember that the next time you read an expert prediction.

You may remember that I wrote about a prediction market – InTrade – during the 2012 elections in the United States. Based in Ireland, InTrade allowed people all over the world to place bets on who would win the presidential election, as well as various Senate, gubernatorial, and congressional elections. InTrade’s electoral predictions were remarkably accurate. (It did less well in predicting Supreme Court decisions).

Unfortunately, the U.S. government saw InTrade as a form of online gambling. As such, it needed to be tightly regulated or perhaps even suppressed. It’s a complicated story — and may have involved “financial irregularities” on InTrade’s part — but, in 2013, InTrade decided to close its doors.

So, how can we use prediction markets in the United States? In its wisdom, another agency of the federal government, the Intelligence Advanced Research Projects Activity (IARPA), started a prediction tournament called Aggregative Contingent Estimation (ACE). IARPA/ACE has run a prediction tournament for the past three years. Various teams – mainly from academic institutions – participate for the honor of being named the most accurate forecaster.

And who wins these tournaments? A team called The Good Judgment Project (GJP) put together by none other than Philip Tetlock. GJP selects several thousand volunteers, gives them some training on how to make forecasts, and asks them to forecast the several hundred questions included in the IARPA/ACE tournament.

The Good Judgment Project wins the tournament consistently. They must be doing something right. And who is the newest forecaster on the Good Judgment team? Well… with all due modesty, it’s me.

To say the least, I’m excited to participate – and I expect to write about my experiences over the coming months. I can predict with 70% confidence that I won’t be a world-class forecaster in the first go-round. But I may just learn a thing or two and improve my accuracy over time. Wish me luck.
