When Theresa May made a sudden U-turn and called a snap election back in April, the collective reaction on Twitter was mainly:


“Why are you backtracking like this? Don’t you have a majority already? Why do you need a bigger one? HAVE WE LEARNED NOTHING FROM BREXIT AND TRUMP?”




Well, girlfriend was proud. And girlfriend just had herself a fall.

But it does give me the opportunity to talk about making bad calls!

The Science of Bad Calls

There is a great Tim Harford lecture at the LSE that is worth a listen if you’re into podcasts. Here’s a link: How To See Into The Future – By Sound. There’s also an (excellent) article in the FT if you’re a fan of the written medium: How To See Into The Future – By Sight.

The main points:

  • Philip Tetlock is the author of “Expert Political Judgement”.
  • That book was the result of an 18-year-long experiment, in which he collected almost 28,000 predictions, made by 284 commentators across multiple fields of expertise, and then checked how accurate those predictions turned out to be.
  • Mostly, he was motivated by his irritation with political commentators making predictions about the rise of Gorbachev and the future of the Soviet Union in the 1980s (the right-wingers thought that what Reagan was doing was deeply dangerous and idiotic; the left-wing pundits were convinced that the Soviet Union was here to stay).
  • In particular, he was annoyed by commentators saying this sort of thing after being proved wrong:
    • “It almost went my way”
    • “It was the right mistake to make under the circumstances”
    • “I’ll be proved right – mark my words”
    • “The main thing to focus on is the moral evil of what is going on here.”
  • His main findings:
    • There is almost no difference between chimpanzees flipping a coin and human experts making a prediction.
    • And crude extrapolation models were significantly better than human experts. The quote: “whereas the best human forecasters were hard-pressed to predict more than 20 percent of the total variability in outcomes…, the generalized autoregressive distributed lag models explained on average 47 percent of the variance.”
    • He then tested whether some human forecasters are better than others:
      • Your background and accomplishments are irrelevant. Unless you’re a bit famous – in which case, you’re likely to make more wrong predictions (awkward).
      • Belief systems made little difference – libertarians were wrong as often as Marxists, capitalists as often as communists, etc.
    • But your reasoning style did make a difference. And Philip Tetlock used Isaiah Berlin’s “The Hedgehog and the Fox” essay to explain it.
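For the curious: the “crude extrapolation models” mentioned above are easy to sketch. The snippet below is not Tetlock’s actual specification – just a toy autoregressive baseline on made-up numbers, fitted by ordinary least squares and extrapolated one step ahead, to show what “crude” means here:

```python
import numpy as np

def fit_ar1(series):
    """Fit y[t] = a*y[t-1] + b by ordinary least squares."""
    y_prev = np.asarray(series[:-1], dtype=float)
    y_next = np.asarray(series[1:], dtype=float)
    X = np.column_stack([y_prev, np.ones_like(y_prev)])
    (a, b), *_ = np.linalg.lstsq(X, y_next, rcond=None)
    return a, b

def forecast_next(series):
    """Extrapolate one step ahead from the fitted relationship."""
    a, b = fit_ar1(series)
    return a * series[-1] + b

# A made-up indicator that trends upward with some noise
history = [100, 102, 101, 104, 106, 107, 109]
print(forecast_next(history))  # extrapolates the recent trend
```

That is the whole trick: no world view, no narrative, just “assume tomorrow looks like a smoothed version of yesterday” – and in Tetlock’s data, that beat the pundits.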
Spoiler Alert: Theresa May is a Hedgehog

The world has two types of expert:

  1. Foxes – pluralists, who cherry-pick their ideas from multiple places.
  2. Hedgehogs – purists, committed to a particular world view: the die-hard Keynesians, the die-hard Libertarians, the die-hard Socialists, and so on.

A summary of their reasoning styles:

[Table: a side-by-side comparison of Fox and Hedgehog reasoning styles]

Here’s the version according to Nate Silver (of FiveThirtyEight fame):

[Table: Nate Silver’s Fox vs Hedgehog comparison]

And the basic summary is this:

  • Foxes are right more often than Hedgehogs, especially in short-term forecasts.
  • But Hedgehogs get more of the far-out predictions right (albeit at the expense of getting so much else wrong in the interim).

So in general, foxes make better calls.

But if you want to make the best calls, be a crude computer model. Because regardless of whether you’re a Hedgehog or a Fox, the crude computer models are better forecasters.


Oddly enough, one of the people least convinced by Philip Tetlock’s findings turned out to be Philip Tetlock himself, who maintained that there really are good forecasters out there.

So alongside a long list of academics, he started the Good Judgment Project (here’s the website) to try and establish whether they could find any; and if so, what made them different from all the other ‘forecasters’ on the market.

The good preliminary news is: good forecasters exist. He calls them “the superforecasters” – and they’re just better than everyone else at calling things right.

Here is four minutes of Tim Harford explaining what makes for a superforecaster.

If you’re too busy to watch the clip, here are:

The three summarised secrets of superforecasting

1. Feedback

Also known as “learning from your mistakes” instead of “explaining why you were right to be wrong”. Superforecasters will go back and learn from their bad predictions – because this tells them what they should, and what they should not, be making predictions about.

The basic idea: some things can be forecast with reasonable accuracy; other things, less so. So don’t make predictions about the second kind – that only drags down your overall accuracy, when you could instead be a genuinely good forecaster within a limited sphere.
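“Learning from your mistakes” only works if you actually score yourself. The article doesn’t spell out a method, but the standard tool in Tetlock’s forecasting tournaments is the Brier score: the average squared gap between the probability you stated and what actually happened (0 is perfect; lower is better). A minimal sketch, with invented track records:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes.

    forecasts: list of (probability_assigned, outcome) pairs,
    where outcome is 1 if the event happened and 0 if it didn't.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Invented track records: a cautious forecaster vs an overconfident pundit
cautious = [(0.7, 1), (0.6, 1), (0.4, 0), (0.3, 0)]
confident = [(0.95, 1), (0.95, 0), (0.05, 0), (0.95, 0)]

print(brier_score(cautious))   # modest claims, mostly right: low score
print(brier_score(confident))  # certainty that's often wrong: high score
```

Note what the score punishes: it isn’t being wrong per se, it’s being wrong *confidently* – which is exactly the hedgehog failure mode.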

2. Work in Teams

Other people make you justify your predictions. They point out gaps in your logic, forcing you to rethink your reasoning and spot the flaws in it.

There’s also less room for bias when people are busy accusing you of it*.
*Of course, you could refuse to see that you’re biased. But then you’d be a bad forecaster. Because you’d be unable to work in teams. And see point 3 below.

3. Open-mindedness

Being convinced that you’re right, and refusing to consider alternatives…well that’s probably unhelpful when there is literally an entire infinite Universe of things that you don’t know.

So really, superforecasters are:

  • Humble foxes
  • Who work in teams
  • And who have learned to tell the difference between the predictable and the unpredictable

Theresa May proudly went it alone, ignoring how bad the calls were around Brexit and Trump.

It’s also the problem with snap elections, you see.

Sometimes, they snap back.


Rolling Alpha posts opinions on finance, economics, and sometimes things that are only loosely related. Follow me on Twitter @RollingAlpha, and on Facebook at www.facebook.com/rollingalpha.