A postmortem on 2020 forecasting

I’m not really sure why it’s taken him so long, but it’s still worth having a look this week at Nate Silver’s review of how his forecasting models at FiveThirtyEight.com performed in last year’s United States elections.

Silver (who is, no question, a highly talented psephologist and statistician) has a bit of a tendency to self-satisfaction, so it’s perhaps no surprise that he finds his site’s performance to have been pretty good. Not only did the models pick most races correctly, but, as he says, they “were generally well-calibrated. That is to say, the share of races we called correctly was roughly in line with the probabilities listed by our models.”
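For anyone who hasn’t met the concept, calibration is straightforward to check given enough races: bin the forecasts by stated probability, then compare each bin’s average stated probability with the share of favorites that actually won. Here’s a minimal sketch in Python, using invented figures rather than FiveThirtyEight’s actual data:

```python
import numpy as np

# Invented example data (not FiveThirtyEight's actual forecasts):
# the probability given to each race's favorite, and whether the
# favorite won (1) or lost (0).
forecast_probs = np.array([0.98, 0.95, 0.90, 0.85, 0.72, 0.70, 0.65, 0.60, 0.55, 0.52])
outcomes       = np.array([1,    1,    1,    1,    1,    0,    1,    1,    0,    1])

# Within each probability band, the average stated probability should
# roughly match the realized win rate if the forecasts are calibrated.
for lo, hi in [(0.5, 0.7), (0.7, 0.9), (0.9, 1.0)]:
    mask = (forecast_probs >= lo) & (forecast_probs < hi)
    if mask.any():
        print(f"{lo:.0%}-{hi:.0%} band: stated {forecast_probs[mask].mean():.0%}, "
              f"actual {outcomes[mask].mean():.0%} ({mask.sum()} races)")
```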

Last year was seen as a pretty bad one for the opinion polls. It’s therefore interesting that Silver finds that in the House and Senate elections, where he provided three different models, it was the simplest (“Lite”) one – the one that relied mostly on the polls themselves, without additional data like fundraising and expert opinions – that performed best. But that didn’t surprise me; I always thought that Silver’s extra bells and whistles were more likely to amplify polling errors than to add anything useful.

In the presidential election (where there was just one model), FiveThirtyEight got just three calls wrong: Florida, North Carolina and Maine’s second congressional district (Maine awards an electoral vote to the winner of each of its districts). All three were races the model had described as “leaning” or “toss-ups”; it got six of the nine forecasts in those two categories correct, which is statistically in line with expectations. All the results in the “solid” and “likely” categories matched the model’s forecasts.
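That claim about expectations is easy to sanity-check. If the nine “leaning” and “toss-up” races carried favorite probabilities somewhere in the 55–75% range – assumed values, since the review doesn’t list them – then treating each race as an independent coin-flip gives an expected count of about six correct calls:

```python
import numpy as np

# Assumed favorite probabilities for the nine "leaning"/"toss-up" races;
# illustrative values only, as Silver's piece doesn't list the real ones.
p = np.array([0.75, 0.75, 0.70, 0.70, 0.65, 0.60, 0.60, 0.55, 0.55])

# Treating each race as an independent Bernoulli trial:
expected = p.sum()                 # expected number of correct calls
sd = np.sqrt((p * (1 - p)).sum())  # standard deviation of that count
print(f"expected correct calls: {expected:.1f} +/- {sd:.1f}")  # ~5.9 +/- 1.4
# Six correct out of nine sits comfortably within one standard deviation
# (independence isn't quite right, as the discussion below shows, but
# it's fine for a rough check).
```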

There’s a slight irony here, in that Silver shot to fame in 2012 when his model got every state correct in that year’s presidential election – a triumph in one sense, but, if you believe his model, also something of a fluke. A modest number of errors is exactly what the model itself says to expect.

The errors, of course, were not random. All three were Republican victories where the model had favored the Democrats. That one-sidedness carried through to the congressional elections: the simple model got only 17 of the 472 House and Senate contests wrong, but 15 of those 17 were Republican victories. So while the Democrats won majorities in both houses, those majorities were much tighter than the model expected. “[T]he large majority of the races that were expected to be close went the same way.”

Silver points out that, to some extent, that’s only logical. Different races in the same election are not independent events; swings are correlated, and so errors should be correlated as well. But he suggests that this tendency is on the increase:

[I]t may be that polling and related-forecast errors are becoming even more correlated, due to rising partisanship and a decline in split-ticket voting. … The more correlated that race outcomes are, the wider the confidence intervals in forecasting how many seats a party will win overall, since it’s less likely that a miss in one direction (…) will be offset by a miss in the opposite direction.
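That effect is easy to demonstrate with a toy Monte Carlo simulation – my own illustration, not Silver’s model. Give every one of 435 seats a true 50/50 race and a normally distributed polling error: drawing the error independently for each seat keeps the seat total within roughly ten seats of the mean, while sharing a single national error across all seats makes the spread many times wider.

```python
import numpy as np

rng = np.random.default_rng(0)
n_seats, n_sims, err_sd = 435, 10_000, 0.03

# Independent case: every seat draws its own polling error.
indep = rng.normal(0, err_sd, size=(n_sims, n_seats))

# Correlated case: one shared national error per simulated election,
# plus a smaller seat-level component.
shared = rng.normal(0, err_sd, size=(n_sims, 1))
local = rng.normal(0, err_sd / 2, size=(n_sims, n_seats))

for label, margins in [("independent errors", indep),
                       ("correlated errors ", shared + local)]:
    seats = (margins > 0).sum(axis=1)  # seats won by one party
    print(f"{label}: mean {seats.mean():.0f} seats, "
          f"sd of seat total {seats.std():.1f}")
```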

Finally, there’s that strange beast, the electoral college. As I pointed out at the time, although Joe Biden’s margin of victory looks reasonably clear both in the popular vote and in electoral college votes, the margin required to swing the electoral college is very small: just 0.3% in Wisconsin, the median state. Silver notes that this closeness might seem inconsistent with the fact that his model gave Biden an 89% chance of an electoral college majority.

Here’s his response:

I’m not super sympathetic to this critique, to be honest. For one thing, Biden’s 89 percent chance of victory did not mean an 89 percent chance of a blowout. On the contrary, precisely the reason that Biden’s chances were so high is that he could endure a fairly large polling error and still come out ahead, something which had not been true for Hillary Clinton four years earlier.

There’s no simple way of saying who’s right here; you can’t judge a probabilistic forecast on just one outcome. But it does seem to me that at least some of the model’s confidence in the electoral college result (which I fully shared at the time) turned out to be misplaced.
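Given enough forecasts, though, there are standard tools for making that judgment, such as the Brier score – a proper scoring rule that averages the squared gap between stated probabilities and actual outcomes. A quick sketch with invented numbers:

```python
import numpy as np

def brier(probs, outcomes):
    """Mean squared gap between forecast probabilities and 0/1 outcomes.
    Lower is better; always saying 50% scores exactly 0.25."""
    return float(np.mean((np.asarray(probs) - np.asarray(outcomes)) ** 2))

# Invented example: two forecasters scored over the same five races.
outcomes  = [1, 0, 1, 1, 0]
confident = [0.89, 0.20, 0.75, 0.95, 0.40]
hedged    = [0.60, 0.45, 0.55, 0.65, 0.50]

print(f"confident forecaster: {brier(confident, outcomes):.3f}")  # 0.055
print(f"hedged forecaster:    {brier(hedged, outcomes):.3f}")     # 0.188
```

Over a season’s worth of races, a well-calibrated, confident forecaster beats a perpetual hedger on that measure – which is the sense in which forecasts can be judged in aggregate, even if a single race can’t settle the question.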

Do read the whole thing – there’s a lot of thoughtful stuff about how polling and forecasting work. And as the Republican Party seemingly accelerates its flight from democracy, getting elections right is not going to become any less important.
