Sunday 11 June 2017

Why do pollsters get it wrong?

Whilst the headline writers have spent the last few days poring over the entrails of the UK general election and what it means for the future direction of economic policy, I have been wondering how the pollsters could get it so wrong – again. After all, this is the third successive nationwide UK vote since 2015 in which the electoral pundits have failed to call the result – not to mention their failure to call the US presidential election. If the polls this week had been even slightly more accurate, the result would not have come as such a surprise and we would not have spent the weekend debating Theresa May’s future.

First, however, some sense of perspective is in order. The opinion polls did a pretty good job in getting the voting shares right. The analysts at Electoral Calculus, for example, predicted that the Conservative vote share would rise from 37.8% in 2015 to 43.5% this time around whilst the Labour share would increase from 31.2% to 40%. In the event, the final vote shares were 42.4% and 40% respectively, so there was a modest overshoot on the Conservative share but they got Labour spot on. But it is a lot harder to go from there to actually predicting the election result, because the regional vote distribution matters hugely. In the UK’s first-past-the-post system, parties only need to outperform their local rivals by the tiniest of margins to win a seat (indeed, one constituency was decided by a margin of just 2 votes – 0.00478% of the votes cast). Once we start digging below the national level, the issue becomes fraught with sample size problems and the margins of error become much wider.
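To see just how unforgiving first-past-the-post can be, the toy simulation below runs a uniform two-point swing across a set of hypothetical, marginal-heavy constituencies. The shares, the random seed and the size of the swing are all invented for illustration rather than drawn from any real polling data.

```python
# A toy illustration of first-past-the-post sensitivity. The constituency
# shares are randomly generated around 50%, i.e. a marginal-heavy landscape;
# none of these numbers come from the actual election results.
import random

random.seed(1)

# Party A's vote share (%) in 100 hypothetical two-party constituencies.
shares = [random.gauss(51, 3) for _ in range(100)]

def seats_won(shares, swing=0.0):
    """Count seats where party A still clears 50% after a uniform swing against it."""
    return sum(1 for s in shares if s - swing > 50)

print("No swing:      party A wins", seats_won(shares), "of 100 seats")
print("2-point swing: party A wins", seats_won(shares, swing=2.0), "of 100 seats")
```

When enough seats sit close to the 50% line, a swing worth only a few percent of the vote can flip a quarter of them, which is why modest polling errors translate into large seat errors.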

The gap in the opinion polls clearly narrowed over the course of the campaign. But even by the time the final polls were published on Wednesday night, the 15-poll average was still showing a Conservative lead of 6 points – down from 20 in the first half of May – with the Conservative share merely back where it was when the election was called (43%). The central case forecast was thus not suggestive of a hung parliament. But if we apply a 5% margin of error, shading the Conservative figure down and Labour’s polling share up by that proportion, the trend does not change but the extent of the lead does. Rather than a 6-point margin, the Conservatives went into the election with a 2-point lead on this basis (see chart).
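As a quick back-of-the-envelope check of that adjustment (assuming, as the 2-point result implies, that the 5% is applied proportionally to each party’s share rather than as five whole percentage points, and using the approximate final poll averages quoted above):

```python
# Rough check of the margin-of-error adjustment described above. The poll
# averages are the approximate final 15-poll figures (Con 43%, Lab 37%);
# the 5% is assumed to be a proportional adjustment to each share.
con, lab = 43.0, 37.0
adj_con = con * 0.95   # shade the Conservative share down by 5%
adj_lab = lab * 1.05   # shade the Labour share up by 5%

print(f"Headline lead: {con - lab:.1f} points")          # roughly 6 points
print(f"Adjusted lead: {adj_con - adj_lab:.1f} points")  # roughly 2 points
```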

Translating vote shares subject to this level of statistical inaccuracy into seat predictions is an order of magnitude more difficult. In addition to Electoral Calculus (EC), I have also been tracking the results derived by the Election Forecast group (EF). EC predicted that the Conservatives would win 358 seats whilst EF’s projection was 366 (though EF’s low case scenario did put the Conservatives on 318 seats – the right answer as it happens – whilst EC’s low estimate was 314). The central case predictions were thus off by more than 10%. They were even further off in their predictions for Labour, with EC forecasting 218 seats and EF projecting 207 (where the outturn was even higher than their upper limit). One group which did predict a hung parliament was YouGov, whose “big data” model proved to be right, but their final call based on conventional survey methods was for a wider Conservative majority.
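For what it is worth, the size of those central-case misses is easy to quantify. The forecasts are those quoted above; the base used here is my own choice of the actual outturn, taking the 318 Conservative seats mentioned above and the 262 seats Labour ended up winning.

```python
# Central-case seat forecast errors, expressed as a percentage of the
# actual outturn (318 Conservative and 262 Labour seats).
outturn = {"Con": 318, "Lab": 262}
forecasts = {
    "Electoral Calculus": {"Con": 358, "Lab": 218},
    "Election Forecast":  {"Con": 366, "Lab": 207},
}
for group, prediction in forecasts.items():
    for party, seats in prediction.items():
        error = 100 * (seats - outturn[party]) / outturn[party]
        print(f"{group:18s} {party}: {seats} vs {outturn[party]} ({error:+.1f}%)")
```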

One of the reasons for the apparent failure of conventional methods was that most polling organisations discounted the evidence suggesting that younger age groups would turn out for Labour, and assumed instead that many of them would stay at home, as happened in 2015. This is an object lesson in the perils of manual adjustment – something I do all the time when using structural macroeconometric models, and more often than not the adjustment turns out to be justified. It is always galling when the model beats your prior view, but ironically, the next time you let the model run without overriding the results you often find that the override would have been justified after all.

YouGov provided a non-technical summary of the Multilevel Regression and Post-stratification (MRP) model which seemed to work so well (here). It takes polling data from the preceding seven days to estimate a model relating interview date, constituency, voter demographics, past voting behaviour and other profile variables to respondents’ current voting intentions. This is then used to estimate the probability that a voter with a given set of characteristics will vote for a particular party. Obviously, it is not infallible: it is a snapshot of intentions at the time the survey is taken. In addition, the constituency-level estimates rest on very small sample sizes, so they suffer from the usual bias problems. Like all models, they are subject to significant margins of error and, as this blog post highlights, need to be treated cautiously. Indeed, I treat regression models used for prediction with a large degree of mistrust because, as noted in the post, MRP “is a useful tool, but potentially misleading if used carelessly or indiscriminately.”
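For the curious, the stylised sketch below shows the basic mechanics: a regression relates voter characteristics to stated intention, and the fitted probabilities are then weighted by how many voters of each type live in each constituency. It is only a caricature of the approach described in YouGov’s summary: the survey responses, demographic cells and constituencies are all invented, and an ordinary logistic regression from scikit-learn stands in for the multilevel model they actually use.

```python
# A minimal sketch of the regression + post-stratification idea, on made-up data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Step 1: "regression" -- relate voter characteristics to stated intention.
# Columns (hypothetical coding): age_group (0=18-34, 1=35-64, 2=65+), past_vote (0=Lab, 1=Con).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [2, 0], [2, 1],
              [0, 0], [1, 1], [2, 1], [0, 1]])
y = np.array([1, 0, 1, 0, 0, 0, 1, 0, 0, 1])   # 1 = says they will vote Labour

model = LogisticRegression().fit(X, y)

# Step 2: "post-stratification" -- weight the fitted probabilities by how many
# people of each type live in each (fictional) constituency.
cells = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [2, 0], [2, 1]])
p_labour = model.predict_proba(cells)[:, 1]

constituencies = {
    "Youngtown": np.array([30_000, 10_000, 15_000, 10_000,  5_000,  5_000]),
    "Greyshire": np.array([ 5_000,  5_000, 10_000, 15_000, 15_000, 25_000]),
}
for name, counts in constituencies.items():
    share = np.average(p_labour, weights=counts)
    print(f"{name}: estimated Labour share {share:.1%}")
```

The post-stratification step is what turns a national model into constituency-level estimates; the “multilevel” part, omitted here, pools information across constituencies so that thin local samples borrow strength from the national picture.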

Ultimately, I suspect that trying to predict the detailed results of elections in the multi-media age is going to become ever harder. As information is thrown at us ever more rapidly, we will have to learn to assimilate it more quickly, and our quantitative models will have to take on board information from sources such as Twitter (already possible in statistical packages such as R). I am sure that in the course of the next week, some bright spark will ask me why I failed to get the election result right. The simple answer is that it is hard to do, so I leave it to those with the expertise, time and resources to do it. And even they struggle, so what chance have I got?
