EU Referendum Polls: Modelling Voting Intention

Of the two latest opinion polls on the EU referendum, both released by ICM last week, one put remain ahead by nine percentage points, while the other gave leave a four-point lead. How can two polls released on the same day by the same polling company produce such radically different results?

The answer is that one poll (the one that favoured remain) was conducted by telephone, whereas the other was conducted online. This is an example of a theme that has emerged throughout the EU referendum polls: there is a consistent difference between the results of polls conducted by different methods.

The broader point is that we should never place much trust in any single poll. Opinion polls are usually based on small samples (often fewer than 2,000 people), and the practicalities of obtaining a sample inevitably introduce systematic biases, because the sample ends up unrepresentative of the voting population. As a result, the day-to-day movement in the polls is generally a poor indication of shifts in true voting intentions. One way to get around this problem is to look at a "poll of polls": an aggregate of several polls that aims to improve accuracy by offsetting individual polls' biases against one another.
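To make the averaging idea concrete, here is a minimal sketch in Python with made-up poll numbers; it illustrates the general mechanism rather than the exact calculation used by any particular poll of polls.

```python
# A minimal "poll of polls" sketch: a moving average over the last k polls.
# The poll numbers are invented, purely to show the smoothing effect.

def poll_of_polls(results, k=6):
    """Moving average of the most recent k poll results."""
    averages = []
    for i in range(len(results)):
        window = results[max(0, i - k + 1): i + 1]
        averages.append(sum(window) / len(window))
    return averages

# Hypothetical remain shares (%) from a sequence of polls, oldest first.
polls = [54, 49, 52, 47, 55, 50, 51, 48, 53, 52]
print(poll_of_polls(polls))  # much smoother than the raw poll-to-poll swings
```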

We should never place much trust in any single poll, which is likely to be unrepresentative of the voting population.

A simple poll aggregation method is to average the results of the last several polls, updating as new polls are released to produce a moving average. This is the method used by What UK Thinks: EU, for example. Select Statistics has taken this several steps further by using a statistical model to estimate voting intentions. Our model is similar to those used to estimate support for Scottish independence and voting intentions in the forthcoming Australian general election, and is based on the research paper by Jackman (2005). An important feature of this model is that it estimates voting intention whilst simultaneously accounting for several sources of potential bias. The model weights the polls according to the number of respondents (since bigger polls tend to be more precise), and it includes the effects of polling method (online versus telephone) and systematic differences between polling companies. It also accounts for the potential difference between polls where "undecided" is offered as an option and those where it is not (more about this below).
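For readers who want to see the mechanics, the sketch below runs a Kalman filter in the spirit of the Jackman (2005) approach: a random-walk latent voting intention, polls down-weighted by their sampling variance, and assumed (invented) house and mode offsets subtracted before each poll enters the filter. It is an illustration of the idea, not our fitted model, which estimates these offsets jointly with the trend.

```python
# Simplified Jackman-style aggregation: latent voting intention follows a
# random walk, and each poll is a noisy observation of it. Observation noise
# shrinks with sample size, so bigger polls carry more weight, and each poll
# is adjusted for a pollster ("house") effect and a mode effect before
# entering the filter. All numbers below are invented.

polls = [  # (day, remain %, sample size, pollster, mode) -- hypothetical
    (0, 54.0, 1000, "A", "phone"),
    (3, 49.5, 2000, "B", "online"),
    (7, 53.0,  800, "A", "phone"),
    (9, 50.0, 1500, "B", "online"),
]
house = {"A": +1.0, "B": -0.5}          # assumed pollster offsets (pp)
mode = {"phone": +2.0, "online": -2.0}  # assumed mode offsets (pp)
tau2 = 0.25                             # assumed random-walk variance per day (pp^2)

m, v = 50.0, 100.0  # vague initial mean and variance for the latent share
prev_day = 0
for day, y, n, pollster, mo in polls:
    v += tau2 * (day - prev_day)            # predict: intention drifts over time
    prev_day = day
    y_adj = y - house[pollster] - mode[mo]  # remove the assumed biases
    p = y_adj / 100.0
    obs_var = 100.0**2 * p * (1.0 - p) / n  # binomial sampling variance (pp^2)
    gain = v / (v + obs_var)                # Kalman gain: precise polls weigh more
    m, v = m + gain * (y_adj - m), (1.0 - gain) * v
    print(f"day {day}: estimated remain share {m:.1f}% (sd {v ** 0.5:.1f} pp)")
```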

Our model’s estimate of voting intention since September 2015 is shown in the chart below. Over the last 10 days the estimate has moved in favour of remain, and our best guess at the result if a snap referendum were held today is 53% voting to remain versus 47% voting to leave.

[Figure: alpha_final]

Estimated percentage intending to vote to remain since September 2015, along with the results of individual polls.

As well as better accounting for biases in the polls, an advantage of using a statistical model is that it quantifies our uncertainty in the result. The grey shading in the chart above indicates a 95% "credible interval": an interval within which we are 95% certain the true voting intention lies. Although the latest estimate of the remain vote share is above 50%, a sizeable portion of the credible interval lies below the 50% line, so we still believe that a leave result is possible. In fact, we calculate that if there were a snap referendum today there would be an 86% chance of a remain result versus a 14% chance of a leave result.

[Figure: result_probs_todayfinal]

It is important to realise that these very favourable odds for a remain outcome reflect only what would happen if a hypothetical referendum were held today. There are just under five weeks to go until the real referendum on June 23rd, and our uncertainty grows as we project into the future. Combining the snap-referendum prediction with the tendency for opinion to change over time as each side produces new arguments, our current best estimate of the probability that the result is to remain in the EU is 68%. (See our referendum update page for the predictions based on the latest polls.)

[Figure: result_probs_23Junefinal]
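To show how such probabilities fall out of an estimate with uncertainty, here is a rough sketch using an assumed normal posterior for today's remain share; the mean, standard deviation and drift variance are chosen purely so the numbers land near those quoted above, and are not our fitted values.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Suppose (purely for illustration) the posterior for today's remain share
# is roughly Normal(53, 2.8^2), in percentage points.
mean_today, sd_today = 53.0, 2.8
p_today = 1.0 - normal_cdf((50.0 - mean_today) / sd_today)
print(f"P(remain | snap referendum today) = {p_today:.0%}")   # about 86%

# Under a random walk, uncertainty grows as we project forward. With an
# assumed drift variance of tau2 pp^2 per day over the 35 days to June 23rd:
tau2, days = 0.95, 35
sd_future = sqrt(sd_today**2 + tau2 * days)
p_june = 1.0 - normal_cdf((50.0 - mean_today) / sd_future)
print(f"P(remain | June 23rd) = {p_june:.0%}")                # about 68%
```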

As well as predicting the result, we can also use our model to examine the differences between polling companies and polling methods. In the plot below, which shows the estimated pollster effects, we can see a seven percentage point spread between the company whose polls most favour remain (Ipsos MORI) and the company whose polls most favour leave (Panelbase). Note that these effects are estimated whilst simultaneously accounting for differences between the polling methods, so they can be interpreted as if all the companies had used the same method.

[Figure: house_effect_final]

The estimated pollster effects, expressed as the percentage points in favour of remain compared to the overall average.

As we noted above, there is a large difference between the results of telephone and online polls. This phenomenon was explored in a recent study by the pollster Populus, whose experiments point to two causes of the discrepancy. First, online polls tend to offer an explicit "undecided" option, whereas telephone polls often do not, recording only unprompted "undecided" responses. We account for this difference in our model by including the percentage of "undecided" responses as a covariate; this percentage tends to be lower in telephone polls. We find that polls with a large percentage of undecideds favour the remain camp: for example, polls with more than 16% undecideds are estimated to increase the remain lead by two percentage points. This suggests that when undecided voters are pressed to make a decision, they are more likely to opt for leave than a randomly selected person.
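As a sketch of how such a covariate adjustment works, here is an ordinary least-squares version on invented poll-level data (in the real model the covariate is estimated jointly with the trend and house effects).

```python
import numpy as np

# Regress each poll's remain lead on its mode and its percentage of
# undecided respondents. All data here are made up, purely to show the
# mechanics of the adjustment.

# Design matrix columns: intercept, online mode (1 = online), undecided %.
X = np.array([
    [1, 0,  4.0],
    [1, 0,  6.0],
    [1, 1, 15.0],
    [1, 1, 18.0],
    [1, 1, 20.0],
    [1, 0,  5.0],
], dtype=float)
lead = np.array([9.0, 7.5, 1.0, 3.0, 4.5, 8.0])  # remain minus leave (pp)

beta, *_ = np.linalg.lstsq(X, lead, rcond=None)
print(f"online effect: {beta[1]:+.2f} pp, undecided effect: {beta[2]:+.2f} pp per 1%")
```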

Different polling methods tend to draw samples of people that differ according to their social attitudes.

Even after we account for this undecided effect, our model shows a substantial difference between the polling methods, with telephone polls giving the remain vote share an additional four percentage points compared with online polls. The Populus study suggests that this residual difference arises because the two modes of polling tend to draw samples of people that differ in their social attitudes. Pollsters usually apply weights to correct for imbalances in demographics (e.g. age and education), but this does not account for broader social attitudes to issues such as gender, racial equality and national identity. Because of differences in how the two methods gather responses, telephone polls tend to make people look too socially liberal and online polls tend to make people look too socially conservative. These social attitudes are correlated with feelings towards the EU, which explains the discrepancy.
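To illustrate what weighting on an attitude variable involves, here is a toy post-stratification example; every number in it is invented.

```python
# Cell weighting (post-stratification) on a single social attitude variable.
# The groups, population shares and responses are all invented; real
# pollsters weight on several variables simultaneously.

sample = [  # (attitude group, intention: 1 = remain, 0 = leave)
    ("liberal", 1), ("liberal", 1), ("liberal", 0),
    ("conservative", 0), ("conservative", 1),
]
population_share = {"liberal": 0.4, "conservative": 0.6}  # assumed

# Weight each respondent by population share / sample share of their group.
counts = {}
for group, _ in sample:
    counts[group] = counts.get(group, 0) + 1
weights = {g: population_share[g] / (counts[g] / len(sample)) for g in counts}

weighted = sum(weights[g] * vote for g, vote in sample) / sum(weights[g] for g, _ in sample)
raw = sum(vote for _, vote in sample) / len(sample)
print(f"raw remain share: {raw:.0%}; weighted remain share: {weighted:.0%}")
```

In this toy sample, socially liberal respondents are over-represented, so weighting pulls the remain share down, mirroring the kind of correction the prose describes.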

To sum up, our model estimates voting intentions by aggregating polls while correcting for various sources of bias. However, we should point out that we have made a number of assumptions that affect the results. We have assumed that the biases act independently; for example, that the difference between telephone and online polls is the same for all polling companies. Moreover, we have assumed that the polls are unbiased on average across all companies, and similarly that they are unbiased on average across both methods (telephone and online). This does not allow for a systematic bias shared by all pollsters or polling methods.

We ignore this possibility at our peril, as we saw when the opinion polls for the 2015 general election systematically underrepresented key groups of voters. Had we applied our model to those data, we would have made the same mistake as everybody else in underestimating the Conservatives' lead over Labour, since we have no mechanism (or, indeed, data) for correcting an overall bias. We can hope that, in the wake of the British Polling Council's inquiry into the causes of the polls' failure in 2015, the polling companies have improved their methods and reduced the bias. However, when relying on small polls that lack true randomisation, we will always have to treat the answer with considerable caution, even when we use sophisticated statistical methods to aggregate the results.
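As a rough sensitivity check on this final caveat, we can ask how the snap-referendum probability would shift if every poll shared a common bias toward remain, reusing the illustrative posterior from the earlier sketch.

```python
from math import erf, sqrt

# How fragile is the snap-referendum probability to a shared bias across all
# polls? Using the same illustrative Normal(53, 2.8^2) posterior as above,
# shift the mean by a hypothetical common bias toward remain and recompute.

def p_remain(mean, sd):
    return 1.0 - 0.5 * (1.0 + erf((50.0 - mean) / (sd * sqrt(2.0))))

for bias in [0.0, 1.0, 2.0, 3.0]:
    print(f"common bias of {bias:.0f} pp toward remain -> P(remain) = {p_remain(53.0 - bias, 2.8):.0%}")
```

Even a modest common bias moves the probability substantially, which is exactly why this caveat matters.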