How much trust should we put in the Opinion Polls?

As we eagerly await the publication on Saturday of the first Lucid Talk poll since May’s Council elections….

There can be few places where such an opening would not be greeted with derision: but I’m hoping that Slugger is not one of them.

But best be on the safe side…..

As most of us eagerly await the publication on Saturday of the first Lucid Talk poll since last May’s Council elections it’s a good time to look back at how the polls performed in the recent election, and how reliable they are normally.

Mark Pack, who has written extensively on opinion polling, compares the job of the pollster to that of a tightrope walker – always at risk that the care, skill (and reputation) developed over years of training and practice will be unbalanced by an unexpected gust of wind.

That certainly holds true for polling in Northern Ireland.

In the USA, GB, RoI and elsewhere, there are many polling companies reporting, and with much greater frequency. The most recent polls can be added together to produce an average figure which is likely to be more accurate, and receive more attention, than any individual poll. Here LucidTalk is much more in the spotlight. It is the only company which has polled regularly over several elections and, unlike the recent entrant from Liverpool University, it publishes its tables.

There are two ways of judging a poll.

Did it get the winner(s) right? Did it say Trump would win, or Clinton? Or did it say that the result was too close to call (what the pollsters call within the Margin of Error)?

How close did it get to the actual percentage party votes? So long as it said “Trump to win”, few care if it got the actual Trump vote within 5% or 15%.
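For readers curious where a margin of error comes from, the textbook formula for a simple random sample at 95% confidence can be sketched in a few lines of Python. This is a rough illustration of the standard calculation, not LucidTalk's published methodology, and the sample size used is invented for the example:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p measured in a sample of n.

    Uses the standard formula z * sqrt(p * (1 - p) / n), which assumes a
    simple random sample -- real polls apply weighting on top of this.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A party polling at 30% in a (hypothetical) sample of 1,800 respondents:
moe = margin_of_error(0.30, 1800)
print(f"+/- {moe:.1%}")  # roughly +/- 2.1 points
```

Note that the margin narrows for smaller parties: the same sample gives a tighter band around a 5% share than around a 30% share, which is one reason pollsters quote a single headline figure as a simplification.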

I looked at the last poll published before each of the last seven elections from each company and considered three factors:

  1. The accuracy of the poll’s measures for the top parties (six in the case of the European election and Westminster 2017)
  2. The accuracy of measures of the three designations
  3. The largest party and designation deviations from the actual result

From these I have made a judgement as to how well the poll anticipated the actual result, the “Did it get the winner(s) right?” question. If, after reading the poll, the result comes as a shock then the poll has failed.

[Table: poll deviations from the actual result, by election]

The top parties are DUP, UUP, SF, SDLP and Alliance. For the Westminster 2017 and Euro 2019 I have also included the TUV. The designation totals include the smaller parties and independents. The Liverpool figures for designation deviations in 2023 are estimates.

Assembly 2016

The poll result was very much in line with the actual outcome – paradoxically so. While the poll did overestimate the UUP vote, transfers and the geographical distribution of the party’s first-preference votes combined to award the UUP two more seats than its share would have suggested.

Assembly 2017

Again, the poll was highly reflective of the actual outcome. The underestimate of the Sinn Féin vote would have suggested that the party was likely to win one seat (or just possibly two) fewer than it actually did. One seat out of 90 is an insignificant deviation from the actual result.

Westminster 2017

The UUP vote was overestimated by 5.6%, with the DUP underestimated by 7.1%. Had the UUP achieved that level it is possible that they would have won Fermanagh South Tyrone, and very likely that they would have retained South Antrim. In fact, they lost both. However, it is fair to say that at the time LucidTalk estimated that the poll gave the UUP only a 45% chance of retaining FST, but a 55% chance of keeping South Antrim. So, the poll was potentially misleading for one seat out of 18.

Council 2019

No poll was conducted for the Council Elections. The preceding LucidTalk poll was three months earlier.

European 2019

LucidTalk conducted two polls in the month before the election; both failed to spot the ‘Alliance surge’. Alliance polled 6.9% ahead of the poll figure, the biggest deviation in any of the elections examined here.

Westminster 2019

The poll result was accurate – the individual party votes were all within the margin of error except for Sinn Féin, which was just outside. LT overestimated their vote by 3.2%, but even if SF had achieved the higher figure it would not have changed any of the seat results.

Assembly 2022

A reasonable result for LucidTalk. In terms of the most important outcome of the election, the emergence of Sinn Féin as the largest party and therefore entitled to the First Minister position, the poll got the winner right. So, it passed the key test. It did underestimate the SF vote by 2.6%, only just outside the margin of error. More notably it underestimated the total nationalist vote by 4.4%.

Liverpool also got the main point correct, SF as the largest party. But its prediction that Alliance was running neck-and-neck with the DUP for second place, which would have been as ground-breaking as the SF win, proved to be very wide of the mark.

Council 2023

Strictly speaking LucidTalk did not produce a Council poll. The question it asked was how voters intended to vote in an Assembly election. This means that it could not capture the support for Independents, which tends to be higher in Council elections (and thus to suppress party figures).

Also, not every party ran in every seat. This probably had the biggest impact on the TUV share. The party failed to stand candidates in many seats where they had received a reasonable vote only 12 months before in the Assembly elections. This meant that any poll measuring shares across all Council areas would be bound to overestimate the TUV share, with knock-on effects on other parties.

I have been unable to find published tables for the Liverpool poll beyond the headline figures in the Irish News, so I do not know the precise question they asked. For the same reason I do not have all of their results.

[Chart: LucidTalk deviations from the Council 2023 result, by party and designation]

Figures to the left are poll underestimates – which means the actual result was higher than the poll predicted. Those to the right are overestimates – the actual result was lower than the poll predicted.

Even though all but one of the individual party estimates correctly fell within the poll’s 2.3% margin of error, for the second consecutive election LucidTalk did not pick up the full scale of the nationalist vote, which it underestimated by 5.1%. Correspondingly, it overestimated the unionist vote by 4.7%. It still got the major themes largely right – big increases for Sinn Féin, bad losses for the SDLP and UUP, and further gains for Alliance.

[Chart: Liverpool University deviations from the Council 2023 result, by party and designation]

Closer to the mark on the individual party estimates was Liverpool University. Liverpool did measure PBPA and Aontú support, but other than a reference in the Irish News to their figures showing a “marginal drop”, these do not appear to have been published. It seems almost certain that Liverpool also significantly underestimated total nationalists, as well as once again overestimating others.

Sinn Féin emerged from the elections with 20 more councillors than the DUP. Liverpool’s figures would have suggested that a much smaller Sinn Féin lead was the most likely outcome, while Lucid Talk’s results pointed to an even closer outcome with the DUP slightly more likely to finish a nose ahead.

Conclusion

Of the seven elections examined here Lucid Talk proved either a highly or reasonably reliable guide to six of them, while Liverpool University proved reliable for one of the two it polled.

All this discussion does raise the question, “What is the best way to read an opinion poll?” In my view a poll should never be used to answer the question, “What will happen?” The correct question is “What does this poll say could happen?”

To that end, before the election I used the LucidTalk poll as the base from which to analyse which individual council seats parties or independents risked losing, and who might have the chance of benefiting. So, having examined the work of LucidTalk and Liverpool, it seems only fair that I come clean about my own successes and failures.

Of the 462 seats contested 79 changed hands between the parties/independents. Amongst the seats I judged to be at risk I spotted 65 of those that went on to be lost. Amongst those I marked as potential gainers I spotted 62 that were successful. This means that I missed 31 gains or losses out of a total of 158. Of those, 13 involved Independent gains or losses, which no poll is ever going to be able to accurately quantify. The details are as follows:

One question remains. Among these seven elections there is only one example of two consecutive elections showing the same pattern of divergence between poll and outcome – those of 2022 and 2023.

There are several possible explanations:

  • Simple coincidence? This is less likely than it seems. After each election a polling company will reexamine all its processes, in particular the weighting it applies to different categories of respondents, to keep up with any changes in behaviour by any part of the electorate.
  • Some voters changed their minds in the interval between the poll and the election? That gap can be between one and two weeks. At most this could only be a minor part of the explanation.
  • Don’t Knows disproportionately decided to vote Sinn Fein? Again, this could only be a minor contributor.
  • “Shy” Sinn Féin voters? People who have started to vote Sinn Féin in the last two elections, temporarily or permanently, who prefer not to say so. It is hard to see any recent changes in SF’s circumstances which would give rise to this.
  • Differential turnout? There was certainly evidence for this in a number of constituencies (although not all) at the last election. This could have taken two forms. The political environment might have motivated a disproportionately high turnout by nationalist voters and/or a disproportionately lower turnout by unionists. Unfortunately, differential turnout is notoriously difficult for polling companies to deal with. Companies have tried asking people how likely they were to vote, but most in the UK have now concluded that this does not improve the accuracy of their results. Another approach is to examine the socio-economic profiles of those who historically consistently vote and give their answers a higher weighting, but this can backfire if some issue brings more of the low-weighted groups to the polls.
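The weighting approach described in that last bullet can be illustrated with a deliberately simplified sketch. The groups, numbers and party labels below are invented purely for illustration and bear no relation to any company's actual scheme: each respondent's answer is scaled so that the sample's demographic mix matches the population's, and party shares become weighted averages.

```python
# Toy illustration of demographic weighting in a poll (invented data).
# The sample over-represents under-45s, so their answers are scaled down.
respondents = [
    {"group": "under_45", "vote": "A"},
    {"group": "under_45", "vote": "B"},
    {"group": "under_45", "vote": "A"},
    {"group": "over_45",  "vote": "B"},
]

# Assumed population shares for each demographic group.
population_share = {"under_45": 0.5, "over_45": 0.5}

# Count how each group is represented in the sample.
sample_counts = {}
for r in respondents:
    sample_counts[r["group"]] = sample_counts.get(r["group"], 0) + 1

n = len(respondents)
# Weight = population share / sample share for the respondent's group.
weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}

# Sum the weights by vote intention, then normalise to get party shares.
totals = {}
for r in respondents:
    totals[r["vote"]] = totals.get(r["vote"], 0.0) + weights[r["group"]]

shares = {party: t / sum(totals.values()) for party, t in totals.items()}
# Unweighted, party A sits on 50% (2 of 4); weighted, the over-sampled
# under-45s count for less each, so A's share falls to a third.
```

The backfire the bullet mentions is visible here: the weights are only as good as the assumed population shares and turnout profile, so if an unexpected issue mobilises a down-weighted group, the correction itself becomes the source of error.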

All things considered we are fortunate that LucidTalk provides us with regular polls, and that it has a good record. We must hope that Liverpool can achieve the same standard and begin regularly publishing its tables.

