How did the pollsters get the main prediction of the UK General Election wrong? They got a number of things exactly right – the collapse of the Lib Dems, UKIP's 12% vote share and SNP dominance in Scotland. Here Peter Kellner, President of YouGov, offers his views.
We got it wrong. Why?
The signs were there. Indeed, they were widely reported: how David Cameron trounced Ed Miliband as the public’s preferred Prime Minister; how economic optimism was returning, and reinforcing the Conservatives’ reputation. One YouGov poll five weeks ago generated the Sunday Times front page headline: “Tories best for workers, say voters”.
And, to a degree, I pointed out these signs, predicting week after week that the Tories would gain votes as election day approached. On the final Sunday of the campaign, I forecast that the Conservatives would beat Labour by 35-32% – not far from the 37-31% result.
Indeed, if I wanted, I could construct a defensive barricade against criticisms of YouGov and other pollsters. I could point out that our final poll, showing 34-34%, was only three points out per party, and so within the bounds of normal sampling error, and that we correctly foretold the SNP tsunami in Scotland and Labour’s big gains in London.
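To put a rough number on "normal sampling error": for a simple random sample – a strong simplifying assumption, since real polls are quota-sampled and weighted – the 95% margin of error on a single party's share can be sketched as follows. The figures (a poll of 2,000 with a party on 34%) are illustrative, not YouGov's actual design.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll of ~2,000 respondents with a party on 34% (0.34):
print(round(margin_of_error(0.34, 2000) * 100, 1))  # about 2.1 points either way
```

On this simplified model a three-point miss per party sits at the edge of, but within, the plausible range – which is why "within sampling error" is a technically true but unsatisfying defence.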
That, though, would be to evade the truth and insult the readers of this blog. We got the election wrong. So did the ten other polling companies that produced eve-of-election voting intentions: we all said the race was too close to call. Only by admitting that we are all at fault can we start the journey to finding out why.
That journey’s first stop is 1992. The final polls predicted a Labour victory. Instead, John Major’s Conservatives won more votes than any party in British history – a record that still stands. At the time, I was a journalist, not a pollster. I merely reported their findings. Even so, I felt I was implicated in their error. I came to the conclusion that this error had three components: a very late swing to the Conservatives; “shy Tories” who did not want to admit how they would vote; and, biggest of all, systematic sampling errors.
The last of these matters because, although things have changed, one of the questions after last week is plainly whether the polls have, once again, screwed up their sampling methods.
In retrospect, the 1992 election came just a few weeks too soon for the pollsters. Shortly after Major's victory, the key findings of the 1991 census were published. These showed that Britain had changed since the previous census, in 1981, far more than anybody – well, any social scientist – had realised.
In that Thatcher decade, there had been a sharp reduction in the Labour-inclined working classes and faster growth in the mainly-Tory middle classes. Pollsters all used much the same sampling design, and failed to take adequate account of all this. Had they known what was in the 1991 census, their election figures would not have led them, or us, so far astray. (In retrospect, the 1987 election should have flashed warning signals. The Eighties economic and social revolution was under way, and most polls understated Tory support, probably because the pollsters’ social mix had started to diverge from reality.)
Since then, all pollsters have spruced up the way they match their samples to the general population. Our methods slightly vary – one of the problems in 1992 was they were virtually identical, and so identically wrong. Obviously, the independent inquiry that the British Polling Council has established into last week’s errors must revisit this matter. Maybe our various sample designs have somehow led us to make the same mistakes. But I would be surprised if that turns out to be a major factor.
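The 1992 problem – a sample whose social mix had drifted from the real population – can be illustrated with a toy post-stratification example. All the numbers here are hypothetical, chosen only to show the mechanism, not taken from any actual poll or census.

```python
# Hypothetical sample whose class mix has drifted from the census.
sample = {"working_class": 900, "middle_class": 1100}   # respondents per group
census = {"working_class": 0.38, "middle_class": 0.62}  # assumed population shares

n = sum(sample.values())

# Suppose a party polls 60% among working-class respondents and 30% among
# middle-class ones. The raw sample then overstates that party relative to
# the census-weighted estimate:
support = {"working_class": 0.60, "middle_class": 0.30}
raw = sum(sample[g] * support[g] for g in sample) / n
weighted = sum(census[g] * support[g] for g in census)
print(f"raw: {raw:.1%}, census-weighted: {weighted:.1%}")  # 43.5% vs 41.4%
```

If the census shares themselves are out of date – as in 1992 – even the "weighted" column is wrong, and every pollster using the same targets is wrong in the same direction.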
The second possibility is that there was, indeed, a last-minute shift to the Tories, just as I had predicted in these pages. I wish I could say this was so and that I was right all along, sort of. But I can’t. On Thursday, YouGov repeated an exercise that had enabled us to predict accurately the final outcome of the Scottish referendum last September. Then we found there was, indeed, an on-the-day shift that added two per cent to the “no” vote. Our final prediction, on the night of the referendum, was that “no” would win by 54-46% – just one point adrift of the 55-45% result.
Last Thursday we reinterviewed 6,000 people whom we had polled online earlier in the week. We found that 5% changed their vote. However, each party gained and lost support in equal measure: the overall Conservative-Labour division remained 34-34%. So late swing does not seem to explain what happened.
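The distinction here is between gross churn and net swing: plenty of individuals can switch while the totals stand still, provided the flows cancel. A minimal sketch, using invented flows that merely mimic the shape of the reinterview finding (5% switching, zero net movement):

```python
from collections import Counter

# Hypothetical reinterview panel: (earlier-in-week vote, election-day vote).
# 5% of respondents switch, but the flows cancel, so net totals do not move.
panel = ([("Con", "Con")] * 1890 + [("Lab", "Lab")] * 1890 +
         [("Con", "Lab")] * 150 + [("Lab", "Con")] * 150 +
         [("Other", "Other")] * 1920)

churn = sum(1 for before, after in panel if before != after) / len(panel)
before = Counter(b for b, _ in panel)
after = Counter(a for _, a in panel)
net = {p: after[p] - before[p] for p in ("Con", "Lab", "Other")}
print(f"gross churn: {churn:.0%}, net change: {net}")
```

A headline poll only sees the net figures, which is why a panel that reinterviews the same individuals is needed to detect – or, as here, rule out – late swing.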
Some people have suggested that the problem lies with online research. Naturally I have a vested interest in challenging this view. I make just two points in response. The first is that YouGov's online methods have outperformed telephone polls in the past. One example was last May's European Parliament elections; another was the London mayoral election in 2008. All the telephone polls showed Ken Livingstone ahead; YouGov's final poll said Boris Johnson would win by 53-47%. Boris won, by precisely the margin we predicted.
In any event, whatever caused last week’s errors was no respecter of polling methods. Online and telephone polls were equally wide of the mark.
Which brings us back to “shy Tories”. Were there people who decided to vote Conservative, while telling pollsters they would vote for a change of government? This seems to be the likeliest explanation, but it does create difficulties for online companies. This is because the conventional theory suggests that the problem lies with interviewer-administered surveys. A small minority of voters, so the argument runs, don’t want to admit to a stranger that they hold a view that they fear might cause offence.
In America this is known as “social satisficing”. There is a mountain of evidence that some questions are best asked wholly anonymously, without a human interviewer. Examples include sex, drugs and greed. One of the virtues of online research is, or should be, that it allows respondents to submit their views with complete anonymity, as there is no stranger watching over them or listening to their answers.
So, if we were all – online and telephone pollsters alike – tripped up by “shy Tories”, how come? And how was it that the exit poll conducted for the BBC, ITV and Sky News came so much closer to the result?
This is likely to be one line of investigation for the BPC inquiry. Obviously I can’t be certain what it will find: if I were, I hope I would have acted to avoid the error in the first place. Here, though, is my initial, tentative suggestion.
It comes down to human psychology. Voting is a different exercise from answering a poll. It is a choice with consequences, not just an expression of a view. This year, as in 1992, the Tories have a weak image. They are widely thought to be out of touch and for the rich. But, at the margin, there may be some people who have a poor view of the party but nevertheless think it will run the economy better than Labour. They are “shy Tories” not because they are unwilling to admit their choice of party to a stranger, but because they really would like to support someone else and, faced with a ballot paper in the privacy of the polling booth, simply can’t.
Last Thursday 22,000 voters responded to the broadcasters’ exit poll. They were given a replica ballot paper and asked to complete it, fold the paper and drop it in a replica ballot box. Most “shy Tories” were unwilling to “lie” at that moment by putting their cross against a different candidate; so the exit poll came closer to the result.
However, I suspect that even here, a few people did change their “vote”. After all, although the exit poll did well, it still understated the Conservative performance – just as the 1992 exit poll had done.
Either way, when voters got back home, they were back in “polling” rather than “voting” mode, and 3% of them reverted to their wish to express their dislike of the Tories. It’s a small number, relating to a specific event that occurs only sporadically. But in our voting system, such small numbers make a huge difference to political outcomes. In most of Europe, a three-point error on a party winning 37% would matter far less: the “winning” party would still need to build a coalition to govern the country.
Of course, this would all be irrelevant if polls were banned in the final stages of elections, as some people have proposed. From a commercial point of view, I would be delighted. France used to have a seven-day ban on published polls. But the law can’t easily outlaw the asking of questions, only the reporting of results. So French pollsters used to charge a fortune for private polls for rich clients who wanted information that the wider public was denied.
So it’s not to protect YouGov’s revenues that I oppose a ban. My objection is more fundamental. Polls, for all their faults, provide the most scientific way to listen to what voters think. Democracy needs communication between politicians and voters. There are many channels of such communication. Without polls, people with bad views and deep pockets would have one less check on their malevolence; and politicians would be more likely to exaggerate support for their views. To adapt what Churchill famously said about democracy, opinion polls are the worst way to find out what voters think – except for all the others.