The polls have shifted no more than usual, but the result may yet be a surprise
One of the many strange things about this volatile campaign is how little it has actually changed the fundamentals that held when it started. A modest rise in the polls for the Lib Dems between the start and the end of a campaign (up about 4 points) is a usual feature, and in 2010 this is what has happened, albeit via a big mid-campaign surge.
The findings about what people think of the party leaders also show a fair amount of continuity, despite the first Prime Ministerial debates in a British election. The main change is that more people have a high opinion of Nick Clegg than before, albeit mostly on the softer criteria of ‘charismatic’ (up from 12 per cent to 45 per cent over the campaign) and ‘in touch with ordinary people’ (up from 24 to 37 per cent). Neither Cameron’s nor Brown’s ratings (both pretty poor) moved much, the biggest change being that more people now consider Brown good in a crisis (up from 18 per cent to 24 per cent). Compared with past mid-campaign movements in favour of Neil Kinnock and John Major, opinion about the Labour and Conservative leaders did not shift.
The post-debate polls, woefully misreported for the most part, confirmed merely that people thought the leader of the party they intended to vote for anyway ‘won’ (whatever ‘won’ means in a debate), but that most people were impressed by Clegg. The debates therefore amplified the usual process of the Lib Dems gaining from equal broadcast time, and compressed it into the few days after the first debate. Despite the media obsession with process, the debates did seem to pique the interest of voters and will have contributed to what seems likely to be a respectable turnout.
The debate polls were an example of how polls can be misused; but at a more general level, can one trust opinion polls at all? One of the more foolish objections to opinion polls is that each one asks only about a thousand people for their views. How can that possibly be representative of an electorate of 45 million? The science of statistics has a well-established answer: a small sample is enough to get the answer right to within a few points, provided that the sample is representative of the whole. A common analogy is that you can tell how salty a huge vat of soup is by tasting a teaspoonful, provided that the vat has been stirred properly.
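The arithmetic behind that answer is worth spelling out: for a simple random sample, the margin of error depends on the size of the sample, not the size of the population. A minimal sketch (the function name and numbers are illustrative, using the standard 95 per cent confidence formula):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a simple random
    sample of size n at observed share p (p = 0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# A poll of ~1,000 respondents is accurate to about +/-3 points,
# whether the electorate numbers 45 thousand or 45 million.
print(round(margin_of_error(1000), 1))  # about 3.1
```

Note that quadrupling the sample only halves the margin of error, which is why pollsters settle on samples of roughly a thousand: beyond that, extra interviews buy very little extra precision.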
However, stirring the soup is an increasingly delicate art. It is remarkable to look back at how opinion polling worked in the 1960s and 1970s. It was mostly done through face-to-face interviews, and it got the results more or less dead on (with the notable exception of 1970). Despite its unsophisticated methodology, it worked until another surprise election result in 1992, when the polls showed the parties level pegging but the Conservatives were actually clearly ahead when the votes were counted (by 7.5 points).

Since then, polling companies have tried ever more sophisticated mechanisms to get representative samples. The obstacles are formidable. Turnout used to be reliably around 75 per cent, and was much the same regardless of class or region. Now it varies wildly: 72 per cent in 1997, around 60 per cent in the last couple of elections, and probably higher today. More people vote by post. More people are difficult to reach because they work long hours or live in gated communities. There are more parties in the game. The technology is constantly changing. To compensate, pollsters re-weight the raw figures so that the sample matches the electorate. It is a thing of wonder and beauty that they got it as right as they did in 2005, and that YouGov called the 2008 London election so accurately. But the electorate is a moving target, and at some point the weightings will go wrong. We shall know tomorrow whether the eve-of-election consensus of the polls is right or not.
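The re-weighting step can be sketched very simply. This is not any pollster's actual method (real schemes weight on many variables at once, plus likelihood to vote); it is a toy illustration of the basic idea, with made-up groups and numbers: each respondent is weighted by the ratio of their group's known population share to its share of the raw sample.

```python
# Toy post-stratification weighting. Groups and figures are invented;
# real pollsters weight on age, class, region, past vote and more.
population_share = {"young": 0.30, "old": 0.70}  # known from census data

# Each respondent: (demographic group, intends to vote for party X?)
sample = [
    ("young", True), ("young", True), ("young", False), ("young", True),
    ("old", False), ("old", True), ("old", False), ("old", False),
    ("old", True), ("old", False),
]

n = len(sample)
sample_share = {g: sum(1 for s, _ in sample if s == g) / n
                for g in population_share}
# Weight = population share / sample share, so over-represented
# groups count for less and under-represented ones for more.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

raw_support = sum(1 for _, v in sample if v) / n
weighted_support = sum(weights[g] for g, v in sample if v) / n
print(f"raw: {raw_support:.0%}, weighted: {weighted_support:.0%}")
```

Here the raw sample over-represents the young, so weighting pulls the headline figure down. The fragility the paragraph describes lives in `population_share` and its many real-world cousins: if the assumptions about who will actually turn out are wrong, the weights correct towards the wrong target.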