Since the last Poll of Polls update, there have been four new Scottish subsamples published, all of which have shown the SNP in the lead by varying degrees. However, that's had a negligible effect on the rolling average, other than the fact that UKIP have overtaken the Liberal Democrats. The latest update is based on eight polls - one full-scale Scottish poll from Panelbase, four YouGov subsamples, two Populus subsamples and one Ashcroft subsample. I still can't provide a figure for the Greens, because they were absent from the Panelbase datasets.
Scottish voting intentions for the May 2015 UK general election :
SNP 35.8% (+0.2)
Labour 31.4% (+0.1)
Conservatives 17.7% (+0.5)
UKIP 5.5% (+0.1)
Liberal Democrats 5.1% (-0.7)
(The Poll of Polls uses the Scottish subsamples from all GB-wide polls that have been conducted entirely within the last seven days and for which datasets have been provided, and also all full-scale Scottish polls that have been conducted at least partly within the last seven days. Full-scale polls are given ten times the weighting of subsamples.)
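For anyone curious about the mechanics, the weighting rule described in that note can be sketched in a few lines of code. The poll figures below are invented placeholders rather than the real datasets - only the 10:1 ratio between full-scale polls and subsamples comes from the note above.

```python
# A minimal sketch of the Poll of Polls weighting rule: full-scale
# Scottish polls count ten times as much as GB-wide subsamples.
# The sample figures are illustrative, not the actual poll numbers.

def poll_of_polls(polls):
    """Weighted average of party shares across a list of polls."""
    parties = polls[0]["shares"].keys()
    total_weight = sum(10 if p["full_scale"] else 1 for p in polls)
    averages = {}
    for party in parties:
        weighted = sum(
            (10 if p["full_scale"] else 1) * p["shares"][party]
            for p in polls
        )
        averages[party] = round(weighted / total_weight, 1)
    return averages

polls = [
    {"full_scale": True,  "shares": {"SNP": 36.0, "Labour": 31.0}},  # e.g. a Panelbase poll
    {"full_scale": False, "shares": {"SNP": 34.0, "Labour": 33.0}},  # e.g. a YouGov subsample
]
print(poll_of_polls(polls))  # the full-scale poll dominates the average
```

As the example shows, a single full-scale poll will pull the average much closer to its own figures than any one subsample can.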
I've been struck over the last couple of weeks by the number of people who have said some variation on the following : "The referendum taught me that opinion polls are generally pretty accurate." Which always leaves a part of me thinking - did you really need the referendum to tell you that? As I discussed in a lengthy post back in April, polls in the western world have a long track record of being reasonably accurate, usually at least to within a few percentage points. That doesn't mean we should treat them as a God, or that we shouldn't watch like a hawk for any flaws in their methodologies that might lead them to be less accurate than usual, but the huge number of people who, prior to the referendum, were chanting the mantras "I never look at the polls, they're all rubbish" and "the only poll that matters is on September the 18th" could have saved themselves a big shock if they'd just taken a sober look at the reasonably healthy success rate of polling firms in the past. There was a kind of mythology doing the rounds that the 2011 election somehow proved that polls were completely useless, but in fact (with the important exception of YouGov) most firms were fairly close to the mark with their final call in 2011, albeit only on the constituency vote.
I've also noticed a few people in the comments section of this blog saying that they don't want to listen to any suggestions that support for the SNP might be slightly understated in the current polls, because similar suggestions about the Yes vote prior to the referendum proved to be wrong. To a limited extent that chimes with the remarks of a certain Mr Neil Edward Lovatt, a self-styled hot-shot "risk assessor" who famously couldn't even assess the risk of someone being attacked by alligators in Ireland. I had to mute him on Twitter a few weeks before the referendum because his personalised trolling campaign was wasting far too much of my time, so I haven't been following what he's said about me recently, but from replies that I've seen other people send to him, it's pretty obvious that he's developed a weird obsession with trying to trash my reputation as a "pundit" - on the basis that my "predictions" were supposedly proved wrong. Rather pathetically, I saw a Yes supporter gushing about Lovatt's genius on the day after polling : "Oh, I should have listened to you all along, it wasn't what I wanted to hear, but your analysis was always bang on the money!"
You see, unlike me, Lovatt did actually make predictions about the referendum - and he did it based not on polls, but on the "odds market", i.e. by looking at movements on the betting exchanges. When challenged about the shortcomings of this rather dubious approach, he repeatedly asserted that the odds market is an infallible predictor (it seems that, unlike the polls, the markets really are a God). He failed to provide any evidence for this extraordinary claim, and instead rubbished anyone questioning the truth of what he was saying as an idiot who self-evidently didn't know the first thing about the subject.
Unfortunately for Lovatt, although I'm not really a gambling man, I was a regular for several years on Political Betting, so I do actually have a reasonably good grounding in the betting markets and what can shift them. The one narrow sense in which he's right is that the odds market can sometimes offer the earliest indication of the results of an embargoed poll, because those who have been given sight of it in advance might use their knowledge to make a profit. (A similar example is that everyone knew that Matt Smith had been cast as Doctor Who several hours before the announcement, because he came out of absolutely nowhere to become the bookies' favourite.) But the operative word is 'can'. Political betting is particularly prone to snowball effects - punters are on the constant lookout (just like Lovatt) for movements in the markets that might be caused by bets from people with inside information, and if they think they spot a clue, they're likely to pile in very quickly, thus moving the markets even further in the same direction. So you can end up with dramatic and seemingly significant shifts based on nothing more than guesswork and a herd-like instinct.
This is an even greater problem when you're trying to use the odds market to predict the result of an actual referendum, rather than merely a poll. Even if you spot something that you think might be an indication of inside information, just how much use is that inside information anyway? Nobody literally knew the referendum result in advance - at best they would have had knowledge of private polling, canvass returns and postal vote sampling. Given that the public polls were tightly bunched together in the lead-up to polling day, it's highly unlikely that the private polls were showing anything different. Canvass returns carry a huge health warning, because people often tell canvassers what they want to hear, so anyone reading too much into Better Together's collated figures would have been very foolish. And postal vote sampling would have been of limited use, because it was always speculated that postal voters were disproportionately likely to be in the No column for demographic reasons. So if there were any 'clues' to be found in the betting markets in the days leading up to polling, they were coming from people who were getting carried away with themselves, and who thought they knew far more than they actually did.
And then, most importantly of all, there's the Scottish factor. For obvious reasons, UK betting markets are more likely to be accurate when 'on the ground' information is equally available to punters throughout the UK. That simply isn't the case in a Scottish-only vote, because Scotland has less than 9% of the UK population. We saw a huge split in where the bets were going in this referendum, with Scottish punters overwhelmingly backing Yes, and punters in the rest of the UK backing No. Because of the disparity in population, that led to the odds reflecting what the less-well-informed non-Scottish punters were doing. Just to be clear, people in Scotland weren't backing Yes because they necessarily thought Yes would win - instead they were concluding that it was the value bet because the probability of Yes winning was significantly higher than the odds suggested.
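The "value bet" logic in that last sentence is easy to make concrete with a little arithmetic. The odds and probabilities below are purely illustrative assumptions, not the actual referendum prices:

```python
# Illustrative only: suppose a bookie offers Yes at decimal odds of 5.0,
# which implies a probability of 1/5.0 = 20%. A punter who privately
# reckons the true probability is 35% still expects Yes to LOSE - but
# the bet has positive expected value, so it's the "value" side.

def expected_value(decimal_odds, true_probability, stake=1.0):
    """Expected profit on a back bet at the given decimal odds."""
    win_profit = stake * (decimal_odds - 1)  # profit if the bet comes in
    return true_probability * win_profit - (1 - true_probability) * stake

implied_prob = 1 / 5.0          # 20% - the probability the odds imply
ev = expected_value(5.0, 0.35)  # punter's own estimate of Yes: 35%
print(round(ev, 2))             # 0.35 * 4 - 0.65 * 1 = 0.75 per £1 staked
```

In other words, a punter can rationally pile money on an outcome they expect to lose, which is exactly why a flood of Scottish money on Yes told us nothing about who Scottish punters thought would actually win.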
The most extreme example of the Scottish factor in action came in 2007 - and I know that Lovatt is completely unaware of this, because he didn't have a clue what I was talking about when I raised it with him. As you'll recall, the final results from that year's Holyrood election didn't emerge until 6pm on the day after polling, due to catastrophic technical problems with the counting machines. Throughout most of the intervening period, the betting markets remained open. They showed Labour with a greater than 90% chance of winning, and continued to do so several hours after BBC Scotland's Brian Taylor had publicly announced that the running tallies suggested the SNP were going to sneak it. Punters with London-centric assumptions about where to look for clues were missing what was right under their noses. It was an absolutely bizarre spectacle, and one that should completely destroy any notion that the odds market can be used as a reliable predictor of Scottish elections.
Yes, on this particular occasion, the odds market successfully 'predicted' the result of a two-horse race. But then it would have had a 50% chance of successfully 'predicting' the result of a coin toss.
So much for Lovatt's claim to have uncannily predicted the result with his seer-like talents. But what about me, and other regulars on this blog? Well, as already noted, I was very circumspect, and never made any sort of prediction at all. (Yes, my headlines were often apocalyptic in tone, but as most of you noticed they were intended as ironic tributes to the mainstream media's poll-related headlines.) What I and others did was point out that this referendum posed unusual challenges, which made it particularly difficult for pollsters to 'work backwards' to ensure their methodology was right. We raised legitimate questions about the accuracy of the three No-friendly pollsters, namely YouGov, TNS and Ipsos-Mori. We speculated about the reasons why they were showing a significantly lower Yes vote than the other three firms - a highly unusual disparity which meant by definition that at least one group of firms was getting it completely wrong. With YouGov, we attacked the logical basis for the so-called "Kellner Correction", and with Ipsos-Mori we were concerned about the reliability of landline-only polling.
After the polls dramatically converged a couple of weeks before polling, there was no longer any rational reason to suppose that some of the polls might be understating the Yes vote by an extreme amount, because all methodologies were leading to roughly the same conclusion. However, it was still possible that a systemic across-the-board problem was leading to the polls being slightly off the mark in either direction. There was just as much of a chance that they were overestimating Yes as that they were overestimating No, but nevertheless, with the average Yes vote standing on 48% or 49% in the run-up to polling day, that was the basis on which I said that Yes had a real chance of winning - and, oddly enough, in saying that I found myself in complete agreement with both Peter Kellner and Anthony Wells. As it turned out, it looks like there was indeed a small systemic problem with the polls, and that they were overestimating the Yes vote by a smidgeon (even the YouGov exit poll had Yes on 46%).
But what we still don't know is whether the No-friendly or the Yes-friendly pollsters were closer to the truth prior to the Great Convergence. It would help enormously if YouGov retrospectively published their secret Kellner Correction figures, because then we would see whether the apparent late surge for Yes was heavily concentrated among the small group of respondents who were always drastically upweighted by the correction. In the meantime, I would suggest that it's extremely difficult to sustain the argument that those of us who cast doubt on the accuracy of the No-friendly firms have been proved wrong. Just a few weeks before polling, YouGov were showing Yes on 39-40%, and had never shown a higher figure than 42%. Peter Kellner was busily telling us that it was even worse than that, because Don't Knows could always be expected to break for No - and yet Yes ended up with 45%. Very similar patterns were seen with Ipsos-Mori and TNS.
And what does this tell us about the potential accuracy of Scottish voting intention polling for the general election? Not a lot, because at the moment there's no obvious divergence between firms. But as I observed the other day, the point I raised about the full-scale Panelbase poll potentially underestimating the SNP vote for Westminster is a completely new issue, because the firm are using a weighting procedure that they didn't employ for any of their referendum polls, and one that is pretty much discredited throughout the whole industry.