Those of you who are a bit older than me will probably recall the very likeable David Butler, who was the BBC's resident psephologist on every general election results programme between 1950 and 1979. (The rest of us can catch up via the wonders of YouTube, although unfortunately no recording exists of 1950 or 1951.) He invented the whole concept of 'swing', and in the days before exit polls was famous for extrapolating the swing from the early declarations to give viewers the first indication of which party was likely to take power.
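(For anyone hazy on the mechanics: 'swing' in Butler's sense is simply the average of one party's gain and the other party's loss between two elections. A minimal sketch, with entirely made-up vote shares -

```python
# Butler (two-party) swing: the average of one party's gain and the
# other party's loss between two elections, in percentage points.
def butler_swing(con_now, con_then, lab_now, lab_then):
    return ((con_now - con_then) - (lab_now - lab_then)) / 2

# Made-up figures: Con up 4 points and Lab down 2 points
# equals a 3% swing from Labour to the Conservatives.
print(butler_swing(40, 36, 34, 36))  # 3.0
```

Apply that figure uniformly to every seat's previous result and you get the sort of early-night projection Butler was famous for.)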
I was intrigued to spot a lengthy quote from Butler in Antifrank's new article at Stormfront Lite, in which he argues that this year's polling disaster was not some kind of weird exception, and that the polls in fact get it wrong at general elections more often than not. I think his broad point is absolutely right, but he seems to be over-egging the pudding with some of his specific examples...
"In three elections (’45, ’66 and ’97) there was a Labour victory of totally unexpected proportions."
In 1997, the polls actually overestimated Labour's lead rather than underestimating it. If the scale of Blair's majority took most people by surprise, that was because 1992 was fresh in their minds, and they simply refused to believe the evidence of their own eyes.
In 1945, polling was in its infancy, and the expectation that Churchill would be rewarded for leading Britain through the war had nothing whatever to do with polls. (There was some polling evidence of a handsome Labour lead, but it was largely ignored.)
"In three others (’50, ’64 and October ’74) an expected Labour victory was achieved by only a single-figure margin."
Again, it's doubtful that the expectations of a solid Labour majority in 1950 had much to do with opinion polls.
"And in two elections (February ’74 and 2010) there was a hung parliament that few anticipated."
Few anticipated a hung parliament in February 1974, but in 2010 the polls pointed overwhelmingly to that outcome. It's true that the betting odds favoured a Conservative overall majority (and apparently Conservative Central Office shared that view), but that was because Tory-leaning punters thought they knew better than the polls. Ironically, they were expecting a 2015-style outcome five years too early.
So, of the general elections that have occurred since opinion polls started to be taken seriously, which ones can be classed as genuine "shocks"? I'd say there are five -
1970 : This is perhaps the all-time classic, because the polls pointed to a comfortable Labour overall majority, but the outcome was a comfortable Conservative overall majority. There was a much bigger risk of that sort of thing happening back in those days - Britain was almost a pure two-party system (the Ulster Unionists still took the Tory whip, and Liberals and nationalists were very few in number). So if the polls were wrong about one party securing a majority, it was fairly likely that the other party would do so.
February 1974 : The polls pointed to a Conservative majority, but Labour emerged by a whisker as the largest single party in a hung parliament, and were able to form a minority government.
October 1974 : The polls pointed to a handsome Labour majority (possibly even a landslide), but in the end Harold Wilson was lucky to scrape the tiniest of majorities, which was soon wiped out by defections and by-election defeats.
1992 : The polls pointed to a hung parliament. It wasn't at all clear whether Labour or the Tories would be the largest single party, but the assumption was that a Labour-led government of some description was likely to emerge, because a Tory-Lib Dem deal seemed highly improbable. But the actual outcome was a modest Conservative overall majority.
2015 : Almost an exact repeat performance of 1992, except that the permutations for the expected hung parliament were far more numerous and complicated. They all proved to be academic as the Tories emerged with a slim outright majority of 12.
So five major shocks in the last twelve elections, which is still a pretty significant proportion. That, of course, is a big part of the reason why "tactical voting on the list" in next year's Holyrood election is such a mug's game. The idea that it's even feasible depends on wildly unrealistic assumptions of extreme polling accuracy. You'd think that people would know better after what happened only five months ago, but apparently not.
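To illustrate the point, here's a minimal sketch of the d'Hondt count used to allocate Holyrood's regional list seats, with entirely hypothetical figures. In the first scenario the last list seat goes to the Tories by a whisker; shift the shares by roughly a point and it goes to the SNP instead - exactly the sort of seat a 'tactical' list vote would cheerfully throw away.

```python
# A minimal sketch of the d'Hondt divisor method used for Holyrood's
# regional list seats. All party figures are hypothetical.

def dhondt(list_votes, constituency_seats, list_seats=7):
    # Each party's list vote is divided by (seats already held + 1);
    # the highest quotient takes the next seat, seven times over.
    seats = dict(constituency_seats)
    for _ in range(list_seats):
        winner = max(list_votes, key=lambda p: list_votes[p] / (seats.get(p, 0) + 1))
        seats[winner] = seats.get(winner, 0) + 1
    # Report only the list seats gained.
    return {p: seats.get(p, 0) - constituency_seats.get(p, 0) for p in list_votes}

cons = {"SNP": 9}  # assume the SNP sweep the constituency seats
print(dhondt({"SNP": 140_000, "Lab": 50_000, "Con": 43_000, "Green": 19_000}, cons))
# {'SNP': 0, 'Lab': 3, 'Con': 3, 'Green': 1} - last seat to Con by a whisker
print(dhondt({"SNP": 143_000, "Lab": 50_000, "Con": 40_000, "Green": 19_000}, cons))
# {'SNP': 1, 'Lab': 3, 'Con': 2, 'Green': 1} - a one-point shift flips it
```

With margins that fine, you'd need the polls to be accurate to within a point or so to game the system safely - the very thing May's polls conspicuously failed to deliver.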
* * *
After quoting Butler, Antifrank goes on to make a series of observations about how opinion polls should be sensibly interpreted. You won't be surprised to hear that I disagree with this one -
"Ignore subsamples. They aren’t weighted and the numbers are so small as to be meaningless (often they are actively misleading). Don’t waste your time on them."
If we had ignored the Scottish subsamples from GB-wide polls at this time last year (as, it has to be said, John Curtice studiously did), we'd have been completely unaware that the SNP surge was taking place. They were showing a very clear trend, and we had no other information to go on - apart from a single Panelbase poll that turned out to be slightly dodgy.
Yes, subsamples have to be treated with extreme caution, and individual subsamples can sometimes be worse than useless. But if there is literally no other data out there, aggregates of subsamples are better than nothing, and can at least give you some kind of vague sense of what's going on. We're in a situation like that right now - we've had no full-scale Scottish polls since Jeremy Corbyn became Labour leader, but the subsamples suggest he hasn't made much of an impact north of the border. We'll discover soon enough whether that's misleading.
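For the record, 'aggregating' means nothing fancier than pooling the subsamples, weighted by their (small) sample sizes. A rough sketch with invented figures -

```python
# Pool Scottish subsamples from several GB-wide polls, weighting each
# by its sample size. All figures here are invented for illustration;
# real subsamples aren't separately weighted, so pooling tames the
# random noise but can't remove any systematic bias.

subsamples = [  # (party shares in %, subsample size)
    ({"SNP": 48, "Lab": 24, "Con": 16}, 150),
    ({"SNP": 52, "Lab": 20, "Con": 15}, 170),
    ({"SNP": 45, "Lab": 26, "Con": 18}, 140),
]

total_n = sum(n for _, n in subsamples)
pooled = {party: sum(shares[party] * n for shares, n in subsamples) / total_n
          for party in subsamples[0][0]}
print({party: round(share, 1) for party, share in pooled.items()})
# {'SNP': 48.6, 'Lab': 23.1, 'Con': 16.2}
```

Even then, a pooled sample of roughly 460 carries a margin of error in the region of four or five points, which is why 'vague sense' is the operative phrase.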