Tuesday, October 7, 2014

Poll of Polls update, and a response to "Alligator Man"

Since the last Poll of Polls update, there have been four new Scottish subsamples published, all of which have shown the SNP in the lead by varying degrees.  However, that's had a negligible effect on the rolling average, other than the fact that UKIP have overtaken the Liberal Democrats.  The latest update is based on eight polls - one full-scale Scottish poll from Panelbase, four YouGov subsamples, two Populus subsamples and one Ashcroft subsample.  I still can't provide a figure for the Greens, because it was absent from the Panelbase datasets.

Scottish voting intentions for the May 2015 UK general election :

SNP 35.8% (+0.2)
Labour 31.4% (+0.1)
Conservatives 17.7% (+0.5)
UKIP 5.5% (+0.1)
Liberal Democrats 5.1% (-0.7)

(The Poll of Polls uses the Scottish subsamples from all GB-wide polls that have been conducted entirely within the last seven days and for which datasets have been provided, and also all full-scale Scottish polls that have been conducted at least partly within the last seven days. Full-scale polls are given ten times the weighting of subsamples.)
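For anyone curious how that weighting plays out in practice, here is a minimal sketch of the calculation described above. The function and the numbers fed into it are my own illustration, not the actual Poll of Polls inputs.

```python
# A minimal sketch of the weighting rule described above: full-scale
# Scottish polls count ten times as much as GB-wide subsamples.
# The figures below are invented for illustration only.

def poll_of_polls(polls):
    """Each entry is (share_in_percent, is_full_scale). Returns the weighted mean."""
    weighted_sum = 0.0
    total_weight = 0.0
    for share, is_full_scale in polls:
        weight = 10.0 if is_full_scale else 1.0
        weighted_sum += weight * share
        total_weight += weight
    return weighted_sum / total_weight

# One full-scale poll at 38% plus three subsamples: the full-scale poll
# carries 10 of the 13 units of weight, so the average sits near 38%.
snp_average = poll_of_polls([(38.0, True), (34.0, False), (30.0, False), (36.0, False)])
print(round(snp_average, 1))  # 36.9
```

The same subsamples without the full-scale poll would average around 33%, which is why a single full-scale poll can dominate an update.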

I've been struck over the last couple of weeks by the number of people who have said some kind of variation on the following : "The referendum taught me that opinion polls are generally pretty accurate."  Which always leaves a part of me thinking - did you really need the referendum to tell you that?  As I discussed in a lengthy post back in April, polls in the western world have a long track record of being reasonably accurate, usually at least to within a few percentage points.  That doesn't mean we should treat them as a God, or not watch like a hawk for any flaws in their methodologies that might lead them to be less accurate than usual, but the huge number of people who prior to the referendum were chanting the mantras "I never look at the polls, they're all rubbish" and "the only poll that matters is on September the 18th" could have saved themselves a big shock if they'd just taken a sober look at the reasonably healthy success rate of polling firms in the past.  There was a kind of mythology doing the rounds that the 2011 election somehow proved that polls were completely useless, but in fact (with the important exception of YouGov) most firms were fairly close to the mark with their final call in 2011, albeit only on the constituency vote.

I've also noticed a few people in the comments section of this blog saying that they don't want to listen to any suggestions that support for the SNP might be slightly understated in the current polls, because similar suggestions about the Yes vote prior to the referendum proved to be wrong.  To a limited extent that chimes with the remarks of a certain Mr Neil Edward Lovatt, a self-styled hot-shot "risk assessor" who famously couldn't even assess the risk of someone being attacked by alligators in Ireland.  I had to mute him on Twitter a few weeks before the referendum because his personalised trolling campaign was wasting far too much of my time, so I haven't been following what he's said about me recently, but from replies that I've seen other people send to him, it's pretty obvious that he's developed a weird obsession with trying to trash my reputation as a "pundit" - on the basis that my "predictions" were supposedly proved wrong.  Rather pathetically, I saw a Yes supporter gushing about Lovatt's genius on the day after polling : "Oh, I should have listened to you all along, it wasn't what I wanted to hear, but your analysis was always bang on the money!"

You see, unlike me, Lovatt did actually make predictions about the referendum - and he did it based not on polls, but on the "odds market", ie. by looking at movements on the betting exchanges.  When challenged about the shortcomings of this rather dubious approach, he repeatedly asserted that the odds market is an infallible predictor (it seems that, unlike the polls, the markets really are a God).  He failed to provide any evidence for this extraordinary claim, and instead rubbished anyone questioning the truth of what he was saying as an idiot who self-evidently didn't know the first thing about the subject.

Unfortunately for Lovatt, although I'm not really a gambling man, I was a regular for several years on Political Betting, so I do actually have a reasonably good grounding in the betting markets and what can shift them.  The one narrow sense in which he's right is that the odds market can sometimes offer the earliest indication of the results of an embargoed poll, because those who have been given sight of it in advance might use their knowledge to make a profit.  (A similar example is that everyone knew that Matt Smith had been cast as Doctor Who several hours before the announcement, because he came out of absolutely nowhere to become the bookies' favourite.)  But the operative word is 'can'.  Political betting is particularly prone to snowball effects - punters are on the constant lookout (just like Lovatt) for movements in the markets that might be caused by bets from people with inside information, and if they think they spot a clue, they're likely to pile in very quickly, thus moving the markets even further in the same direction.  So you can end up with dramatic and seemingly significant shifts based on nothing more than guesswork and a herd-like instinct.

This is an even greater problem when you're trying to use the odds market to predict the result of an actual referendum, rather than merely a poll.  Even if you spot something that you think might be an indication of inside information, just how much use is that inside information anyway?  Nobody literally knew the referendum result in advance - at best they would have had knowledge of private polling, canvass returns and postal vote sampling.  Given that the public polls were tightly bunched together in the lead-up to polling day, it's highly unlikely that the private polls were showing anything different.  Canvass returns carry a huge health warning, because people often tell canvassers what they want to hear, so anyone reading too much into Better Together's collated figures would have been very foolish.  And postal vote sampling would have been of limited use, because it was always speculated that postal voters were disproportionately likely to be in the No column for demographic reasons.  So if there were any 'clues' to be found in the betting markets in the days leading up to polling, they were coming from people who were getting carried away with themselves, and who thought they knew far more than they actually did.

And then, most importantly of all, there's the Scottish factor.  For obvious reasons, UK betting markets are more likely to be accurate when 'on the ground' information is equally available to punters throughout the UK.  That simply isn't the case in a Scottish-only vote, because Scotland has less than 9% of the UK population.  We saw a huge split in where the bets were going in this referendum, with Scottish punters overwhelmingly backing Yes, and punters in the rest of the UK backing No.  Because of the disparity in population, that led to the odds reflecting what the less-well-informed non-Scottish punters were doing.  Just to be clear, people in Scotland weren't backing Yes because they necessarily thought Yes would win - instead they were concluding that it was the value bet because the probability of Yes winning was significantly higher than the odds suggested.

The most extreme example of the Scottish factor in action came in 2007 - and I know that Lovatt is completely unaware of this, because he didn't have a clue what I was talking about when I raised it with him.  As you'll recall, the final results from that year's Holyrood election didn't emerge until 6pm on the day after polling, due to catastrophic technical problems with the counting machines.  Throughout most of the intervening period, the betting markets remained open.  They showed Labour with a greater than 90% chance of winning, and continued to do so several hours after BBC Scotland's Brian Taylor had publicly announced that the running tallies suggested the SNP were going to sneak it.  Punters with London-centric assumptions about where to look for clues were missing what was right under their noses.  It was an absolutely bizarre spectacle, and one that should completely destroy any notion that the odds market can be used as a reliable predictor of Scottish elections.

Yes, on this particular occasion, the odds market successfully 'predicted' the result of a two-horse race.  But then it would have had a 50% chance of successfully 'predicting' the result of a coin toss.

So much for Lovatt's claim to have uncannily predicted the result with his seer-like talents.  But what about me, and other regulars on this blog?  Well, as already noted, I was very circumspect, and never made any sort of prediction at all.  (Yes, my headlines were often apocalyptic in tone, but as most of you noticed they were intended as ironic tributes to the mainstream media's poll-related headlines.)  What I and others did was point out that this referendum posed unusual challenges, which made it particularly difficult for pollsters to 'work backwards' to ensure their methodology was right.  We raised legitimate questions about the accuracy of the three No-friendly pollsters, namely YouGov, TNS and Ipsos-Mori.  We speculated about the reasons why they were showing a significantly lower Yes vote than the other three firms - a highly unusual disparity which meant by definition that at least one group of firms was getting it completely wrong.  With YouGov, we attacked the logical basis for the so-called "Kellner Correction", and with Ipsos-Mori we were concerned about the reliability of landline-only polling.

After the polls dramatically converged a couple of weeks before polling, there was no longer any rational reason to suppose that some of the polls might be understating the Yes vote by an extreme amount, because all methodologies were leading to roughly the same conclusion.  However, it was still possible that a systemic across-the-board problem was leading to the polls being slightly off-mark in either direction.  There was just as much of a chance that they were overestimating Yes as that they were overestimating No, but nevertheless with the average Yes vote standing on 48% or 49% in the run-up to polling day, that was the basis on which I said that Yes had a real chance of winning - and, oddly enough, in saying that I found myself in complete agreement with both Peter Kellner and Anthony Wells.  As it turned out, it looks like there was indeed a small systemic problem with the polls, and that they were overestimating the Yes vote by a smidgeon (even the YouGov exit poll had Yes on 46%).

But what we still don't know is whether the No-friendly or the Yes-friendly pollsters were closer to the truth prior to the Great Convergence.  It would help enormously if YouGov retrospectively published their secret Kellner Correction figures, because then we would see whether the apparent late surge for Yes was heavily concentrated among the small group of respondents who were always drastically upweighted by the correction.  In the meantime, I would suggest that it's extremely difficult to sustain the argument that those of us who cast doubt on the accuracy of the No-friendly firms have been proved wrong.  Just a few weeks before polling, YouGov were showing Yes on 39-40%, and had never shown a higher figure than 42%.  Peter Kellner was busily telling us that it was even worse than that, because Don't Knows could always be expected to break for No - and yet Yes ended up with 45%.  Very similar patterns were seen with Ipsos-Mori and TNS.

And what does this tell us about the potential accuracy of Scottish voting intention polling for the general election?  Not a lot, because at the moment there's no obvious divergence between firms.  But as I observed the other day, the point I raised about the full-scale Panelbase poll potentially underestimating the SNP vote for Westminster is a completely new issue, because the firm are using a weighting procedure that they didn't employ for any of their referendum polls, and one that is pretty much discredited throughout the whole industry.


  1. Another example of what you are talking about re: betting markets is the odds for the next manager of a football team (club or international). You often get dramatic shifts in the odds that reflect journalist gossip and it comes to nothing, either because the original rumour was bollocks or the deal can't be concluded for some reason (money, bad interview). The odds at the start of a manager appointment process very rarely reflect the actual probability of the people listed being appointed.

  2. Couldn't agree more about the polls.

    You didn't see it so much on here for obvious reasons, but Facebook etc were full of nutters predicting 60/40, 70/30 for Yes, and that the poll figures were either total guesswork, or actively made up for propaganda purposes. I expect there's a significant overlap between these people and those who believe the referendum result was wholly fraudulent.

  3. Another point about the betting odds is the possibility that money was being put on No precisely to skew the odds. Of course it can't be proven, but when bets of £800,000 are being reported on one side of a binary choice which is clearly not a racing certainty and the odds are extremely poor, one may at least be suspicious.

  4. "And postal vote sampling would have been of limited use, because it was always speculated that postal voters were disproportionately likely to be in the No column for demographic reasons." Not really. The campaigns know, if they wanted to find out, the detailed demographics of the postal vote population. With the estimates of postal vote breakdowns by constituency and the demographic breakdown of postal votes in each constituency (gender, SIMD, even age) and the demographics of the national population, either campaigbn should have been able to predict the result. The unknowable in this is due to temporal swings not demographics.

  5. "either campaign should have been able to predict the result"

    Only up to a point - the disparity between the Yes/No breakdowns in different demographic groups could only be very roughly estimated, so any extrapolation would have been highly speculative - and in any case would (as you hint in your final sentence) have been conditional on there being no swing after postal voting got underway.

  6. James, one thing that has been bugging me, and you might not know, but why was the private polling of both campaigns saying Yes was in the lead, when the public polling wasn't?

    Were they polling a bigger number of people?

  7. Chalks : All we know for sure (or more or less for sure) is that there was one private poll for the No campaign that gave Yes a 53-47 lead, and that was what led to the blind panic, the shock and awe campaign, and ultimately The Vow. The impression I get is that this happened at around about the same time as the YouGov poll put Yes 51-49 in the lead, so it wasn't necessarily out of line with what the public polls were showing.

    As for the Canadian research for the Yes campaign that gave Yes a 54-46 lead, I'm not sure whether that was conventional polling or not. I'd be interested to know which methods they used.

  8. http://www.callfirstcontact.com/

    Looks like a mixture of private polling and the nationbuilder/activate system....obviously analysed the swing to Yes that occurred, but - and this is a guess - they wouldn't have had the time to pick up on the loss of confidence on the doorsteps after the vow, as canvassers were concentrating on undecideds favouring Yes on a scale of 7,8,9....not nailed-on Yessers.

    We all know Yes voters that voted No, and people leaning to Yes that voted No. It just wasn't possible to update this information - that's my logical assumption as to why Yes thought they had it....not enough time to work out that people had moved.

  9. All quite strange. A 'correction' being used? Methodology not released. If the polls are 'nearly' right on the day before and 'very nearly' right the day after, what use are they? The referendum was won with turnout. If Glasgow had secured 100% turnout, how much closer might it have been? In Inverclyde it was less than 100 votes and STILL folk complained the next day despite not having bothered to vote. So worried by this were local authorities that they actually PUBLICISED the fact they'd use the new voters' info to check for old debts, having ALREADY written out to registered voters telling them their Council Tax info didn't match their address. In the face of this I'm surprised anyone voted. A case of 'if you can't win, disenfranchise the voters', as you could argue they do in the USA. The YES parties will have to campaign as hard for voter registration for the next election. The NO vote got 55.3% of an 84% who turned out of the 97% who registered to vote. So that's about 45% of all of those who could've registered then voted! It's food for thought...
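The turnout arithmetic in the comment above can be checked directly - multiplying the three quoted percentages gives roughly 45% of the potential electorate:

```python
# Checking the figures quoted in the comment: No's 55.3% share,
# of an 84% turnout, of a 97% registration rate.
no_share = 0.553
turnout = 0.84
registration = 0.97

no_of_potential_electorate = no_share * turnout * registration
print(round(no_of_potential_electorate * 100, 1))  # 45.1
```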

  10. Really interesting reading.

    Chalks, interested in yes voters and yes leaning voters that voted no, I have not met anyone who did that.

    We live in a vast land so I'm not doubting at all, but curious to hear their reasons. Did "the vow" swing it for them?

    I know of 3 no/on the fence voters who on polling day said ''Ah flip it"and went yes

  11. Slightly off topic, but are you going to analyse the latest YouGov research on which subgroups voted which way? For example, this blows apart Ashcroft's suggestion (with his Mickey Mouse subsample) that the 16-20 year olds voted Yes overwhelmingly.
    Also can you tell me where the individual referendum constituency voting patterns can be found? I've seen a number of places quoted, so presume this is freely available?

  12. Fourfolksache : I didn't go into it in huge detail, but I did write a post comparing the age-related findings of Ashcroft and YouGov. I pointed out that they both agree that a majority of under-55s voted Yes (the only way it could be argued that YouGov don't show that is by assuming that 55-59 year olds broke for Yes, which seems unlikely). Ashcroft's 'Mickey Mouse' subsample was actually for 16 and 17 year olds only - if you lump them in with 18-24 year olds, it's a perfectly normal sample size.

    As far as I'm aware, constituency results have only been released at the discretion of councils.

  13. "There was a kind of mythology doing the rounds that the 2011 election somehow proved that polls were completely useless, but in fact (with the important exception of YouGov) most firms were fairly close to the mark with their final call in 2011, albeit only on the constituency vote."

    Ah but, their final call came within a few days of the actual vote when everything was becoming polarised. Weeks and months out the polls were no more than guesses.

    James I do wish you would stop bigging up the pollsters. We KNOW they give "reasonable results within a few % points of the actual result". But in most votes a "few % points" are crucial to the outcome.

  14. "Weeks and months out the polls were no more than guesses."

    They weren't guesses - they were snapshots of opinion, based on (for the most part) the same methodology used in the later polls. The alternative interpretation is the one put forward by Scottish Skier, who thinks that people weren't being honest with the pollsters until a very late stage. But either way, the earlier polls were still scientifically conducted, and weren't guesswork.

    And I'm scarcely one for bigging up the pollsters - I spend half my life criticising them, and I've done so again in this very post.

  15. James, I think Fourfolksache is referring to the YouGov research publicised by Ailsa Henderson (Edinburgh Uni). It found that each age bracket up to 50 (16-29, 30-39 and 40-49) supported independence, but the over-70s in particular were very heavily against. It also found that within the 16-29 group the majority for independence pretty much all came from 25-29 year olds; 16 & 17s were slightly against and 18-24s were slightly in favour.

    She believes that if the campaign had gone for another week the outcome would have been closer, because the economic arguments for no were losing power as it went on.

    12 minute youtube video in this article:


  16. This was a more difficult poll than usual

    High turnout, 16 and 17 year olds voting, 1st time Scotland's been asked

    45% Yes is solid ground for a federal Union at least

  17. We had a 10% lead in eve of vote polling and the BritNats ended up with a poll lead of 10% - utter B.S. - they stole the vote and it was a 3 pronged attack done like this :

    a) MI5 hacked into all 32 local authority electoral registries and added the necessary many many thousands of phoney names and addresses to the rolls, and then downloaded the postal votes, signed them with phoney signatures and sent them off.

    b) BritNat Labourites did their usual of collecting all the postal votes in old folks homes and deprived areas and chucked away the YESs in the bin.

    c) BritNats (especially English resident BritNats) registered aunties, uncles, cousins etc as voters and gave them a NO postal/proxy vote.

    We lost this before the first vote was cast.

  18. James, please delete drivel like the comment at 21:46.

  19. "But either way, the earlier polls were still scientifically conducted, and weren't guesswork."

    Alright I'll grant you they were conducted scientifically. But the results which were obtained were little better than would be expected from guesswork. RIRO.

    And explain please how 3 Indy Ref polls by YOUGOV within a week gave such different answers. Covering their asses comes to mind?

  20. "But the results which were obtained were little better than would be expected from guesswork."

    Why do you say that?

    "And explain please how 3 Indy Ref polls by YOUGOV within a week gave such different answers."

    Which ones? There was the sudden surge, which may or may not be plausible, but after that all the results were within the margin of error.

  21. If you are now claiming polls are 'correct' when results lie within the window of margin of error, shouldn't you and the pollsters show their results with +/-3% attached, or whatever figure is correct depending on sample size, and presumably other additions for non-statistical errors?

    And I would be grateful if you could clarify for me...do internal bits of poll data, when shown broken down in the tables, have the same +/-3% margin, or does each bit have a different error margin depending on that bit's sample size?

  22. "If you are now claiming polls are 'correct' when results lie within the window of margin of error"

    That's a more complicated issue, and that's not exactly what I am saying. If the true position is 36%, a pollster shouldn't be getting a sequence of results like : 39%, 38%, 39%, 39%, 38%, even though those are all strictly speaking within the MOE. But a sequence of 35%, 37%, 39%, 34%, 36% would be perfectly normal.

    Margins of error for subsamples are of course much bigger, and if those subsamples aren't separately weighted then the MOE isn't strictly calculable.
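To put rough numbers on the sample-size point in the reply above: the standard formula for a proportion's 95% margin of error is 1.96 × √(p(1−p)/n). This is the textbook simple-random-sample figure - weighted polls have a somewhat larger effective margin, so treat it as a lower bound - and the sample sizes below are illustrative assumptions rather than any particular pollster's.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error (as a fraction) for proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1.0 - p) / n)

# A full-scale poll of ~1000 respondents gives roughly +/- 3 points...
print(round(margin_of_error(0.5, 1000) * 100, 1))  # 3.1
# ...while a Scottish subsample of ~150 is closer to +/- 8 points.
print(round(margin_of_error(0.5, 150) * 100, 1))   # 8.0
```

The margin is widest at p = 0.5 and shrinks slightly for shares further from 50%, which is why the +/-3% figure quoted for ~1000-respondent polls is the worst case.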

  23. Morn's Yougov:

    47% SNP
    26% Lab
    14% Con
    6% Lib

    RE MoE...

    Folks should note that margin of error is precision, not accuracy.

    So, +/-3% means two polls conducted on two demographically matching groups on the same day using the same methods should yield the same VI answers within +/-3%. That doesn't mean the answers are correct; they could be 10% out. The 10% would be the accuracy error.

    I could have a temperature probe that precisely gives me the boiling point of water as 75 C +/-0.003 every time. Very precise, but totally inaccurate.

    As there is no way to measure accuracy (we'd need an election for that at least), we're left with precision alone.
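The precision-versus-accuracy distinction in the comment above can be illustrated with a toy simulation. The 'true' Yes level and the systematic bias below are invented numbers, chosen purely to show how repeat polls can agree closely with each other while all being wrong in the same direction.

```python
import random

random.seed(1)
TRUE_YES = 0.45   # hypothetical true level of support
BIAS = 0.04       # hypothetical systematic error in the sampling frame

def biased_poll(n=1000):
    """Simulate one poll whose methodology systematically under-samples Yes voters."""
    effective = TRUE_YES - BIAS
    return sum(random.random() < effective for _ in range(n)) / n

results = [biased_poll() for _ in range(10)]
spread = max(results) - min(results)                  # small: the polls are precise
mean_error = sum(results) / len(results) - TRUE_YES   # around -0.04: they are inaccurate
```

The ten simulated polls cluster within a few points of each other (precision), yet every one of them understates the true figure (inaccuracy) - and nothing in the polls themselves reveals the bias.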

  24. Also:

    Thinking about Labour leader Ed Miliband, which of the following qualities do you think he has? Please tick all that apply.

    15% In touch
    15% Honest
    12% Sticks to what believes in
    3% Decisive
    4% Strong
    2% Good in a crisis
    1% A natural leader
    2% Charismatic

    64% None of these

    As usual, Dave doing a little better on 58% None. Clegg on 76% None.


  25. Helpful though it is to make the distinction between precision and accuracy...actually it's neither...it's reliability.

  26. Kevin, yes the vow was a reason - 1st time voters, or ones who had not voted in a while, were swayed by this. They want change but were never fully convinced, due to media attacks. Offer them more powers and they will go for that.

    My home owning, Yes leaning but voted No mates were swayed by interest rates, which was never fully explained by anyone. We need to get smarter.