Tuesday, February 10, 2015

More on the TNS-BMRB poll

I've been reading through some of the comments on UK Polling Report about the new TNS poll, and Roger Mexico (who has occasionally posted here) is largely dismissing the concerns over the unusual weighting procedure that was used.  He's a very intelligent guy, so it's not completely impossible that something is going over my head, but on the face of it his argument seems full of holes to me -

* He points out that the Ashcroft constituency polls used 2010 vote recall weighting only, which ought to be even worse than the combination of 2010 and 2011 vote recall that TNS used - and yet Ashcroft still painted a devastating picture for Labour.  But so what?  Without the dubious downweighting of the SNP vote, the Ashcroft polls would have been even worse for Labour (in all likelihood the SNP would have been ahead in Glasgow North-East along with the other fifteen seats surveyed), and who is to say that wouldn't have been a more accurate finding?  Simply saying "crikey, these numbers look quite bad enough as they are" is scarcely a guarantee of accuracy.

* He claims that we "know" SNP supporters are more eager to respond to polls than others, which will lead to them being over-represented in Ipsos-Mori polls, due to Ipsos-Mori being the only firm that doesn't weight by vote recall at all.  Therefore, at least some of TNS-BMRB's downweighting of the SNP based on vote recall is entirely justified.  But do we really "know" that?  Isn't the "eager nationalist" theory rather contradicted by Ipsos-Mori's status as one of the two most No-friendly pollsters throughout most of the long referendum campaign?  If anything, it looks like the opposite phenomenon may have been occurring for a prolonged period.

* He suggests that TNS-BMRB's decision to ask for both 2010 and 2011 vote may make people's recollection of how they voted more accurate, thus removing the objection to 2010 weighting.  That notion is extremely speculative, but even if it were true, it would only work if the 2011 recall question was asked first.  If people's recollections of 2010 are faulty, the problem is hardly going to be rectified by a different question that is asked afterwards.  As it turns out, the TNS datasets clearly show the 2011 question was not asked first.

* In dismissing the complaint that there is no way of telling from the TNS datasets what impact the 2010 weighting has had, he claims that "equivalent" information is given - and then points to numbers showing that the SNP and Labour were downweighted by roughly the same amount from their vote shares among the raw unweighted sample.  Frankly, that's a complete red herring.  Throughout the referendum, TNS were more often than not producing Labour-heavy raw samples.  It's quite possible that normal demographic weightings (and indeed 2011 vote recall weighting) boosted the SNP in this poll, but that any such boost was then completely offset by distorted 2010 vote recall weighting.  There's no way of knowing that for sure, though, because as far as I can see the information simply isn't available in the datasets - neither in literal nor in "equivalent" form.  (The sketch below shows how this kind of recall weighting works in the first place.)
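
To make the mechanics concrete, here is a minimal sketch in Python of how past-vote-recall weighting operates, and why over-recall of an SNP vote translates directly into downweighting of current SNP support. TNS don't publish the unweighted recall figures, so the "recalled" shares below are invented; the "actual" shares are only roughly the 2010 Scottish result.

```python
# Illustrative only: these figures are NOT from the TNS datasets.
# "actual_2010" is roughly the 2010 Scottish result; "recalled_2010" is an
# invented raw-sample recall pattern (SNP over-recalled, Lib Dems under-recalled).

actual_2010   = {"SNP": 0.20, "Lab": 0.42, "Con": 0.17, "LD": 0.19, "Oth": 0.02}
recalled_2010 = {"SNP": 0.28, "Lab": 0.40, "Con": 0.16, "LD": 0.13, "Oth": 0.03}

# Each respondent is weighted by (target share / recalled share) for the party
# they say they voted for, so the weighted sample matches the real result.
weights = {party: actual_2010[party] / recalled_2010[party] for party in actual_2010}

for party, w in sorted(weights.items(), key=lambda kv: kv[1]):
    print(f"{party}: weight {w:.2f}")

# SNP recallers end up with a weight of about 0.71 - each one counts as less
# than three-quarters of a respondent.  If much of the "excess" recall is really
# 2011 switchers misremembering their 2010 vote, that downweighting also strips
# out genuine current SNP support, which is the objection made in the post.
```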

15 comments:

  1. I'm certainly no expert in polls or reading polling data, but if they do release their data, for recalled voting for 2010 as well as 2011, then can't we simply compare this to the actual results in both of these elections?

    For instance, if 85% say they voted in 2010 and, of those, 50% say they voted SNP, while the actual result was SNP 25% on a 53% turnout, then we know that people's recollection of whether they voted is suspect, and that some of those who claim they voted SNP are mistaken (the sketch at the end of this comment works the arithmetic through).

    This would then give us a true picture of the shy vote, or of how shy people were when asked by someone who was female/male, young/old, English/Scottish/other, etc.
    We would also need to record whether women told women the truth, whether Scots lied to English interviewers, and so on.

    I'm sure that eventually we would see patterns emerging that would explain why the figures released by pollsters differ so widely from each other.
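
    Here is that hypothetical arithmetic worked through in Python (the figures are the illustrative ones above, not real polling data):

```python
# Hypothetical figures from the example above - not real polling data.
claimed_turnout       = 0.85   # share of respondents who say they voted in 2010
claimed_snp_of_voters = 0.50   # share of those claimed voters who say they voted SNP
actual_turnout        = 0.53
actual_snp_of_voters  = 0.25

claimed_snp_of_sample    = claimed_turnout * claimed_snp_of_voters   # 0.425
actual_snp_of_electorate = actual_turnout * actual_snp_of_voters     # 0.1325

print(f"Claimed SNP vote: {claimed_snp_of_sample:.1%} of the sample")
print(f"Actual SNP vote:  {actual_snp_of_electorate:.1%} of the electorate")
print(f"Over-recall factor: {claimed_snp_of_sample / actual_snp_of_electorate:.1f}x")

# ~3.2x - either the sample is unrepresentative, or recall is faulty, or (most
# likely) some mixture of both.  Cross-tabbing that discrepancy by interviewer
# and respondent characteristics is what the comment above is proposing.
```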

    ReplyDelete
    Replies
    1. "I'm certainly no expert in polls or reading polling data, but if they do release their data, for recalled voting for 2010 as well as 2011, then can't we simply compare this to the actual results in both of these elections?"

      Unless I'm missing something (and I've checked as thoroughly as I can) only the weighted numbers are available on the vote recall question, which tells us nothing at all, because unsurprisingly they're in line with the election results. It's the unweighted numbers we need.

      Delete
  2. "He claims that we "know" SNP supporters are more eager to respond to polls than others"

    The TNS poll itself disproves that theory, as it shows that intended turnout is no higher than in 2010. If SNP supporters were more eager to respond to a poll, then they would definitely turn out, so, QED, his theory is disproven.

    ReplyDelete
  3. Aug, Sept, Oct TNS unweighted base recalled vote average compared to 2011 result:

    40(-5%) SNP
    38(+6%) Lab
    13(-1%) Con
    6(-2%) Lib
    4(+3%) Other

    Can't see what was the case for the latest poll as they didn't cross-tab it. This pattern was totally consistent during the indyref; TNS sometimes had Labour winning 2011 in their initial random sample - always a much closer result than actually happened, which shouldn't occur unless they have a problem reaching SNP voters.

    The data suggest Labour/No voters are keener to answer the door than Yes/SNP voters, or at least more likely to be at home.

    ReplyDelete
  4. "He suggests that TNS-BMRB's decision to ask for both 2010 and 2011 vote may make people's recollection of how they voted more accurate, thus removing the objection to 2010 weighting"

    Survation 2011 unweighted base recalled vote January poll:
    17(+3)% Con
    32(nc)% Lab
    9(+1)% Lib
    42(-3)% SNP

    Not far away huh?

    Same poll, asking for 2010 recall:
    19(+2)% Con
    36(-6)% Lab
    12(-7)% Lib
    32(+12)% SNP

    Lots of examples of this. I recall one poll even mentioned DC becoming PM in the 2010 question to make it clear, and the same thing still happened.

    ReplyDelete
  5. This morning TNS released a new Scottish poll. Topline Westminster voting intention figures are CON 16%, LAB 31%, LDEM 4%, SNP 41%, GRN 6%, UKIP 2% (tabs here). Under normal circumstances these would obviously be good figures for the SNP, but these are not normal circumstances and it’s a much smaller SNP lead than that suggested in recent polls by YouGov, Survation and Ipsos MORI.
    Unlike their GB polls, which are now done online, TNS's Scottish polls are still done using face-to-face interviews. This means the fieldwork tends to take significantly longer, and the polls are then often not reported until a week or so later. The fieldwork for this poll was conducted between the 14th January and the 2nd February. This means the Survation and MORI polls from last month, which showed 20 point and 28 point SNP leads, had fieldwork done at the same time as the start of this poll. The YouGov poll last week which had a 21 point SNP lead had fieldwork done at the same time the fieldwork for this poll was finishing (so is mostly significantly newer than this one!).

    What this means is that much of the reporting and headlines on this poll are just rubbish - the poll does NOT show the SNP lead falling. It shows a smaller SNP lead. This may well be for methodological reasons, or perhaps a bit of random sample variation, but given the respective timing of the fieldwork it cannot be that public opinion has changed since the previous poll showing a 21 point lead, as this poll was mostly conducted before that one.

    It's a thoroughly bad idea to try and draw trends between polls conducted using very different methods anyway, but certainly check when the fieldwork was done and get them in the right chronological order.

    ReplyDelete
  6. In other news... Australia to compete in Eurovision!?!

    http://www.bbc.co.uk/news/entertainment-arts-31380742

    ReplyDelete
  7. YouGov sub-sample: SNP 43, Lab 30, Con 16.

    In line with recent YouGov readings. Lab slightly higher and Con a bit lower, but nothing to write home about. Particularly when you consider that the overall GB result is quite Lab (and "big two parties") friendly (Lab 35, Con 33). It's a continuation of the recent trend for the small parties, except the SNP, to be squeezed by the big two.

    ReplyDelete
    It was only a short time ago that the SNP had very little expectation of increased Westminster seats; the assumption was that people would go back to the usual Holyrood/Westminster pattern of voting.
    There is no comfort for Labour in these results and they still work on the basis that some new tactic will bring Scotland to its senses!
    Ownership syndrome remains, though depressed Wales has an even more advanced form.
    Interesting listening to Nicola Sturgeon speaking in London. She is simply putting forward her ideas, and this includes, in a natural way, working with other parties across the UK, while Murphy adopted the tactic of pretending to be more Scottish!

    ReplyDelete
  9. Putting aside poll accuracy, I wish to consider the EFFECT polls have on voters. This alleged narrowing between the SNP and Labour is, I feel, very much 'a good thing' for the SNP, as it dispels the dangerous notion among SNP voters that the SNP winning over 40 seats is a foregone conclusion. Each and every vote will count and we must work tirelessly to make our dreams a reality. Also, if Tories are being urged to vote tactically for Labour to keep out the progressive, left-of-centre SNP, then each and every Green, SSP etc. voter MUST vote tactically too and vote SNP; otherwise, under FPTP, their vote for the Greens etc. plays right into the right-wing red/blue Tory hands.

    ReplyDelete
  10. James, apologies for not being able to reply till now. I don't actually think there is much difference between us on our opinions of the TNS poll. Like you I think that "The explanation must lie elsewhere - perhaps in the face-to-face data collection method, or perhaps in the way TNS pose the questions" and indeed I said as much on UKPR this morning:

    The main problem that TNS have is this face-to-face methodology. This means that all the participants have to be prepared, and bothered, to invite someone into their home (and indeed have a landline phone to answer the request, as I think they pre-phone). This not only intensifies the usual polling problem that the apathetic are under-represented, it means the houseproud are over-represented.

    This doesn't matter for their normal commercial work - the houseproud are the people you want to sell stuff to (and who have the money to buy them). But it may affect political polling in a way that can't be picked up by socio-economic indicators.

    Furthermore people may be unwilling to give an answer that they see as socially unacceptable to a live interviewer (I believe they actually put their responses into a computer and the interviewer can't see them, but people may still feel uncomfortable). You can see this in the fact that the last TNS poll before the referendum found 23% undecided (after eliminating those who wouldn't vote). This was far more than other pollsters found and simply not plausible. This particular TNS poll has Undecided 26%, Refused 6%, Would not vote 10%. Compare this to MORI's equivalent (phone) figures of 18%, 2% and 3% - online polls are even lower, of course.

    These things may also interact. It's interesting that the 'working class' C2DE segment shows the SNP leading Labour by only 40% to 37%. The YouGov poll which covered the same period had 44% to 32%. Other political choices may also be seen as more or less acceptable - hence the high Green score and low UKIP one (you also get those in MORI's phone poll).


    I'd also add that there may well be a problem with using 2010 vote recall - I've pointed to it myself in the past. In Scotland normally too many people say they voted SNP and too few that they voted Lib Dem.

    But there are equally problems with using 2011 recall. At this stage four-year-old memories may not be much better than five-year-old ones, but more to the point 24% more people voted in 2010 than in 2011. There's some polling evidence that these extra voters were not of the same political make-up as those who did vote for Holyrood as well. And that's what you'd expect: if you're the sort of person who sees the SP as a jumped-up local council, you're less likely to vote in its elections.

    There's also the problem that it's the constituency vote that is asked for, but people may not be remembering that. Instead they say who they voted for on the regional list, which many regard as their 'real' vote. You can see this in other polls where maybe 5% of people say they voted Green - literally impossible as there were no Green constituency candidates.

    So there's no perfect way of using past vote to weight polls (this is why MORI won't even try) and using a combination of 2010 and 2011 seems worth trying to me - though we don't have details of exactly how they do it. Other pollsters do similar (Panelbase use Euro recall which is most recent but covers even fewer people). There is no standard model that we know works and pollsters should be trying different things. The way that the opinion polls overestimated Yes just before the referendum and overestimated the SNP before the Euros suggests that just using 2011 may not be the best option.
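
    We don't know exactly how TNS combine the two recall questions, so the following is only a sketch of one standard way of weighting a sample to two sets of past-vote targets at once - "raking", or iterative proportional fitting. Every record, share and variable name in it is invented for illustration.

```python
from collections import defaultdict

# Sketch of raking (iterative proportional fitting) to two past-vote targets.
# All records and target shares below are invented - this is NOT TNS's method,
# just one standard way a 2010 + 2011 recall weighting could be combined.

# Toy respondent records: recalled 2010 and 2011 votes ("DNV" = did not vote).
sample = [
    {"v2010": "Lab", "v2011": "SNP"},
    {"v2010": "SNP", "v2011": "SNP"},
    {"v2010": "Lab", "v2011": "Lab"},
    {"v2010": "LD",  "v2011": "SNP"},
    {"v2010": "DNV", "v2011": "DNV"},
    {"v2010": "Con", "v2011": "Con"},
    {"v2010": "LD",  "v2011": "LD"},
]

# Marginal targets: the share of the sample that *should* recall each past vote.
# In reality these come from election results and turnout; these are placeholders.
targets = {
    "v2010": {"Lab": 0.30, "SNP": 0.15, "Con": 0.12, "LD": 0.13, "DNV": 0.30},
    "v2011": {"Lab": 0.16, "SNP": 0.23, "Con": 0.07, "LD": 0.04, "DNV": 0.50},
}

n = len(sample)
weights = [1.0] * n

for _ in range(50):                       # iterate until the marginals settle
    for var, target in targets.items():   # adjust to each set of targets in turn
        current = defaultdict(float)
        for rec, w in zip(sample, weights):
            current[rec[var]] += w
        for i, rec in enumerate(sample):
            cat = rec[var]
            if current[cat] > 0:
                weights[i] *= target[cat] * n / current[cat]

print([round(w, 2) for w in weights])
# Each respondent ends up with a single weight that respects both the 2010 and
# the 2011 recall targets as closely as the sample allows.
```

    Whether TNS do anything like this, or simply weight by one recall question after the other, isn't stated in the datasets - which is part of the complaint in the post above.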

    ReplyDelete
    Replies
    1. Roger : Of course recall of any vote becomes less reliable as the years go by, but there's a very special problem with 2010, simply because so many people who voted Labour or Lib Dem in 2010 switched to the SNP just twelve months later, and in many cases it's voting SNP that they remember. That's why 2010 recall is especially unreliable and shouldn't be used at all. If 2011 weighting isn't sufficient, it would surely be far better to add on recalled referendum vote weighting, rather than 2010.

      Delete
  11. To deal with some of your more specific points; the thing is that the Ashcroft polls were pretty much in line with expectation from other polling. They actually show a bigger swing than the more general polling, but that is probably due to this batch being more in Yes areas. Of course it's possible that things were even worse for Labour (and I wouldn't be surprised to see Glasgow NE fall with the others) but they certainly indicate that the Ashcroft policy of using 'raw' 2010 targets doesn't produce a vastly different result from pollsters who use other weights.

    There's actually a lot of evidence that Yes voters are over-represented in most polling (I assume you'd accept that there is some correlation between voting Yes and SNP), after taking all other factors into account. You can see this in those pollsters who ask how people voted in the referendum - some indeed also weight by that because of the problem otherwise. I've discussed here in the past how YouGov's figures need to be adjusted to take this into account.

    This isn't surprising. Yes/SNP voters seem to be more politically engaged on average, so you'd expect them to be more likely than average to respond to surveys. Nationalists are indeed more 'eager'. It may also mean that they are more likely to vote and that in turn could mean that the discrepancy matters less, but there's no doubt it's there.

    The poor showing for Yes in MORI polls through most of the campaign may paradoxically confirm this. Telephone polls may exaggerate the perceived lead that one side has. When No was thought to be ahead, Yes voters may have been 'shyer', or more likely, just not have felt like responding to a survey. So MORI showed the biggest No leads. Then, when the result was felt 'too close to call' in the media coverage, the No lead collapsed in MORI's last two polls to be as tight as or tighter than the other pollsters' as Yes voters became eager.

    So MORI's two polls since the referendum may reflect a situation where No voters are shyer. Not only are the SNP very high but the Conservatives are particularly low. The answers that it has felt 'socially acceptable' to give may have altered or at least widened.

    When I made the passing remark about how asking both 2010 and 2011 vote might increase the accuracy of both, I hadn't found them in the tables and realised the question order. However it seems plausible enough as a general principle, and it's even possible that the face-to-face format might encourage people to revise previously given answers. As I say above, the real point is that there are problems with all the methods of using recalled votes - though possibly not as much as not using any of them.

    As to the TNS past voting tables, while they don't show what the weighted and unweighted numbers for past voting are, we can see the weighting on the Westminster and Holyrood voting intentions, where both sets of figures are shown. The rows and columns are swapped round, if you like. So it is roughly equivalent, and large alterations caused by past vote weighting in one will show up in the other. That doesn't seem to be happening this time.

    It's a mistake to go on past performance with these sorts of polls in any case, because they are probably even more prone to getting 'socially acceptable' answers than the phone polls are. And what people felt comfortable saying last April might not be the same as now. How people actually intend to vote is almost certainly hidden away in those implausibly high numbers for those claiming that they don't know or won't say how they will vote.

    ReplyDelete
  12. Ah the perils of writing a really long comment over a really long and interrupted time. My UKPR comment was actually yesterday morning if anyone can't find it.

    ReplyDelete