On the last thread, Chalks pointed to the doubts cast by John Curtice on the Yes campaign's claim that its pursuit of the votes of the "missing million" means it is picking up support which is being missed by the conventional opinion polls. Curtice bases his scepticism on an analysis of the voting intentions of poll respondents who say they didn't vote in the 2011 election, and who he claims are actually more likely than others to be No voters. Straight away, that sets a number of alarm bells ringing in my head, so here is your cut-out-and-keep guide to why you should at least maintain a healthy scepticism about Curtice's scepticism...
Four of the six active pollsters in this campaign conduct their fieldwork among volunteer online panels. One point I made in the interview for the Phantom Power film a few weeks ago (and which didn't make the final cut) is that to the extent that online pollsters have proved their credibility in recent years, you could easily make the old joke : "I know it works in practice, but does it work in theory?" Polling among volunteer panels shouldn't work in theory, so the firms in question don't even worry about that - but they do go to great lengths to make sure their results are as accurate as possible in practice by 'working backwards'. If a general election has just happened and a pollster's findings weren't quite right, they ask themselves how they could, for example, tweak their weighting procedures so that their raw data would fit the actual result. If they also have evidence that the tweaked methodology would have produced reasonably accurate results in previous elections, then bingo, they've got a refined methodology that works in practice and will probably work in future elections as well.
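As a rough illustration of what 'working backwards' means in practice, here is a minimal sketch of the calibration idea: given the shares a panel actually produced and the shares the election actually delivered, you can derive the per-party weights that would have made the raw data fit. All the figures and party names below are hypothetical, purely for illustration - real pollsters' weighting schemes are considerably more elaborate than this.

```python
# A minimal sketch of 'working backwards': derive per-party weights that
# would have made the raw panel data match the actual election result.
# All numbers are hypothetical, for illustration only.

def calibration_weights(raw_shares, actual_shares):
    """For each party, weight = actual result share / raw panel share."""
    return {party: actual_shares[party] / raw_shares[party]
            for party in raw_shares}

# Hypothetical figures: this panel over-represented Party A and
# under-represented Party B relative to the real result.
raw = {"A": 0.48, "B": 0.32, "C": 0.20}
actual = {"A": 0.45, "B": 0.36, "C": 0.19}

weights = calibration_weights(raw, actual)
for party, w in sorted(weights.items()):
    print(f"Party {party}: weight {w:.3f}")
```

The point of testing the tweaked weights against previous elections as well is to guard against over-fitting to a single result.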
But that's where they may be coming slightly unstuck with this referendum, because they don't have any baseline to work from to test that they are getting their methodology right - there have been no independence referendums before, and we know this contest is going to be radically different from normal elections, partly because of higher turnout and higher levels of voter registration. That doesn't necessarily mean any given pollster is bound to be getting it wrong, but it does increase the degree of uncertainty (hence the unusual amount of variation between different firms' findings), and the biggest area of uncertainty for any online pollster will be people who usually don't vote, and people who haven't previously been registered to vote. There simply aren't enough of those people "on the books" of volunteer online panels - in normal circumstances there don't need to be, because online pollsters only need to be right in practice, and in practice the 'missing million' don't count at all in normal elections.
So we should certainly doubt any analysis of the voting intentions of previous non-voters that draws too heavily on the findings of online firms. There is also room for doubt with Ipsos-Mori, who by definition are only reaching people who are willing or able to answer a landline telephone, meaning there is no way of knowing whether enough of the 'missing million' are being contacted. The one firm who in principle should be delivering the goods is TNS-BMRB, who actually go out into the real world and knock on people's doors - so they ought to be reaching people in the most deprived communities as easily as they are reaching John McTernan's butler. And, unfortunately, it's true that TNS-BMRB have tended to be one of the more No-friendly pollsters (albeit usually not quite as No-friendly as Ipsos-Mori or YouGov). But it also has to be borne in mind that their numbers go through a very unusual weighting procedure. For example, here's what happened in the TNS poll conducted in June -
Only 226 people were found who said they didn't vote in 2011, but they were upweighted to count as 320 people.
124 people were found who couldn't remember how they voted in 2011, but they were upweighted to count as 173 people.
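Working from the figures quoted above, the implied weighting factors for the two groups can be computed directly - both come out at roughly 1.4, i.e. each respondent in these groups counts for nearly one and a half people:

```python
# The implied weighting factors from the TNS figures quoted above.
non_voters_raw, non_voters_weighted = 226, 320
forgetters_raw, forgetters_weighted = 124, 173

non_voter_factor = non_voters_weighted / non_voters_raw    # ~1.42
forgetter_factor = forgetters_weighted / forgetters_raw    # ~1.40

print(f"2011 non-voters upweighted by a factor of {non_voter_factor:.2f}")
print(f"'Can't remember' respondents upweighted by a factor of {forgetter_factor:.2f}")
```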
As far as I can see, the logic for upweighting both groups is the same - TNS seem to think that all of these respondents are representative of non-voters from 2011 (and indeed of previously unregistered voters). Quite why people who don't recall how they voted should be automatically treated as abstainers is a bit of a mystery, and that's the first red flag we need to raise about the TNS approach, because in most cases it's the upweighting of the "forgetters" that is actually helping No the most. If we assume for the sake of argument that many of these people did in fact vote in 2011 but genuinely can't remember how, then that significantly changes the picture that the TNS data is providing about the 'missing million'.
But the broader question is why such sharp upweighting needs to happen at all. I think there are two factors at play here. Firstly, there must be a lower response rate to TNS polls among people who don't usually vote (i.e. those people either won't answer their door, or will be more likely to turn the interviewer away). So that introduces at least a degree of the same uncertainty that applies to the other pollsters. The second factor is one that Professor Curtice hasn't even acknowledged, as far as I can see - it's likely that a significant minority of people are lying, and are telling TNS they voted in 2011 when they didn't, largely out of embarrassment. Those people will presumably give as their "vote recall" the party they would have voted for if they'd actually made it to the polling station, or perhaps the party that in retrospect they'd like to think they would have voted for. So there will actually be members of the 'missing million' who are telling TNS they voted SNP in 2011, and who are being weighted accordingly.
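The misreporting mechanism can be made concrete with a toy calculation. The numbers below are entirely hypothetical assumptions, not anything from the TNS data - the point is simply that any true non-voters who claim a 2011 vote end up weighted as voters, while the "non-voter" cell that gets sharply upweighted contains only the minority who admit to abstaining:

```python
# Hypothetical illustration of the misreporting problem: suppose some true
# 2011 non-voters tell the interviewer they voted, out of embarrassment.
# They are then weighted as 2011 voters, and the upweighted "non-voter"
# cell contains only those who admit to abstaining.

true_non_voters = 100   # hypothetical number of genuine 2011 abstainers sampled
admit_rate = 0.6        # hypothetical fraction who admit they didn't vote

admitted = true_non_voters * admit_rate     # weighted as non-voters
misreported = true_non_voters - admitted    # hidden among the 'voters'

print(f"{admitted:.0f} respondents are treated as 2011 non-voters")
print(f"{misreported:.0f} genuine non-voters are weighted as 2011 voters")
```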
None of this is to say that a high turnout and a high level of voter registration are bound to favour Yes - but I do think Curtice's specific objections to that notion are not based on particularly solid ground.
* * *
A small correction to yesterday's post : Ivor Knox of Panelbase sent me an email earlier today to clarify that all of his firm's polls for the Sunday Times have been commissioned by the Scottish edition of the paper, whereas the upcoming YouGov poll has been commissioned by the UK edition. So Panelbase haven't been sidelined, although Mr Knox stressed that he couldn't say whether they'd be conducting any further referendum polls for the Sunday Times.
That still leaves the mystery of who commissioned the Panelbase poll that has been in the field this week - perhaps we'll find out over the next couple of days.
* * *
This may not be an entirely original observation coming from me, but Political Betting's editor Mike Smithson hasn't exactly been covering himself in glory of late. I've just spotted this tweet, which refers to a comment from Ivor Knox about protecting the client's right to confidentiality -
"My reading of @PanelbaseMD Tweet is that he has new IndyRef poll which client doesn't want to publish #BadforYES?"
If Mr Smithson had been paying attention, he'd know it was established beyond reasonable doubt several days ago that there was an unpublished Panelbase poll conducted last week, probably for the Yes campaign. That may mean that Yes failed to get a significant further boost in that poll, but that's pure speculation, and even if it did happen to be true, the fieldwork ended three days earlier than in the breakthrough YouGov poll. There has also been a second Panelbase poll in the field this week, but I know of no evidence that it has been withheld - indeed it may not even be completed yet. I first heard about it on Wednesday.
UPDATE : There's confirmation from Teri in the comments section below that the fieldwork for this week's Panelbase poll didn't conclude until today (Friday), so Smithson's claim about a "new" poll being withheld is complete and utter nonsense.
And a special message for Neil Edward Lovatt (for I know you're reading this) : please stop using this blog as a bogus "source" for your eccentric rumour-mongering on Twitter. You're a disgrace to "risk assessors" everywhere.