Many of you will recall that the famous YouGov poll on the penultimate weekend was not in fact the first time in the long referendum campaign that Yes had been in the lead. The first time was a whole year earlier, when a Panelbase poll commissioned by the SNP put Yes ahead by a wafer-thin 1% margin. But that poll was immediately rubbished by John Curtice, who claimed it had no credibility because of an unusual question sequence - the voting intention question had been asked third rather than first, and had been immediately preceded by a question that might be construed as leading. He thereafter invariably referred to that poll as "a much-criticised poll from Panelbase", which was slightly amusing given that he was the one who had done the vast bulk of the much-criticising. It effectively amounted to "John Curtice says the poll is bad because John Curtice says the poll is bad because..." and so on into infinity.
Bearing in mind that he made such a song and dance about the unreliability of that referendum poll, it's a tad troubling that Professor Curtice didn't bother to flag up that this weekend's Panelbase poll (showing a cut in the SNP's lead to "only" 10%) has an almost identical flaw. We only found out about that yesterday when the datasets were published, but Curtice must presumably have known on Saturday or Sunday when he wrote his analysis. Once again, the voting intention question was asked third, and was immediately preceded by a leading question - but this time one that was intended to cast independence (if not the SNP specifically) in a negative light. The wording of the question points out that the oil price has fallen, a fact that some respondents will not have known about, or will only have been dimly aware of, and then presents this development as something that might affect the case for independence. The reaction that people are "supposed" to have is obvious. In response to the voting intention question that was asked immediately afterwards, it's noticeable that considerably fewer Yes voters from September said they would vote for the SNP than was the case in the Survation poll. That in itself could explain much of the big disparity between the two polls.
Can we know for sure that the result of the poll was affected by the question sequence? Of course not. But we didn't know that was true of the "much-criticised" referendum poll either - it was simply assumed to be the case because the result was so far out of line with all the other available information. That's exactly the position we're in again now. Until and unless Panelbase replicate the lower SNP lead in a poll with a more conventional methodology, I'll be inclined to regard their weekend poll as somewhat suspect.
We've had three new Scottish subsamples from GB-wide polls over the last 24 hours:

Ashcroft: SNP 58%, Labour 24%, Conservatives 8%, Greens 4%, Liberal Democrats 4%, UKIP 1%

YouGov: SNP 40%, Labour 33%, Conservatives 17%, Liberal Democrats 5%, Greens 4%, UKIP 1%

Populus: SNP 32%, Labour 28%, Conservatives 25%, Liberal Democrats 6%, UKIP 5%, Greens 3%
Populus are consistently the most pessimistic pollster for the SNP (with the possible exception of TNS-BMRB, who report much less frequently), so this result is fairly typical by their standards. The Ashcroft result is of course particularly good for the SNP, while the YouGov result shows a lower SNP lead than usual - because the Labour vote is untypically high rather than the SNP vote being untypically low.
Enthusiasts for subsample cherry-picking, such as Mike Smithson and the new batch of trolls that we've welcomed to this blog recently, will doubtless be beside themselves with excitement to learn that a second successive YouGov subsample has shown a narrower gap. But those results were immediately preceded by a batch of subsamples from the same firm showing large SNP leads. So while it'll certainly be worth keeping an eye on tomorrow morning's result, the balance of probability is very much that we're merely looking at normal sampling variation.
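To see why a single subsample proves so little, it's worth remembering how wide the margin of error is on a small sample. The sketch below is purely illustrative - the subsample size of 150 respondents and the 40% SNP share are assumed numbers, not figures from any of the polls above - but it shows the standard 95% margin-of-error calculation for a proportion at roughly subsample scale:

```python
import math

# Illustrative numbers only (not taken from any specific poll):
n = 150   # assumed size of a Scottish subsample from a GB-wide poll
p = 0.40  # assumed SNP vote share

# Standard 95% margin of error for a simple random sample proportion:
# 1.96 standard errors either side of the estimate.
moe = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"95% margin of error: +/-{moe * 100:.1f} points")
# → prints "95% margin of error: +/-7.8 points"
```

A swing of several points between successive subsamples is therefore entirely consistent with no real change in opinion at all - which is exactly why individual subsamples should be read as noise around a trend, not as results in their own right.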