I'm indebted once again to Calum Findlay for pointing out something that I hadn't noticed - YouGov are repeatedly showing a higher No vote among people who voted SNP in 2011 than any other pollster. As an illustration, here are the figures from the most recent poll released by each firm...
No vote among 2011 SNP voters -
ICM : 15%
TNS-BMRB : 13%
Survation : 17%
Panelbase : 14%
YouGov : 24%
The pattern is the same even after Don't Knows are stripped out...
ICM : 16%
TNS-BMRB : 17%
Survation : 20%
Panelbase : 16%
YouGov : 26%
Figures aren't available for Ipsos-Mori, because they don't weight by 2011 vote. But it's significant that TNS-BMRB are well in line with the other pollsters on this point, because like Ipsos-Mori and YouGov, they're one of the most No-friendly firms at the moment. So whatever the reason for YouGov producing such a high No vote among SNP voters from 2011, it can't be a factor that is generic to all No-friendly pollsters. Indeed, it may be a sign that YouGov are No-friendly for a completely different reason to Ipsos-Mori and TNS - which has always seemed intuitively likely, given that YouGov are so far out of step with all of the other firms that use online fieldwork.
As Calum suggests, the most likely explanation is the unique and rather eccentric weighting procedure that YouGov employ. Like all other BPC pollsters apart from Ipsos-Mori, they weight by recalled vote from 2011, but they make one crucial exception - they separate out the people (or at least some of the people) who voted Labour in 2010 and then switched to the SNP in 2011. Although SNP voters as a whole are often upweighted from the raw data, the upweighting usually occurs almost exclusively among the group that voted Labour in 2010. In the most recent poll, that group was upweighted from 55 real respondents to 105 'virtual' respondents, meaning that the referendum voting intention of every person within the group was effectively counted almost twice. We can reasonably infer a few things from this -
1) This group are probably producing much more No-friendly figures than SNP voters at large.
2) YouGov are struggling to find enough people in this group, so if the tiny sample they do have are in any way unrepresentative of Labour-to-SNP switchers, that error will be dramatically magnified after the weighting, and the same problem will occur in each and every poll. It's quite likely that exactly the same people are being interviewed over and over again.
3) If YouGov didn't split SNP voters into two separate groups, the Yes vote would probably be higher in the headline figures, and the No vote would probably be lower.
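To see how much damage that upweighting could do, here is a rough back-of-the-envelope sketch. The 55-to-105 figures come from the poll discussed above; the overall sample size and the size of the chance skew are purely illustrative assumptions, not anything YouGov have published.

```python
# Illustrative sketch (not YouGov's actual tables): how upweighting a small
# subgroup magnifies any sampling error within it.

# The Labour-to-SNP group of 55 real respondents is upweighted to a target
# of 105 'virtual' respondents.
real_n = 55
target_n = 105
weight = target_n / real_n  # each respondent counts roughly 1.91 times
print(f"weight per respondent: {weight:.2f}")

# If the tiny subsample happens to skew by even a handful of respondents,
# that skew is carried through at the same weight. Suppose 5 extra No
# voters landed in the group by chance (an assumed figure):
chance_no_excess = 5
weighted_excess = chance_no_excess * weight
print(f"excess No voters after weighting: {weighted_excess:.1f}")

# In a weighted sample of around 1,100 (an assumed typical poll size),
# that alone would shift the headline No share by roughly:
poll_size = 1100
shift = weighted_excess / poll_size * 100
print(f"headline shift: ~{shift:.1f} points")
```

In other words, a chance wobble of five respondents in the subgroup becomes nearly ten weighted respondents, close to a full point on the headline figures - and because the same small panel is being re-interviewed, the same wobble would recur poll after poll rather than washing out.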
This is all guesswork, of course, because for some reason YouGov only ever reveal the voting intentions of SNP voters as a whole, after weighting has been applied - there's never a breakdown for the two distinct groups they use for weighting. It's very hard to understand why they would keep that information a secret, unless it shows such an improbable disparity between the two groups that eyebrows would be raised about the wisdom of the methodology.
To be fair, it's not completely impossible that there may be method in YouGov's madness - they were more accurate than ICM and Survation in the European elections, after all. But if, for example, the purpose of this weighting procedure is to correct for a Yes-friendly bias that is inherent in volunteer online polling panels, it's very hard to understand why a non-online pollster like TNS would be producing a No vote among 2011 SNP voters that is broadly in line with the other online pollsters.
In any case, what is so special about Labour-to-SNP switchers? There were two other important groups of switchers in 2011, namely Lib Dem-to-SNP, and Lib Dem-to-Labour (the size of the latter group was masked by the direct Labour-to-SNP swing). Surely if there's wisdom in separately weighting one of those groups, there must be wisdom in separately weighting the other two groups as well? Why the inconsistency?
Given Peter Kellner's well-known agenda on the subject of independence, it's sometimes very hard to escape the conclusion that he starts by working out what sort of headline numbers would "feel right" to him, and then works backwards to devise a methodology that will generate those numbers.