I've been having an interesting exchange with Bill Walker (hopefully not THE Bill Walker!) at the bottom of the thread about Panelbase's 'stealth preamble', and as most people probably won't see it there I thought I'd repost it.
Bill Walker : The main explanation that's been put forward for the SNP's "Yes 1 point ahead" poll is that it contained some leading questions immediately beforehand.
Specifically, it asked whether "Scotland could be a successful independent country" (responses to this are always very positive).
Then it asked whether respondents trusted the Scottish government more than Westminster (most people say they trust the Scottish government more when this question is asked).
It was only after those two questions that they asked whether people supported independence. It's well established that the questions you ask immediately beforehand in a poll can lead people in a certain direction - given how out of sync that poll is with all the others, that would appear to be a reasonable explanation for it.
None of which is to say that all of the other polls are perfect, but this is why it's useful to use averages from different polls rather than relying on any one in particular.
Me : "It's well established that the questions you ask immediately beforehand in a poll can lead people in a certain direction"
The operative word there being 'can'. It's equally well established that the wording of a question can lead people in a certain direction. The poll showing Yes in the lead used a different and much more neutral preamble, so as things stand we simply have no way of knowing whether the question sequence or the neutrality of the wording was responsible for producing such a radically different result. It may, of course, have been a combination of both factors.
Bill Walker : Of course the wording of the question/preamble is also important, but what we're talking about here is one poll that showed the Yes vote a significant margin ahead of any other poll that's been done on the subject in the past two years. Operating from the standpoint that it did something to raise the Yes vote in comparison to the rest of the polls is pretty reasonable (we have a solid hypothesis, given that no other poll I know of asked those questions beforehand, whereas plenty of other polls have used neutral wording in the preamble).
The same thing can also be said of the polls that showed extremely large levels of support for No, which is why an average from polls gives a better picture.
Me : "Operating from the standpoint that it did something to raise the Yes vote in comparison to the rest of the polls is pretty reasonable"
Precisely. And what sticks out like a sore thumb about that poll is that we now know that it was the ONLY Panelbase poll not to use the subtly biased preamble quoted at the top of this post. Yes, it was also unusual in that it asked two questions (which were not leading, by the way) before the main referendum question. So we have two equally plausible explanations for the radically different result, and literally no way of knowing which is the most likely.
On your wider point, trying to compare that poll with non-Panelbase polls is like comparing apples with oranges.
Bill Walker : It's a standard opinion polling technique that's been used successfully in countless other contexts. The point being that every poll/polling agency has some potential flaw in its methodology and by pooling them you can eliminate most of the noise and get a more accurate picture of voters' actual opinions (e.g. using Bayesian methods similar to those of Rob Ford and other academics).
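To illustrate the pooling idea in the simplest terms, here's a minimal sketch - the pollster names and figures below are invented purely for illustration, not real results. A sample-size-weighted average pulls the headline estimate towards the better-sampled polls and smooths out individual house effects:

```python
# Minimal sketch of poll pooling: a sample-size-weighted average.
# All figures are invented for illustration, not real poll results.
polls = [
    {"pollster": "A", "yes_pct": 36.0, "sample": 1000},
    {"pollster": "B", "yes_pct": 34.0, "sample": 850},
    {"pollster": "C", "yes_pct": 44.0, "sample": 900},  # the apparent outlier
]

total_sample = sum(p["sample"] for p in polls)
pooled = sum(p["yes_pct"] * p["sample"] for p in polls) / total_sample
print(f"Pooled Yes share: {pooled:.1f}%")  # lands between the outlier and the rest
```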
So let's be clear about what we're actually implying here. This isn't about the natural uncertainty over how one particular bit of wording or question sequence can skew the results; what we're saying is that this one isolated Panelbase survey has the most neutral wording/question sequence of all the professional polls published on the subject in the past two years. That's the only reason we could have not to use an average - i.e. if all of the other polls are systematically flawed or subject to bias in some way.
Incidentally, I completely agree with RevStu that the preamble isn't particularly leading in any case. It's only marginally different from the preamble used in the SNP poll. Instead of saying "independent from the UK" it says "an independent Scotland" - that might on some subtle level have an impact (and potentially a misleading impact at that), but it's a bit far-fetched to expect that to cause the kind of radical swing we're discussing here. We'd certainly need far more evidence to go on before arriving at that conclusion and effectively writing off methods like the one I've linked to above.
Don't get me wrong, it would be nice if you were correct, but from a political science perspective I think we're a bit out on a limb here.
Me : "The point being that every poll/polling agency has some potential flaw in its methodology and by pooling them you can eliminate most of the noise and get a more accurate picture of voters' actual opinions"
That seems to me like a utopian vision of what it is possible to do with statistics. The reality is that there are countless examples around the world of the "extreme outlier" poll turning out to be the correct one, and it usually happens when most pollsters share certain methodological mistakes in common, and are reinforced in their false belief that they are getting it right by the results their competitors produce. In a UK context, the most obvious examples are 1970, when just one eve-of-election poll had the Tories ahead, and 1992, when all pollsters underestimated the Tories' true position to such an extent that even the most extreme outlier during the entire campaign failed to predict how convincing the Tory win would be. Lesson : pollsters should be going back to basics and concentrating on getting their own methodology right, rather than slipping into the kind of group-think you're suggesting, in which they assume they must be right if they're producing numbers somewhere close to the average.
You're talking as if the preamble issue is the only criticism anyone is making of the other pollsters. It isn't. For example, it was just two days ago that Scottish Skier pointed out that YouGov have a disproportionately high number of people in their sample who were born outside Scotland, and have a suspiciously low number of people who choose "Scottish" as one of their national identities. Similar criticism has been levelled at Ipsos-Mori (who incidentally also appear to use a 'stealth preamble', so we have no way of judging how neutral or leading it may be). Survation's weighting procedure in their one poll to date was so plainly wrong that even Professor Curtice didn't mince his words - they were clearly understating Yes and overstating No.
I think you (and RevStu if he takes the same view) are absolutely, fundamentally wrong in believing that this Panelbase preamble is innocuous. We have considerable evidence that the wording most likely to lead respondents towards No is "completely separate from the rest of the United Kingdom". As I've stated several times now, this preamble isn't quite as extreme as that, and you're quite correct that the word 'independent' is neutral in a way that 'separate' is not. But the words "from the rest of the United Kingdom" are present, which means that this preamble is 'on the spectrum' of bias. Those words are entirely superfluous in explanatory terms, so what you have to ask yourself is this - what are they actually doing there, and what effect are they having? Unless you have hard evidence that they are having no effect, it's naive and complacent to assume that must be the case.
"That's the only reason we could have not to use an average"
I assume from those words that this must be your first visit here, because I've been running a Poll of Polls based on three different averages of BPC pollsters for several months now - and to the best of my knowledge I'm the only person doing anything like that in such a structured way at this stage. But I pointed out from the outset that the best claim that can be made for an average is merely that it is likely to be less inaccurate than any other system that might be devised, as opposed to there being any firm basis for thinking that it is necessarily going to be particularly accurate. To go back to the 1970 example, an average of the polls would have shown a picture much further from the truth than the one outlying poll. The BBC Poll of Polls in 1992 showed a Labour lead of 1% on the eve of polling day. Actual result? Tory lead of 8%. Again, the most extreme outlying poll was closer to the truth.
If we had a spread of bias across the polls, an average might work, but as the vast majority are biased in a pro-union direction, an average does not.
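To make that point concrete, here's a small simulation - all the numbers are invented for illustration. If most pollsters share the same systematic bias, the average simply inherits that bias, while a lone unbiased 'outlier' lands closer to the truth:

```python
import random

random.seed(1)

TRUE_YES = 44.0      # hypothetical true Yes share (%)
SHARED_BIAS = -5.0   # systematic bias shared by most pollsters (points)
SAMPLE = 1000        # respondents per poll

def simulate_poll(bias):
    # One simulated poll: each respondent answers Yes with the true
    # probability shifted by the pollster's systematic bias.
    yes = sum(random.random() * 100 < TRUE_YES + bias for _ in range(SAMPLE))
    return 100 * yes / SAMPLE

biased = [simulate_poll(SHARED_BIAS) for _ in range(9)]
outlier = simulate_poll(0.0)  # the one pollster free of the shared bias

average = sum(biased + [outlier]) / 10
print(f"Average of all ten polls: {average:.1f}%  (truth: {TRUE_YES:.1f}%)")
print(f"Lone unbiased poll:       {outlier:.1f}%")
```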
First I should say I'm not the famous Bill Walker, just a regular Bill (unfortunately/fortunately).
Let's be clear, though, I'm not saying that the preamble has no effect - nobody really knows what effect it has on the results. However, you can say the exact same thing about the ordering of the questions. If anything, that seems a far more likely example of something that could skew the results, given that multiple surveys in the past have shown that those two questions prompt very large pro-Scottish responses (Scotland can be a successful independent country, and the Scottish government is more trustworthy than Westminster).
At the moment if we're implying that this outlier is more likely to be correct than every other poll on the subject, then we're also assuming that the question order has a negligible effect. In other words we're privileging one type of bias over another, given that no other polls I'm aware of have used either this preamble or this ordering of questions.
Polling methodologies are flawed and we can come up with countless examples through the years of them getting it wrong. However that in itself doesn't give any reason to think a particular outlier is accurate. If you put on a blindfold and throw 100 darts at a list of percentages then you might hit the right one, but there's no way to know which of the 100 darts is correct.
While we can offer some qualifiers about never really knowing whether an opinion poll is accurate or not, that doesn't get us to the conclusion you're trying to draw here: that this poll is correct and every other one is wrong.
A far better route, if you're looking for grounds for optimism, is to track the trends in polls using the same methodology - that way you eliminate most of the problems with bias (although you also have to be careful that the changes are beyond the margin of statistical error, which is about 3% in most polls).
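For reference, that 3% figure is just the standard margin of error at 95% confidence for a typical sample of around 1,000, evaluated at the worst case of a 50/50 split - a quick check:

```python
import math

def margin_of_error(sample, p=0.5, z=1.96):
    # 95% margin of error (in percentage points) for a proportion p
    # estimated from a simple random sample of the given size.
    return z * math.sqrt(p * (1 - p) / sample) * 100

print(f"n = 1000: about ±{margin_of_error(1000):.1f} points")  # roughly ±3.1
```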
"At the moment if we're implying that this outlier is more likely to be correct than every other poll on the subject, then we're also assuming that the question order has a negligible effect."
But I am categorically NOT implying that, and I'm struggling to understand how you ever formed the impression that I am. To reiterate yet again what I am actually saying, we HAVE NO IDEA whether the different result was caused by the neutral preamble, or by the question sequence. To repeat what I said in my very first reply to you, it may well be a combination of both factors.
"doesn't get us to the conclusion you're trying to draw here: that this poll is correct and every other one is wrong."
Again, that is NOT the conclusion I'm trying to draw, and I'm baffled as to why you think it is.
"If you put on a blindfold and throw 100 darts at a list of percentages then you might hit the right one, but there's no way to know which of the 100 darts is correct."
Which is what I was getting at earlier - that is the case that can be made for thinking that an average is likely to be less inaccurate than any other method that can be devised. But there's no reason to suppose that it will turn out to be particularly accurate, or that it will be closer to the truth than any given outlier.
Incidentally, there IS a way to form a rational view on which polls are more likely to be accurate - and that is to consider the merits of their methodologies. Which is exactly what we're trying to do.