I think it's probably fair to say that it's been a great many decades since US media coverage of an American election has referenced British politics so frequently. That's a sign of how Brexit matters internationally in a way that routine changes of government don't, but it's also a sign of Trump supporters casting around for glimmers of hope wherever they can be found. The theory is that support for Trump is vaguely similar in character to support for Brexit, and that if the latter was underestimated by the polls, there's no reason to think the former isn't being underestimated as well.
From this side of the Atlantic, there have been two reasons commonly cited for why that is probably wishful thinking. The first is that we knew in the days prior to the EU referendum that the postal votes that had already been cast were painting a different picture from the opinion polls, and that Remain had a significant deficit to overcome on polling day itself. On the whole, the opposite seems to be happening in the US at the moment, with the early voting data tending to look more promising for Clinton than for Trump. The second reason is that, supposedly, the opinion polls in the EU referendum were nowhere near as inaccurate as portrayed.
The first reason makes perfect sense to me, but I have to say I think the second one is pushing it a bit. This goes back to the meat of a rather unpleasant (and now largely deleted) argument I had with a New Statesman journalist and a few others on Twitter in August, on the topic of "can we ever trust the polls again?". Immediately after that exchange, I had been planning to write a blogpost setting out my thoughts, but I decided against it because the whole thing had become too heated. However, this may be a good moment to make some of the points I had been planning to make that night (but leaving aside the personalities involved, obviously).
* First of all, it really must be understood that the standard 3% margin of error in individual opinion polls does not provide any sort of alibi for the polling failure in June. If the methodology used across the industry is basically correct, the error on the polling average should be considerably lower than 3%. For example, if one campaign is actually on 44%, you would expect just as many polls to put that campaign one, two or three points below 44% as to put it one, two or three points above 44%. The underestimates would balance out the overestimates, and you would end up with an average that is pretty close to being bang on the money. So it's a form of sophistry to look at the string of late polls that overestimated the Remain vote, and claim that the ones that fell within the margin of error (or came close to doing so) were all technically "accurate".
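That averaging-out point can be illustrated with a rough simulation. (The sample size of 1,000 per poll and the run of 20 polls are illustrative assumptions, not figures from any real polling series.)

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

TRUE_SUPPORT = 0.44   # assumed "real" support for one campaign
SAMPLE_SIZE = 1000    # roughly the sample behind a 3% margin of error
NUM_POLLS = 20        # an illustrative string of late-campaign polls

# Each simulated poll samples 1,000 voters at random and reports
# the percentage backing the campaign.
polls = []
for _ in range(NUM_POLLS):
    backers = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    polls.append(100 * backers / SAMPLE_SIZE)

average = sum(polls) / len(polls)
spread = max(polls) - min(polls)

print(f"Individual polls range over {spread:.1f} points")
print(f"Polling average: {average:.1f} (true value 44.0)")
```

Because pure sampling error is unbiased, individual polls scatter a couple of points either side of 44%, but the average of the batch lands much closer to the true figure. That's precisely why a string of late polls all erring in the same direction points to a shared methodological problem rather than bad luck.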
Regular readers of this blog will remember that I had been completely open to the strong possibility of a Leave victory throughout the referendum campaign, but when the last polling numbers came in on 23rd June, I finally threw my hands up in the air, and said that if the polls were right, there was clearly going to be a Remain victory of some sort. My exact words were -
"Leave can only really win now if there's been some kind of systemic problem with the public polls - although that's scarcely unheard of."
I entirely stand by that summary, and exactly the same is true of the situation in the US right now. Donald Trump still has a chance, but that categorically isn't because he's "within the margin of error". He may be within the margin of error in individual polls, but if he were really tied with Clinton, and if the polls were getting it right to within the margin of error, there ought to be as many polls putting him three or four points ahead as there are putting him three or four points behind. Self-evidently, that isn't the case. The reason he still has a chance is because it's fairly common - as our own referendum demonstrated - for polls to be misleading due to factors that are not taken into account by the standard margin of error. That 3% wiggle-room only allows for normal sampling variation, and basically assumes that the underlying methodology is otherwise going to be perfect - which is pretty optimistic in this day and age.
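That 3% figure is pure sampling arithmetic, nothing more. A rough sketch of where it comes from, using the textbook formula for a sampled percentage (the sample of around 1,000 is an illustrative assumption - roughly what the standard 3% implies - rather than the size of any specific poll):

```python
import math

def margin_of_error(sample_size, share=0.5, z=1.96):
    """95% margin of error for a sampled percentage, in points.

    Textbook formula: z * sqrt(p * (1 - p) / n). A share of 50%
    is the worst case, which is what the standard "3%" quotes.
    """
    return 100 * z * math.sqrt(share * (1 - share) / sample_size)

print(f"n=1000: +/-{margin_of_error(1000):.1f} points")
print(f"n=2000: +/-{margin_of_error(2000):.1f} points")
```

Everything beyond that - question wording, turnout modelling, unrepresentative panels - is extra error that this formula simply doesn't see.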
If Trump wins, or if Clinton wins much bigger than we expect her to, it'll be because the polls were wrong, just as they were on Brexit. Not necessarily wrong by all that much, or by a historically unprecedented amount, but certainly wrong in a way that the margin of error can't account for. (Although polling firms will doubtless attempt to make that excuse by cherry-picking individual polls.)
* It's been suggested a number of times that the EU referendum polls were much more accurate than supposed, because people tend to only look at the last batch of polls, and ignore the ones earlier in June that were more favourable for Leave. That's plainly a load of nonsense, because the reason why the later polls moved towards Remain is remarkably simple - there was almost certainly a genuine swing towards Remain as polling day approached.
The word "accurate" is a bit slippery when used in relation to opinion polls, because strictly speaking, and with the obvious exception of exit polls, all polls are snapshots of public opinion rather than predictions of election results. A poll can be an accurate snapshot even if it differs markedly from the final outcome. Nevertheless, if "accurate" is used to mean closeness to the final result, it's perfectly reasonable to say that later polls should be more "accurate" than earlier ones, because the closer you get to election day, the more people have made up their minds. Therefore, the fact that the EU polls got progressively less "accurate" towards the death of the campaign makes it worse for the polling industry, not better. It strongly implies that there was a significant in-built error all along. When Leave appeared to be slightly behind, they were actually slightly ahead. When Leave appeared to be slightly ahead, they actually had a decent cushion. And so on.
* One of the apparent saving graces for the polling industry in June was that, against all expectations, online polls proved to be somewhat more accurate than telephone polls. Nevertheless, the performance of the online polls was significantly tarnished by a Populus poll published on referendum day that was absolutely miles out from reality - it gave Remain a 55% to 45% lead. It was suggested to me that somehow that poll doesn't really count, because it was the only published Populus voting intention poll of the entire campaign, and is therefore difficult to put into proper context. I must say I can't make head nor tail of that line of argument. We know that Populus had been conducting extensive private polls throughout the campaign, meaning they'd had as much opportunity as any other firm to hone their techniques. It may well be that a 55%-45% lead was an outlier from their normal results, but it shouldn't have happened at all if their methodology had been essentially sound. (Even the occasional 'rogue poll' that statistically will happen one time in every twenty shouldn't really be out by as much as 7%.)
So, yes, that Populus poll does deserve to be treated as an online poll like any other, and the fact that it was one of the final polls of the campaign (when it should have been more accurate, not less) does detract from the notion that online polls in general performed tolerably well.
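Just how unlikely is a miss of that size from sampling variation alone? A back-of-the-envelope check (the sample size of 1,000 is an assumption - Populus's actual sample may have differed - while the 55% figure is from their poll and 48.1% was Remain's actual share on the day):

```python
import math

SAMPLE_SIZE = 1000    # assumed; typical for a national poll
POLL_SHARE = 55.0     # Populus's referendum-day Remain figure
ACTUAL_SHARE = 48.1   # Remain's share of the actual vote

# Standard error of a sampled percentage near 50%, in points.
std_error = 100 * math.sqrt(0.5 * 0.5 / SAMPLE_SIZE)

miss = POLL_SHARE - ACTUAL_SHARE
z = miss / std_error

# Two-tailed probability of a miss at least this large arising
# from pure sampling variation (normal approximation).
probability = math.erfc(z / math.sqrt(2))

print(f"A miss of {miss:.1f} points is {z:.1f} standard errors")
print(f"Chance from sampling alone: about 1 in {1 / probability:,.0f}")
```

On those assumptions, a near-7-point miss is more than four standard errors out - orders of magnitude rarer than the one-in-twenty "rogue poll" that sampling theory does allow for. Whatever went wrong with that poll, it wasn't just an unlucky sample.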
* * *
To return to the original question, I think the simplest way of putting it is this. If you want polls to be as accurate as the industry claim them to be, then you can't and shouldn't trust them, because recent history suggests you'll often (but not always) be disappointed. If, however, you just want a ball-park sense of public opinion that is more reliable than, say, Neil Lovatt's beloved betting and financial markets, then yes, polls are still a very useful tool, and the outcome in June bears that truth out. It really just depends on how demanding your own expectations are.