
Quick Polling Round-Up

There’s been a bevy of polls released in Virginia in the past several weeks, but before we dive in, let’s review two critical things to check whenever a poll is released:

Methodology

For brevity’s sake, we’ll take a quick look at three common polling methodologies:

– Live Caller: These use live callers, at call centers, to call tens of thousands of people to get a few hundred results. Most live-caller polls also call cell phones, which is more difficult because they can’t be auto-dialed and have a lower response rate. Check the released methodology on any poll to see if they called cell phones. The highest-regarded polls use this methodology, and are referred to as “gold standard” polls.

– IVR or Robopoll: IVR stands for “Interactive Voice Response.” This is push-button polling, where a recorded voice speaks and you press buttons to respond (some fancier ones use voice recognition as well). Because it’s all automated, it cannot call cell phones (which, as noted above, can’t be auto-dialed), so some IVR outfits supplement their robocalls with an online methodology, or even a live-caller, cell-only methodology. There’s a wide range of opinions on the validity and accuracy of IVR polling. Personally, I’m not a fan. As Nate Silver described it in his famous take-down of PPP (a partisan Dem robo-pollster), IVR polls can be used for forecasting (which is what most people want them for anyways), but they’re hardly real surveys.

Because IVR polls are so cheap to run, a lot of fly-by-night outfits use them. As a result, there are plenty of disreputable shops out there that use a quick robopoll to generate headline-grabbing results, usually just to cater to a partisan audience.

– Online: As phone response rates continue to dwindle, there’s a clear search for the next “gold standard” methodology, and a lot of people believe it’s online. Online polling has come a long way since the Zogby debacle of 2006, but it’s still not at the point where it can replace live-caller surveys. There’s also the question of whether online surveys are “scientific.” By definition, a scientific survey has to give every person a theoretical opportunity to participate. Phone calls do that, since nearly everybody has a phone (even if they decline to participate). Online surveys, however, are only completed by those who opt in to the survey panel. They don’t reach everyone with the Internet the same way a live-caller survey reaches everyone with a phone. There are also sampling effects to consider (whether there is an inherent difference between those likely to sign up to give their opinions online and those who are not) and mode effects (whether people answer a question differently when it’s read to them versus when they self-administer it by reading on a screen).

Public vs. Internal

An internal poll is any survey commissioned by a campaign (or a campaign group, like a PAC or Congressional Committee) that was then released by the campaign (or group). In some cases, the released (or “leaked”) internal is a genuine internal strategic survey, used for message-testing or positioning or to stay on top of developing events (like a hurricane or court pick), and the result is good enough that the team decides to release the numbers. In many other cases, the released poll was conducted for the purpose of being released, either to show a race is more competitive than thought, or to respond to the other campaign’s released internal poll, or to raise money off it (some incumbents will even release a poll showing them tied or trailing, to spur fundraising).

You shouldn’t automatically discount every released internal poll, but there are a few things to check to decide how much attention to pay it:

(1) Methodology

See above for the in-depth breakdown, but at the very least the poll release should include the methodology. If they don’t disclose how the poll was conducted, how many interviews were completed, the fielding dates, whether cell phones were included, and which firm conducted it, then it’s not worth paying attention to.

(2) Polling firm

The best polling firms are partisan and work for campaigns. There are great firms on both sides; ask some of your campaign friends, especially those in the polling industry, for thoughts on who the most credible ones are. If I see a poll conducted by a firm I trust, I’m more likely to pay it mind.

(3) Spin/”Analysis”

Check how the results were written up, whether in the release/email/social post or in the full polling memo (which they’ll often release). Numbers speak for themselves. The more spin, the less trustworthy the results.

(4) What other polls say

In some cases, internal polling is the only look we have at a race. In that case, even if you discount the poll (for reasons other than the ones above), you more or less have to assume it’s in the ballpark. Sometimes there’s public polling or other released internals, which can be used to corroborate or discredit the released internal.

Media polls are conducted either by a news organization or by another public-facing institution, most frequently a college or university. This cycle, two organizations stand out more than most: the NYTimes (partnering with Siena College) and Monmouth University.

Monmouth has been releasing polling data on competitive districts about once a week since early summer, providing public data that helps observers rethink which races will be competitive (and which won’t) this November.

The most remarkable development, however, is the NYTimes polling division, under Nate Cohn. They are undertaking an unprecedented experiment of calling dozens and dozens of districts and posting the results live, so you can actually see response rates and where the votes are coming in, geographically. I encourage you to check out their experiment overview here [1], and bookmark the site.

That said, not every media or college poll is as good as these two. Some have great track records. Some have house effects that result in skewed results. Some are just plain trash. FiveThirtyEight does a pretty good job of ranking pollsters [2] (particularly publicly released polls), so it’s a good gut-check at the least.

With that introduction, let’s take a look at recently released polls in Virginia:

VA-07 (Brat vs Spanberger)

Observers of the 7th District have been blessed to have two “gold standard” live caller polls released to the public, showing largely the same result.

NYTimes/Siena [3]

Live-Caller, n=501
Sept. 9 – 12

Dave Brat (R) 47%
Abigail Spanberger (D) 43%

Monmouth University [4]

Live-Caller, n=400
Sept. 15 – 24

Dave Brat (R) 47%
Abigail Spanberger (D) 47%

Both polls show the race tied. End of sentence. There is no lead; the race is tied. Yes, the lead is within the margin of error, but that’s not really how margins of error work anyways. To read the results properly, apply the margin of error to each data point (a quick sketch after the next paragraph shows the arithmetic). In the case of the Siena poll, with a margin of error of 4.5%, Dave Brat could have anywhere between 42.5% and 51.5% of the vote, while Spanberger could have anywhere between 38.5% and 47.5% of the vote. Polls are not intended to be read as more precise than this range, despite modern political observers insisting on doing so. I’m going to emphasize this part here:

Polls. Are. Not. Predictions. They are a snapshot in time, showing a range of possibilities. They can give you an idea of how November will play out, but they’re not designed with the intent to predict the final result, so you shouldn’t use them for that purpose. The race is tied, which means prognosticators who give the race a “Toss-up” rating are the most prudent.
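To make that margin-of-error reading concrete, here is a minimal back-of-the-envelope sketch. It assumes a 95% confidence level and simple random sampling; the pollster’s reported 4.5% may also fold in weighting and design effects, so the simple formula lands slightly lower.

```python
import math

def moe_95(n, p=0.5):
    """Approximate 95% margin of error for a simple random sample of size n.
    Uses p=0.5, the most conservative assumption, which is how topline
    margins of error are usually reported."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# NYTimes/Siena: n=501, Brat 47%, Spanberger 43%
moe = moe_95(501)  # ~0.044, close to the reported 4.5%
for name, share in [("Brat", 0.47), ("Spanberger", 0.43)]:
    low, high = share - moe, share + moe
    print(f"{name}: {share:.0%} (range {low:.1%} to {high:.1%})")

# The two ranges overlap, which is why the only defensible reading is "tied."
```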

VA-02 (Taylor vs Luria)

We only have one poll to review here, and it’s an internal poll from Elaine Luria’s campaign. The link here is to Blue Virginia, because they embedded the polling memo from the pollster, so ignore the rest of the drivel:

Garin-Hart-Yang [5]

Live-Caller, n=404
Sept. 5 – 8

Scott Taylor (R) 43%
Elaine Luria (D) 51%

Using the same margin-of-error analysis as above (re-run in the sketch below), Taylor is between 38% and 48%, and Luria is between 46% and 56%, so technically speaking this one is tied, too. However, most people are going to pay attention to the 8-point lead, and they should. Most challengers who defeat incumbents never see a poll showing them leading (due mostly to name ID), so a poll showing a challenger beating an incumbent (albeit a freshman incumbent) by eight points is noteworthy. How much should we credit this internal? Well, its methodology is clearly stated, it is a live-caller poll with cell phones, and it was conducted by a respected Democratic firm. There are no public polls released here, and the analysis is biased, but not misleading. In my opinion, the onus is on those who would discredit the survey to show a different result. That said, polls are a snapshot in time, and this survey was conducted in early September, when the news about the petition signatures was fresh. Two months is a lot of time in political campaigns.
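Plugging the Garin-Hart-Yang sample size into the same sketch (again assuming a 95% confidence level and simple random sampling) gives roughly the ranges cited above:

```python
# Reusing the moe_95 helper from the sketch above.
# Garin-Hart-Yang: n=404, Taylor 43%, Luria 51%
moe = moe_95(404)  # ~0.049, i.e. roughly +/-5 points
for name, share in [("Taylor", 0.43), ("Luria", 0.51)]:
    print(f"{name}: {share:.0%} (range {share - moe:.1%} to {share + moe:.1%})")
```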

VA-01 (Wittman vs Williams) 

Likewise, we have only one poll in this district. Again, it’s an internal poll from the Democrat:

Change Research [6]

Online, n=???
Aug. 29 – 30

Rob Wittman (R) 30%
Vangie Williams (D) 30%

Woo-boy, where to start. First, this is presented as a news article, but is clearly a campaign press release (complete with contact information at the bottom). There’s no methodology statement, just field dates. They don’t even mention how the survey was conducted, though the firm only does online surveys, which are not scientific. For that reason, there is no “margin of error,” but the press release proudly proclaims one anyways. Hilariously, the press release goes out of its way to refer to the survey as “scientifically conducted” (it wasn’t).

You should also be wary of any survey result that claims false precision by reporting results to the decimal point. That’s actually a hallmark of “survey” firms that were founded by folks who don’t have a research background. Sure enough, “Change Research” was founded last year by a Silicon Valley data scientist. I’m sure the firm has smart people working there, but being good at neurosurgery doesn’t prepare you to be a cardiologist.

Anyways, no, Rob Wittman is not polling at 30% in the 1st District. Ignore this poll.

VA-Senate (Stewart vs Kaine)

SSRS (for the University of Mary Washington) [7]

Live-caller, n=800
Sept. 4 – 9

Stewart (R) 36%
Kaine (D) 52%

Roanoke College Poll [8]

Live-caller, n=512
Aug. 12 – 19

Stewart (R) 34%
Kaine (D) 51%

Cygnal/POOLHOUSE [9] [sic]

IVR, n=1,119
Aug. 22 – 24

Stewart (R) 45%
Kaine (D) 50%

It’s time to play “one of these things is not like the other.” The Cygnal survey was a robopoll that called only landlines, no cell phones, and got a wildly different result: not only does it show Stewart with roughly 10 points higher support than any other survey, but it also shows only 5% of voters undecided in August (compared to double-digit undecideds in the other two). Finally, it’s worth pointing out that while Roanoke College has a spotty electoral track record, SSRS is a premier non-partisan survey organization.