The State of the Political Polling Industry

Feb 3, 2015 | Politics

Going into election night, there were anywhere from 7 to 10 competitive Senate elections spread across Iowa, Arkansas, Alaska, North Carolina, Kentucky, Georgia, Colorado, Louisiana, South Dakota, Kansas, and New Hampshire. Traditional political evaluators such as Charlie Cook (Cook Political Report), Stuart Rothenberg (Rothenberg Report), and Larry Sabato (Crystal Ball, based out of UVA), poll aggregators such as Nate Silver (FiveThirtyEight), and news organizations such as the New York Times and the Huffington Post all generally agreed that these seats were competitive.

Here are the results:
[Image: election polling results]
The results are clear.

In Arkansas, incumbency didn’t matter: Sen. Pryor suffered a dramatic defeat, outside the margin of error and by a larger margin than most prognosticators suggested, even accounting for the race’s drift to the right in the waning days of the campaign.

Kentucky saw similar results, despite reports of a tightening race in the campaign’s final days. In Iowa, a seat Harry Reid declared was key to the Senate, the Democrats lost big. In Kansas, a seat long seen as more likely than not to go to the Independent Greg Orman, the fundamentals of the state won out.

Polling in Colorado has always been finicky, and this race polled all over the map, including projections of a far larger victory than Cory Gardner ended up winning. Democrats had held out hope in Colorado based on this polling history and on internal Udall campaign polls showing a race within the margin of error.

In Georgia, Michelle Nunn was seen for a while as the Democrats’ strongest pickup opportunity, but the fundamentals of Georgia proved too great to overcome, and she lost by 8%, despite polling that put her within striking distance (if not in the lead) in the week leading up to the election. In Louisiana, the combined Republican ballots easily beat Mary Landrieu by 12% (she would ultimately lose the December runoff), and in South Dakota, former Sen. Pressler and Rick Weiland split the anti-Rounds vote too much for the race to be competitive at all. In North Carolina, New Hampshire, and Alaska, the polls were largely right, suggesting races within the margin of error.

The first three states shown below, all “safe” Democratic seats in traditionally blue states, were decided by a smaller spread than many of the supposedly competitive races.

[Image: election polling results]

In Virginia, the polls were off by nearly a landslide’s worth. The Real Clear Politics average suggested that Warner would win by 9.7 points; he held on by less than a point. Political scientists suggest that one of the factors voters consider when deciding whether or not to go to the polls is whether their vote will matter. In what was supposed to be a blowout for Sen. Warner, his supporters didn’t feel that their votes mattered, and barely enough of them made it to the polls. Ed Gillespie, on the other hand, was able to run an aggressive campaign as the underdog, his supporters energized by a national mood that favored Republican candidates everywhere.

So what can we take away from this?

First, the old-school prognosticators were more right than wrong, and arguably closer to the truth than the poll aggregators. Folks like Charlie Cook and Larry Sabato put more weight on the fundamentals of a race, such as voter identification, demographics, and past turnout rates, and were mostly proven correct.

Second, pollsters didn’t do so well. This isn’t a new phenomenon either. In the aftermath of the 2012 election, Jim Messina declared that “American polling is broken.” 

In the 2014 primary season, pollsters suggested that Eric Cantor was going to win re-nomination by Republican voters in a 20-point landslide. This polling influenced his campaign decisions, leading to a disengaged strategy in which he carpeted the airwaves with attack ads and didn’t visit the district personally to connect with voters. Underfunded college professor David Brat, whose campaign was run by one of his students, defeated Cantor in the primary by more than 10 points. Similarly, polling in the Mississippi Senate primary between Sen. Cochran and state Sen. McDaniel was all over the map, falsely predicting Sen. Cochran’s demise.

Polling issues were evident in 2013 as well, when the Virginia gubernatorial race was supposed to be a blowout for Terry McAuliffe; in fact, Republican Ken Cuccinelli put up a respectable showing. Polls underestimated Cuccinelli’s support and turnout among Asian, rural, and young voters.

For statewide races in the United States, “pollsters typically use simpler models, and many place less emphasis on self-reported voter enthusiasm.”

The evidence is clear. University of Virginia professor and Crystal Ball publisher Larry Sabato compiled the following data demonstrating the inaccuracies of polling.

[Image: polling averages in competitive Senate races]

Sabato further notes that polling averages call the race incorrectly 12% of the time. Since 2006, each incorrect average predicted the Republican would win, when a Democrat actually won. Looking closer, the same article shows that when the polling is within the margin of error, the likelihood of an erroneous prediction increases.

[Image: when polls turn out wrong]

Polling has a margin of error, which marks most differences in close races as a statistical tie. Media outlets ignore these margins of error and characterize minor variations within the margin as signs of momentum. By the end of Election Day, one in three candidates locked in close races will have pulled an “upset.”
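To make the “statistical tie” point concrete, here is a minimal sketch (not from the original article) of the standard margin-of-error arithmetic, assuming a simple random sample and a 95% normal approximation; the sample size and candidate numbers below are hypothetical.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a single proportion from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical statewide poll of 600 likely voters:
n = 600
moe = margin_of_error(n)            # about +/- 4 points on each candidate's share
lead = 0.48 - 0.45                  # a nominal 3-point "lead"
print(f"MoE per candidate: +/- {moe:.1%}")
# The margin on the *gap* between two candidates is roughly twice the per-candidate MoE,
# so a 3-point lead in a 600-person sample is well inside statistical-tie territory.
print(f"Gap MoE (approx.): +/- {2 * moe:.1%}, observed gap: {lead:.0%}")
```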

While poll aggregators such as Nate Silver have reduced the margin of error by combining polls, “Forecast models rely largely on calculating averages from public polls of varying sophistication, frequency and validity. In midterm contests, smaller organizations with limited budgets and expertise sometimes enter the fray,” reducing the quality of the inputs. Combined with fewer polls being conducted per race and varying methodologies, poll aggregation doesn’t do much to improve accuracy in the midterms.
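A rough sketch of why aggregation helps with random sampling error but not with shared bias; the poll count and sample size here are assumptions chosen purely for illustration:

```python
import math

def single_poll_se(n: int, p: float = 0.5) -> float:
    """Sampling standard error of one poll's estimate of a proportion."""
    return math.sqrt(p * (1 - p) / n)

def average_se(n: int, k: int) -> float:
    """Standard error of the average of k equally sized, independent, unbiased polls."""
    return single_poll_se(n) / math.sqrt(k)

n, k = 600, 5
print(f"one poll of {n}:          +/- {1.96 * single_poll_se(n):.1%}")
print(f"average of {k} such polls: +/- {1.96 * average_se(n, k):.1%}")
# Averaging shrinks *random* sampling error only. If the polls share the same wrong
# turnout model (a systematic bias), the aggregate inherits that bias untouched.
```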

Nate Silver polled pollsters, and they agree: “fewer people are responding to polls this year, compared to 2012, and more expect greater polling error.” 

The bottom line is this: polling is crucial to how political campaigns are covered and to how campaigns run themselves. And polling hasn’t been very reliable.

Challenges to Polling

There are some justifiable reasons why polling has become more challenging. Traditionally, political polling has been done by major newspapers and other news organizations. As newspapers have lost circulation, they have cut the funding and in-house expertise needed to conduct as many effective polls as in years past.

From the way that questions are framed to the order in which they are asked and the affiliation of the group conducting the poll, confirmation bias is always a threat. Fewer and fewer people are willing to pick up the phone and talk about politics, especially beyond a yes or no answer about their voting preferences.

There is a multitude of other issues that affect polling, including a more transient and technologically savvy population. One in three households doesn’t use a landline at all, a share that is even higher among young people.

Targeting by area code is increasingly inaccurate, and cell phone polling has notoriously low response rates (not to mention being exceedingly costly). Targeting specific demographics requires either expensive targeted calls or risky modeling that gives disproportionate voice to small segments of voters.
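That “disproportionate voice” can be quantified with the Kish effective sample size. The sketch below (not from the article) uses a hypothetical 500-person poll in which a small cell of respondents has to be weighted up heavily:

```python
def effective_sample_size(weights):
    """Kish effective sample size: how much a weighted sample behaves like a smaller unweighted one."""
    total = sum(weights)
    return total * total / sum(w * w for w in weights)

# Hypothetical poll of 500 respondents in which a small cell (20 hard-to-reach voters)
# must be weighted up 4x to match its assumed share of the electorate.
weights = [1.0] * 480 + [4.0] * 20
print(f"nominal n = {len(weights)}, effective n ~ {effective_sample_size(weights):.0f}")
# Heavy weights on a handful of respondents shrink the effective sample size,
# widening the true margin of error beyond the headline figure.
```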

Our Assumptions

In 2014, Democratic-affiliated pollsters missed the mark. In 2012, Republican-leaning pollsters fared poorly, making dramatically incorrect assumptions about the composition of the electorate. They assumed that President Obama would be unable to replicate his 2008 success in generating enthusiasm among young people, African Americans, Hispanics, and single women, and they underestimated the turnout of these demographics. For example, in 2008, Gallup’s final polls had 18-to-29-year-olds making up 13% of the electorate, as opposed to 19% in exit polls, which “seems like a huge difference considering that young voters went for Obama by 23 percentage points.”

In 2012, Gallup was marginally closer, projecting that young people would make up 14% of the electorate when exit polls suggested 18%. Racially, Gallup assumed that 78% of the electorate would be white, while exit polls reported about 74.5%.
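To see how turnout assumptions alone can move a topline number, here is a small illustration (not from the article) reusing the 13% vs. 19% youth-share figures above; the 23-point youth margin comes from the quote, while the +2 margin assumed for the rest of the electorate is an invented placeholder:

```python
def topline_margin(groups):
    """Overall margin implied by assumed turnout shares and per-group candidate margins.

    groups: iterable of (share_of_electorate, margin_in_points); shares should sum to 1.
    """
    return sum(share * margin for share, margin in groups)

# Youth share and youth margin are taken from the figures quoted above (13% assumed vs.
# 19% actual; young voters +23 for Obama). The +2 margin for everyone else is a made-up
# placeholder, included only to show the mechanics.
assumed = [(0.13, 23.0), (0.87, 2.0)]   # pollster's assumed electorate
actual  = [(0.19, 23.0), (0.81, 2.0)]   # exit-poll electorate
print(f"margin under assumed turnout: +{topline_margin(assumed):.1f} points")
print(f"margin under actual turnout:  +{topline_margin(actual):.1f} points")
# A 6-point shift of the electorate toward a group that leans 21 points more Democratic
# moves the topline by about 0.06 * 21 = ~1.3 points without any respondent changing sides.
```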

Incorrect assumptions can turn a poll with a solid methodology into a false prognostication. In 2014, the Democratic strategy was to expand the electorate by replicating the coalition that turned out for President Obama in 2012. This was an abject failure, partly because many demographics didn’t turn out in the numbers necessary to support Democrats (young voters, for example, came in at just 11%), but also because, in many states, Democrats lost constituencies that should have been a source of strength (Mark Udall’s numbers with women in Colorado are a prime example). Further complicating matters for Democrats and pollsters, many of the crucial midterms were in states that don’t usually have close races: “The key Senate battlegrounds this year [were] also places like Alaska, Arkansas, Kansas, Louisiana, etc., where most of the public pollsters don’t have a ton of experience.”

In 2012, the Obama campaign registered hundreds of thousands of new voters across the presidential swing states, an unprecedented surge in recent voter registration. (The 2012 campaign collected 60% more voter registrations in battleground states than in 2008.)

These new voters are hard for models to predict, given their lack of voting history. Of all the challenges facing pollsters, the assumptions they make when building models are the most serious, the most difficult to get right, and the most likely to significantly affect polling in the 2016 elections.

What will happen in November 2016?

We don’t know. Polls that try to measure the comparative strength of potential candidates are not likely to be very helpful, given the low name recognition of most candidates, the selection bias of those actively participating in such polls, and the wide variation in which potential candidates are included. While big data has revolutionized political campaigns, the data is only as good as its collection and the assumptions behind it. In the 2014 elections, the polls said that Republicans were more likely than not to win control of the Senate, and they did; but if the polls being conducted aren’t accurately capturing the composition of the electorate, what do we really know about the coming election? Little to nothing. For those interested in the outcome, tune into the 2016 primaries, because polling can only tell us so much, and we’re in for a few surprises come election night.

Topics: Elections
