Anxiety about the 2024 election is high. Both sides know that their candidate might lose — and they want polls to tell them just how scared to be.
Right now, polls are the clearest guide we have. But there’s a problem: Every year, polling gets tougher. We pollsters face three core challenges that threaten the accuracy of all political surveys. Nobody has solved them, and it’s not clear anyone can.
Here’s what we’re up against:
Challenge #1: Almost nobody wants to talk to pollsters — and those who do might be weirdos.
Polling is built on a simple idea: If we talk to a representative, miniature version of a state or country, we can estimate what the whole state or country thinks. It’s like getting a sample at an ice cream shop: one well-mixed spoonful tells you what a whole cone will taste like.
But it’s getting tougher to reach the people needed to build that mini-country or mini-state. Response rates — how often people are willing to pick up a cold call or answer a text from us — have been dropping for decades. The response rates for Pew Research Center’s telephone surveys plunged from 36% in 1997 to 6% in 2018. Nate Cohn of The New York Times reported a 0.4% response rate to his polls in 2022. And any pollster will tell you response rates are low this year as well.
Nonresponse makes good data rare and expensive. Polls are costly, in part, because we spend so much money sending out unanswered texts or making calls that get sent straight to voicemail. And as polls get more expensive, media organizations will either sponsor fewer surveys or opt for polls that reduce costs by cutting corners.
But even when a group can afford to field a poll, nonresponse creates huge potential data problems.
When only 1 out of 100 people take a poll, pollsters have to make statistical adjustments. Some — such as getting the right demographic mix — are easy. If a pollster just can’t reach enough Latino, working-class, young or rural voters, they often give the underrepresented voters they did contact a little more weight in their calculations. Weighting gives each demographic its proper share of influence, even though some groups were harder to contact.
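The arithmetic behind demographic weighting is simple. Here is a minimal sketch; the population shares and sample counts are made-up numbers for illustration, not figures from any real poll or census table:

```python
# Post-stratification weighting: each respondent's weight is
# (the share their group SHOULD have) / (the share it ACTUALLY has).
# All numbers below are illustrative.

population_share = {"young": 0.30, "older": 0.70}  # known from census data
sample_counts = {"young": 15, "older": 85}         # who actually answered

n = sum(sample_counts.values())

weights = {
    group: population_share[group] / (sample_counts[group] / n)
    for group in sample_counts
}

# Underrepresented young respondents count extra; overrepresented
# older respondents count a bit less.
print(weights["young"])   # 0.30 / 0.15 = 2.0
print(weights["older"])   # 0.70 / 0.85 ≈ 0.82
```

Applied to every respondent, these weights restore the census mix: the 15 young respondents, each counted twice, stand in for the 30% of the population they represent.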
Other adjustments are not so easy.
Suppose a pollster has the right demographic mix in their poll but mostly happens to interview, for lack of a better description, nerdy rule-followers. This pollster might miss cranky, anti-establishment Trump voters — and risk undercounting the Trump vote for the third election straight.
It’s almost impossible to directly adjust for this type of issue. The census helps us calculate how many 18- to 34-year-olds should be in a poll, but not how many cranks and nerds. So pollsters have to get creative with math — which leads to another issue.
Challenge #2: We have to model our way around the fact that nobody talks to us. That’s risky.
The most common response to this problem — a shortage of pro-Trump, anti-institution Republicans in the 2020 polls — is weighting by “recalled vote.” Essentially, pollsters ask people how they voted in 2020 and try to get the right number of Trump and Biden voters in their sample.
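Mechanically, recalled-vote weighting works the same way as demographic weighting, just with 2020 vote choice as the target. A minimal sketch — the target shares are roughly the 2020 national result, and the sample counts are invented to show a Biden-leaning sample:

```python
# Weighting by recalled 2020 vote. Target shares approximate the
# actual 2020 national popular vote; sample counts are illustrative.

target_share = {"biden_2020": 0.51, "trump_2020": 0.47, "other_none": 0.02}

# Suppose our respondents skew toward people who recall voting Biden:
sample_counts = {"biden_2020": 60, "trump_2020": 36, "other_none": 4}
n = sum(sample_counts.values())

weights = {g: target_share[g] / (sample_counts[g] / n) for g in sample_counts}

# Overrepresented Biden-2020 voters are weighted down (< 1),
# underrepresented Trump-2020 voters are weighted up (> 1).
print(weights)
```

The catch, as noted below, is that the targets themselves are shaky: respondents misremember their vote, and nobody knows exactly how many 2020 voters of each stripe will turn out in 2024.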
Though I’ve used this tactic in some polls, there are downsides. Respondents don’t always correctly recall whom they voted for. Every estimate of how many Trump or Biden voters will vote again in 2024 is just that — an estimate. The list goes on.
That being said, many reputable pollsters say that weighting by recalled vote improved the accuracy of past surveys. And pollsters that only weight by party — and not recalled vote — might fail to fully address problems that damaged the industry’s credibility four years ago.
There’s no right answer. Everyone is using math to adjust for the sad fact that normal people don’t take surveys. And every pollster is on edge because, if we make the smallest mistake, we’ll be punished for years.
Challenge #3: Elections are closer than ever, so “the polls” will almost certainly be “wrong.”
The last true blowout presidential win was Ronald Reagan’s 1984 re-election. The last 40 years have seen the most consistently competitive presidential races in living memory. That’s bad news for polls — which are blunt instruments rather than precision predictors.
When a pollster randomly samples the electorate, they can — through no fault of their own — accidentally pick up a few too many voters from one side or the other. When we try to poll an upcoming election, we’re making (fallible) projections about who will and won’t turn out. And there’s plenty more uncertainty — from nonresponse, decisions around weighting and more — that’s just not easy to communicate to the lay reader.
In a race this tight, in which survey after survey has Harris and Trump dead even in the swing states, a good pollster could do everything right, yet still miss the result by a point or two and face years of ridicule from a huge audience of readers.
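Pure sampling error alone is enough to produce those one- or two-point misses. A minimal simulation, assuming a perfectly tied race and a flawless poll of 1,000 truly random respondents (all numbers illustrative):

```python
# Simulate many "perfect" polls of a dead-even race and count how
# often random sampling alone pushes the result off by a point or more.
import random

random.seed(42)

def poll_once(n=1000, true_support=0.50):
    """One simulated poll: the share of n random voters backing the candidate."""
    hits = sum(random.random() < true_support for _ in range(n))
    return hits / n

results = [poll_once() for _ in range(2000)]

# Share of honest, error-free polls that still miss 50% by 1+ points.
misses = sum(abs(r - 0.50) >= 0.01 for r in results) / len(results)
print(f"Polls off by a point or more: {misses:.0%}")
```

Roughly half of these simulated polls land a point or more from the true 50% — with no nonresponse, no weighting decisions, and no pollster error at all.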
But we pollsters can’t look at these problems, yell “it’s not fair!” and go home. I’ve built election forecast models, and I’ve seen firsthand that polls are the best tool for predicting elections. More importantly, they’re the only way to ask members of the public what they think on any question, in real time.
The problems with polling are real. Maybe at some point, nonresponse or some other issue will become unsolvable and cause a catastrophic, industrywide collapse. But unless that happens, we’ll need to keep polling, because nothing else quite does what polls do.