Polls Apart

In the field of public opinion polling, to err is human, but not to be able to figure out what caused the error afterward — that’s maddening.

Such is the situation nine months after the 2020 elections, and after several expert analyses of what could have accounted for the most inaccurate polling in a presidential contest in 40 years.

As the authors of the latest attempt, a just-released comprehensive report from the American Association for Public Opinion Research (AAPOR), put it: “Identifying conclusively why polls overstated the Democratic-Republican margin relative to the certified vote appears to be impossible with the available data.”

“Impossible” is a strong word, even when modified by “appears,” but the 19-member AAPOR task force did not reach that bold non-conclusion lightly. They went about the matter methodically. First, they looked at the data that have been hanging over the profession’s heads ever since November 3, 2020:

National polls missed by an average of 4.5 percentage points, and the state polls were even worse, off by more than 5 points. Nor was the error random: the final national polls tilted toward now-President Joe Biden by an average of 3.9 points, and the state polls by 4.3 points. Former President Donald Trump’s support was underestimated by an average of 3.3 points (a massive miss in a nationwide election), while Biden’s was overestimated by about a point.
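Those figures mix two different ways of scoring a poll: average absolute error (how far off, regardless of direction) and average signed error (which way the misses tilt). A minimal sketch of the distinction, using made-up numbers rather than anything from the AAPOR report:

```python
# Illustrative sketch with hypothetical numbers (not the AAPOR data).
# Margins are (Democratic % - Republican %); positive = Democratic lead.

poll_margins = [8.0, 6.5, 2.0, 7.5]   # hypothetical final pre-election polls
certified_margin = 4.5                # hypothetical certified vote margin

# Absolute error: how far off each poll was, in either direction.
abs_errors = [abs(m - certified_margin) for m in poll_margins]
avg_abs_error = sum(abs_errors) / len(abs_errors)

# Signed error: positive means the poll overstated the Democrat.
signed_errors = [m - certified_margin for m in poll_margins]
avg_signed_error = sum(signed_errors) / len(signed_errors)

print(f"average absolute error: {avg_abs_error:.2f} points")   # 2.75
print(f"average signed error:   {avg_signed_error:+.2f} points")  # +1.50
```

When misses scatter in both directions, the signed error shrinks toward zero even though the absolute error stays large. What made 2020 notable, by the report's figures, is that the signed errors almost all pointed the same way, toward Biden, so the misses look like a systematic tilt rather than noise.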

Next, the task force tried to solve this mystery by examining every plausible explanation, ruling them out one by one.

Although the report doesn’t mention it, presumably they considered the possibility of bias or outright mendacity in the polls, and rejected it (too many players involved for a conspiracy to work, and their colleagues too honest).

Or that the election results themselves were false, as Hillary Clinton claimed about 2016. “You can run the best campaign, you can even become the nominee, and you can have the election stolen from you,” she said, implying that such a fate befell her in 2016, and that therefore Trump was an “illegitimate president,” according to The Washington Post in December 2019.

Then they examined the most likely culprit, the reason widely accepted for the inaccurate polling in the Trump-Clinton debacle of 2016: a failure to account for varying educational levels. Trump was surprisingly popular with voters who hadn’t earned college degrees and relatively weak with those who had. But pollsters had learned that lesson and adjusted accordingly in 2020, so that wasn’t the source of their error.

They then proceeded to eliminate other likely suspects: late deciders, who make up or change their minds on Election Day; lying voters, too embarrassed or afraid to admit their liking for the likes of Trump. Even the “Trump effect” (his attacks on polling integrity, which could have made his supporters unwilling to cooperate like good statistical cohorts) foundered on the statistical rock that polls of Senate and governor’s races were off by an even greater margin: 6 points on average.

The AAPOR investigators considered and ruled out everything but climate change and systemic racism. In the end, they admitted to being flummoxed by the mystery.

But they do have a pet theory, despite a lack of evidence for it: that there are two types of Republicans.

“It seems plausible to the task force that, perhaps, the Republicans who are participating in our polls are different from those who are supporting Republican candidates who aren’t participating in our polls,” Josh Clinton, a professor at Vanderbilt University and chair of the task force, told Politico. “But how do you prove that?”

It’s like dark matter in astronomy: they suspect something in the Republican ether, a portion of the electorate that behaves differently from the dictates of scientific opinion sampling. They’ve never seen it and can’t prove it, but they’re pretty sure it’s out there.

If you’re gloating over the liberal pollsters being stumped as to why they grossly underestimated Donald Trump two elections in a row, don’t.

The inaccuracy seems to have been bipartisan: “It’s not clear that Republican pollsters did any better, so it’s an issue that is pretty pervasive,” said Prof. Clinton. “All partisan pollsters have the same incentive to get this right, but Democratic pollsters and Republican pollsters were off by equal amounts.”

As a result, the polling community will go into the next elections with considerably less confidence than before, aware that forces are at work which defy their best efforts to analyze and predict voter behavior.

Indeed, it’s fair to assume that from here on — even before the AAPOR report — many more voters will give less, if any, credence to what the polls say, knowing that the pollsters could be shooting in the dark.

Politicians themselves may also become more skeptical and rely less on polls to formulate their pitch to voters.

But this could be a good thing for democracy. Fewer people will stay home thinking the polls have already decided the outcome, and candidates who would otherwise lose might turn out winners amid the bigger turnout.

It also might inspire candidates to rely less on polls to shape their campaigns, and more on an honest presentation of their positions on the issues.

We don’t know; we have no evidence for it. It’s just a pet theory.