America’s pollsters are in denial. With Democratic control of the Senate confirmed after the AP called Nevada for the Democrats this weekend, the much-vaunted “red wave” the pollsters predicted clearly failed to materialize–yet the pollsters are rushing to spin fact-free revisionist narratives asserting otherwise.
Quantitative statistics and data can often present ambiguous situations with a veneer of objective, unimpeachable fact–which makes it even more disappointing when statistical integrity is twisted or misunderstood.
For the past nine months, we have worked assiduously to correct the false numerical narratives of Putin’s propaganda on everything ranging from dubious Russian national income statistics to the number of companies that have actually pulled out of Russia to the supposed resilience of the Russian economy.
Unfortunately, closer to home, many media commentators treat the election forecasts put out by the domestic political polling industry as the product of highly sophisticated data analysis, providing breathless horse-race coverage of who is up and who is down in the most recent poll, when in reality pollsters’ practices often veer more toward unsupported assumptions and sophistry.
Great expert resources such as the National Opinion Research Center, Pew, and Edelman have better methods and larger samples, and avoid daily headline-driven overnight readings. Some, such as the Harris Poll and Morning Consult, are rather nuanced and accurate. However, media pundits and forecasters lump weaker outlets and partisan pollsters together with these reputable institutions in their analyses.
The GOP-funded Trafalgar Group, as Slate showed, not only badly missed its overall calls but wrongly pronounced swings to the GOP among millennials and Hispanics when the opposite happened.
Two years ago, the New York Times warned that “Trafalgar does not disclose its methods, and is considered far too shadowy by other pollsters to be taken seriously.” Undeterred, however, polling aggregator Nate Silver’s site rated them an A-.
Most pundits and pollsters got it wrong in 2018, 2020, and 2022–not because their artificial intelligence systems failed, but because none of us can learn if we cut ourselves off from the facts and hide in a haze of denial. For example, Nate Cohn from the NYT argued: “I’m surprised by the amount of griping about the polling that I’m seeing. The polls did pretty well! The ‘traditional’ polls did *really* well. Doesn’t get much better.”
Perhaps these pollsters should take a closer look at their own polls. Take the Senate side alone:
- The average poll in the week before election day had Mehmet Oz beating John Fetterman by nearly 1% in Pennsylvania when in reality Fetterman beat Oz by nearly 5%
- The average poll had Adam Laxalt beating Catherine Cortez Masto in Nevada by 1.5% when in reality Cortez Masto is projected to win. In fact, not a single poll in the week before election day projected a Cortez Masto victory.
- The average poll had Herschel Walker beating Raphael Warnock in Georgia by 1% when in reality Warnock outperformed Walker by 1%; and not a single poll in the week before election day projected a Warnock victory
- The average poll had Maggie Hassan beating Don Bolduc in New Hampshire by only 2% when in reality Hassan soundly routed Bolduc by 15%. Two mainstream polls in the week before election day, including the seminal, admired Saint Anselm poll, even predicted Bolduc victories
- An updated prediction, published right before election day by the University of Virginia’s Department of Politics, noted that the Senate races in Georgia, Arizona, Nevada, and Pennsylvania remained “jump balls.” However, the nonpartisan election handicapper shifted its rating in Pennsylvania and Georgia to “leans Republican,” and shifted its rating for four of the six state gubernatorial elections from a “toss-up” to “lean Republican.”
- Gallup confidently declared “The political environment for the 2022 midterm elections should work to the benefit of the Republican Party, with all national mood indicators similar to, if not worse than, what they have been in other years when the incumbent party fared poorly in midterms.”
- The Siena poll found that “independents, especially women, are swinging to the G.O.P. despite Democrats’ focus on abortion rights. …The biggest shift came from women who identified as independent voters. In September, they favored Democrats by 14 points. Now, independent women backed Republicans by 18 points–a striking swing given the polarization of the American electorate and how intensely Democrats have focused on that group and on the threat Republicans pose to abortion rights.”
The misses were even more egregious when it came to the House and governors’ races. As one example of many, the average poll in the Arizona gubernatorial race in the week before election day had Kari Lake winning by 2.4%, with not a single major poll calling a Katie Hobbs victory.
Beyond any individual race, polls seriously misread the mood of the country and the salient issues on voters’ minds. Pre-election polls largely found that voters were apathetic to the issue of democracy and receptive to voting for election deniers, with pundits lambasting President Biden’s pre-election speeches on democracy accordingly.
Evidently the pollsters were wrong. Many of the most vocal election deniers were soundly defeated–ranging from Mark Finchem in Arizona to Jim Marchant in Nevada to Tim Michels in Wisconsin to Kristina Karamo and Tudor Dixon in Michigan to Doug Mastriano in Pennsylvania–even though the first four were generally leading in pre-election polls.
Of course, election surprises come with the territory–and nobody really knows what is going to happen until all the votes are tallied up. But increasingly egregious polling misses year after year call for increased scrutiny into the shortcomings of modern polling sources and methods, as well as a better and more realistic understanding of what pollsters can and can’t know.
Assuming turnout is pseudoscience
Pollsters can only extrapolate the turnout rates of previous years. The last couple of election cycles have seen record turnout across both sides of the aisle, especially among younger voters, further lessening the value of already shaky historical precedents.
Hard priors are elusive
What you hear depends on who you ask. Some polls sample only likely voters, while others survey all registered voters or even all citizens–which can produce vastly different results. Pollsters only have rough exit poll data to work with across demographic breakdowns–and not official data by age, gender, religion, race/ethnicity, marital status, household size, income, employment, education, party, and ideology. With so little clarity to begin with, pollsters have to make their own assumptions about these crucial demographic weightings far more often than they would like to admit.
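To make the stakes of those weighting judgments concrete, here is a minimal, purely illustrative Python sketch. The support rates and electorate compositions are hypothetical and not drawn from any real poll; the point is only that the same raw responses produce different toplines depending on which electorate the pollster assumes will show up.

```python
# Illustrative only: hypothetical support for Candidate D by age group,
# as observed in the raw (unweighted) responses.
raw_support = {"18-29": 0.62, "30-44": 0.53, "45-64": 0.47, "65+": 0.44}

# Two equally defensible assumptions about the age makeup of the electorate.
# Neither comes from official data, which is exactly the problem.
electorate_a = {"18-29": 0.13, "30-44": 0.22, "45-64": 0.36, "65+": 0.29}  # lower youth turnout
electorate_b = {"18-29": 0.17, "30-44": 0.24, "45-64": 0.34, "65+": 0.25}  # higher youth turnout

def weighted_topline(assumed_shares):
    """Post-stratify: weight each group's observed support by its assumed electorate share."""
    return sum(assumed_shares[group] * support for group, support in raw_support.items())

print(f"Topline under assumption A: {weighted_topline(electorate_a):.1%}")  # ~49.4%
print(f"Topline under assumption B: {weighted_topline(electorate_b):.1%}")  # ~50.2%
# The candidate flips from trailing to leading purely on the turnout assumption,
# before a single additional voter is surveyed.
```

Nothing in the survey itself tells the pollster which composition is right, and that judgment is rarely visible in the published topline.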
Voter response bias
The sheer number of pollsters–which has exploded over the last 20 years–creates voter fatigue and tedium, and a lower willingness to respond for privacy and social-desirability reasons.
Pollsters are highly aware that some types of voters are more likely to respond than others–having learned from the 1936 Alf Landon mis-call and the mistakes of the Dewey-Truman era–and thus use a propensity score to adjust for respondents’ propensity to be online. This, too, calls for assumptions made without any grounding in actual voting data. Even the smallest tweaks to these base assumptions and filtering algorithms can significantly alter the tenor of the polling results.
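For readers unfamiliar with the mechanics, the sketch below illustrates one common form of that adjustment, inverse-propensity weighting, with entirely made-up numbers. It is not any particular pollster’s model, only a demonstration of how sensitive the output is to the assumed propensities.

```python
# Illustrative inverse-propensity weighting with hypothetical numbers.
# Suppose politically engaged people answer surveys far more readily.
respondents = [
    # (group, responses, observed support for Candidate D, assumed response propensity)
    ("high-engagement", 600, 0.54, 0.30),
    ("low-engagement",  150, 0.45, 0.05),
]

def ipw_estimate(rows, low_engagement_tweak=0.0):
    """Weight each group by n / propensity; the tweak nudges the low-engagement propensity."""
    numerator = denominator = 0.0
    for group, n, support, propensity in rows:
        if group == "low-engagement":
            propensity += low_engagement_tweak
        weight = n / propensity
        numerator += weight * support
        denominator += weight
    return numerator / denominator

print(f"Baseline estimate: {ipw_estimate(respondents):.1%}")                    # ~48.6%
print(f"Propensity 0.05 raised to 0.07: {ipw_estimate(respondents, 0.02):.1%}")  # ~49.3%
# A two-point change in one assumed propensity moves the weighted estimate by
# the better part of a point, and no voting data can confirm which assumption was right.
```

Small shifts like this compound across the many such assumptions baked into a single published poll.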
Sampling methods
Pew has documented that telephone response rates have fallen below 9%, which is not considered close to valid measurement in any social science field. Online surveying can be even more problematic, as there is no national list of email addresses from which people can be sampled. Thus there is no systematic way to collect a traditional probability sample of the general population relying on the internet.
Sample size
With the exception of Edelman, response sample sizes are often far too small, with most polls surveying fewer than 1,000 people–sometimes only a few hundred. Making things worse is narrow over-specification that asks more of the data than it can give: a sub-category with seven respondents yields nothing but noise.
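The standard margin-of-error arithmetic bears this out. The short calculation below uses the textbook 95% formula for a simple random sample with hypothetical sample sizes; real-world uncertainty is wider still once weighting and design effects are factored in.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Classic 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 300, 7):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
# n = 1000: +/- 3.1%, wider than the roughly 1% Georgia margin cited above
# n =  300: +/- 5.7%
# n =    7: +/- 37.0%, a seven-respondent sub-category is pure noise
```

And that formula assumes a true random sample, which, as noted above, no longer exists for either phone or online polling.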
Wording bias
Poorly phrased questions can create discrepancies between what pollsters sought to measure and how audiences interpret the question, a phenomenon social science researchers call “demand characteristics.” This is worsened by the fact that many pollsters provide only two possible answers to a question in lieu of a more representative and comprehensive Likert scale, eliminating the central tendency and artificially pushing a spectrum of responses toward dichotomous poles.
Drama seeking
The motives of the pollsters–and their sponsors–can be questionable, with tradeoffs between attention and accuracy. Not only are many polls commissioned by partisan groups with obvious biases, but some polling outfits also use provocative polling results to gain the prominence, stature, and expert academic authority that they lack. High-profile polls help lower-profile institutions compete commercially in the attention economy.
It is these methodological shortcomings and constraints that should draw greater attention, rather than the breathless horse-race coverage across the media based on who is up and who is down in the most recent polls.
As Jim Fallows notes, “if any professionals were as off base, as consistently, as political ‘experts’ are, we’d look for someone else to do those jobs.”
Predictive political polling is helpful–as long as we bear in mind its constraints and limitations. There are enough known unknowns inherent in political polling methods, not to mention the unknown unknowns, to relegate it to more of an art than a science.
Without contrition, Nate Silver, one of the prominent pollster pundits who got it wrong, was back at it this weekend, offering predictions on the Georgia U.S. Senate sweepstakes. The only lesson for pollsters seems to be: If you can’t predict accurately, predict often.
Jeffrey Sonnenfeld is the Lester Crown Professor in Management Practice and Senior Associate Dean at Yale School of Management. Steven Tian is the director of research at the Yale Chief Executive Leadership Institute.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.