Weaponizing projections as tools of election delegitimization
Authors: Joe Bak-Coleman, Melinda Haughey, Joey Schafer, Morgan Wack, Jevin West, University of Washington Center for an Informed Public
Forecasting elections is notoriously difficult. In 2016, several models projected an all-but-certain Clinton victory. FiveThirtyEight was an exception among the more established forecasters, giving Trump a seemingly generous 30% chance. In the wake of the 2016 election, modelers and pollsters alike pointed fingers, and post-mortems uncovered no shortage of explanations for what went wrong. Among the many credible explanations, there was no evidence that voter fraud or illegal electoral misconduct was to blame.
This year, however, we are beginning to see forecasts being weaponized to delegitimize the results of the election before the large majority of polls even close. While venerated forecasters like FiveThirtyEight and The Economist have shown Biden in a strong position, several contrarian models have claimed that a Trump victory is certain [1, 2]. While the FiveThirtyEight and Economist teams have taken pains to stress that a Trump victory remains possible, less scrutable data pundits have fomented the narrative that any result out of line with their models is evidence of widespread voter fraud, a view then amplified by social media users.
Caption: Sample tweet claiming that a model being wrong would be indicative of one side committing election fraud.
Under the hood, these models are at best problematic. As one example, a model making the rounds on social media relies on a flawed comparison between national-level interest in voting by mail among Democrats and projected vote-by-mail turnout in a few key battleground states. The problem here, of course, is that the national-level estimates are buoyed by states like Washington, where all registered voters received a mail-in ballot.
Caption: Image of the joeisdone.github.io model of Florida election voting posted to Twitter.
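To see why this comparison misleads, consider a toy calculation (a minimal sketch with made-up numbers; the figures below are ours for illustration, not the model's actual inputs): a national vote-by-mail rate that pools universal vote-by-mail states with battleground states will sit well above the battleground rate itself.

```python
# Illustrative only: hypothetical numbers, not actual polling or turnout data.
# Shows how a national vote-by-mail (VBM) rate that pools universal-VBM states
# with battleground states sits well above the battleground rate itself.

states = {
    # state: (Democratic voters, in millions; share voting by mail)
    "WA (universal VBM)": (2.0, 0.95),  # every registered voter is mailed a ballot
    "CO (universal VBM)": (1.5, 0.94),
    "FL (battleground)":  (5.0, 0.40),
}

total = sum(voters for voters, _ in states.values())
national_vbm = sum(voters * rate for voters, rate in states.values()) / total

print(f"National VBM rate: {national_vbm:.0%}")                    # ~62%
print(f"Florida VBM rate:  {states['FL (battleground)'][1]:.0%}")  # 40%
# Judging Florida's mail-in returns against the inflated national rate would
# make perfectly ordinary turnout look anomalously low.
```

Nothing fraudulent has to happen for the two numbers to diverge; the gap is baked in by how the different states vote.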
There are a host of other “models” that don’t appear to rely on any substantive methodology. Rather, they seem to employ partisan intuition to populate electoral maps like March Madness brackets. Despite their creators releasing little information about how these predictions are made, maps masquerading as models have been widely shared.
The more certain a model or modeler, the more suspicious one should be. Certainty alone is not sufficient to identify a flawed forecast, but if a forecaster leaves no room for error, it should raise red flags. Good forecasters go to great lengths to specify their limitations and constantly deconstruct their assumptions. FiveThirtyEight’s election-eve post on why Trump might win illustrates this uncertainty, despite their model giving that outcome only a 10 percent chance.
The limitations of election forecasts boil down to two sources: the world being noisy, and a given model not being a perfect representation of the world. There is nothing that can be done about the world being noisy. For example, a perfect model of a fair coin toss would still be wrong half the time. In 2016, FiveThirtyEight gave Trump a 30% chance of winning. If forced to choose a winner, it would have been reasonable to go with Clinton, yet that call would still have been wrong 30% of the time. Moreover, much like other forms of misinformation, model warnings, addenda, and retractions are far less likely to be widely shared than provocative predictions.
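A short simulation makes the point concrete (a minimal sketch in Python, not any forecaster's actual code):

```python
import random

random.seed(538)  # reproducible illustration

# A perfect model of a fair coin knows P(heads) = 0.5 exactly. If forced to
# call each flip, it can do no better than 50%: being a perfect model does
# not make the world any less noisy.
flips = [random.random() < 0.5 for _ in range(100_000)]  # True = heads
wrong = sum(1 for heads in flips if not heads) / len(flips)
print(f"Perfect model calling 'heads' every flip is wrong {wrong:.1%} of the time")

# Likewise, outcomes given a 30% chance should occur roughly 30% of the time;
# when one happens, that is noise at work, not a broken model.
events = [random.random() < 0.3 for _ in range(100_000)]
print(f"Events forecast at 30% occurred {sum(events) / len(events):.1%} of the time")
```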
However, forecasters can also get it wrong because their model doesn’t match the real world. An imperfect model, for example, might assume both sides of the coin are heads. Its predictions would be wrong, but for reasons other than noise: the model cannot even conceive of tails occurring, so a coin landing on tails would seem so implausible as to invite suspicion. Put another way, if a weatherman you trust says it is going to be sunny and instead it rains on your parade, it is unreasonable to infer that a third party has altered the weather.
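Extending the same coin sketch (again a toy illustration, not any real forecaster's model), misspecification fails in a qualitatively different way from noise:

```python
import random

random.seed(2020)

def model_prob(outcome: str, p_heads: float) -> float:
    """Probability a coin model assigns to a single flip outcome."""
    return p_heads if outcome == "heads" else 1.0 - p_heads

FAIR = 0.5        # matches the real coin
TWO_HEADED = 1.0  # misspecified: cannot conceive of tails

for flip in ("heads" if random.random() < 0.5 else "tails" for _ in range(5)):
    print(f"{flip:>5}:  fair model p={model_prob(flip, FAIR):.1f}, "
          f"two-headed model p={model_prob(flip, TWO_HEADED):.1f}")
# The two-headed model assigns p = 0.0 to every tails. From inside that model,
# an ordinary tails looks like tampering rather than bad luck.
```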
As the election unfolds, this is unfortunately where we find ourselves with the most confident, and therefore least credible, forecasters. Their models are perfectly confident, making them necessarily wrong. Should their predicted winner lose, these models have the potential to feed the narrative that the election results are fraudulent. We’ve already seen the stage set for this narrative by pundits and the general public alike. With unknowns all but guaranteed in the days following election night, and the possibility of “blue shifts” or “red shifts” as mail-in and in-person ballots are tallied at different rates, these pundits may argue for a suspicious coin.
As the dust settles and votes are counted, there are likely to be surprising swings in vote totals, a natural consequence of how and when different kinds of ballots are counted. Because we live in a noisy world, even our best models can be wrong. Results that run counter to model predictions, whether those predictions come from perfect models or deeply flawed ones, do not equate to illegitimacy.