When voters walked into polling places across key states on Election Day 2025, they faced more than long lines. A wave of manipulation tactics (emailed threats, robocalls, bots, and generative-AI personas) loomed over the entire process, aiming not necessarily to change ballots but to sow doubt, delay votes, and amplify divisive narratives. Let’s take a look at what we saw, what it means, and how we can prepare for bigger threats down the road.
Baseline risk environment
Before we look at what happened leading up to this week’s elections, it’s helpful to establish our baseline risk environment, looking to the recent past for clues about which threats may materialize and where they tend to appear.
Democracies at risk. The World Economic Forum’s 2024 and 2025 Global Risks Reports ranked “misinformation and disinformation”—amplified by generative AI—as the top short-term global risk, warning of election disruption across democracies.
Foreign influence operations still probing U.S. audiences. In the 2024–2025 cycle, researchers repeatedly documented Russian and Chinese state-linked campaigns using fake media sites, AI-generated content, and inauthentic accounts—tactics that remain relevant this week as narratives get reheated and localized. Examples include:
Chinese “Spamouflage” assets posing as U.S. voters to push divisive messages.
Russia-linked networks using AI-generated content and spoofed sites to sway European electorates, most notably in Germany, a playbook since mirrored against U.S. audiences.
Foreign disinformation looks local. Recent operations impersonate local voters and media outlets, using AI-generated sites and sock-puppet accounts to launder narratives that originate abroad.
Increased attack surface. Platform policy shifts have expanded the openings for manipulation in recent years. Two structural changes from the past 18 months are particularly relevant here:
YouTube stopped removing content that falsely claimed fraud in past U.S. presidential elections, narrowing enforcement to future-facing suppression and eligibility claims. That rollback correlated with more election denial content ahead of the 2024 vote.
X (Twitter) has faced pressure from officials in the U.S. and Europe over its Grok AI providing false voting info in 2024; secretaries of state publicly pushed Musk to fix it. Those concerns carry into 2025 given X’s reduced guardrails and the platform’s centrality to political rumor cascades.
Polling place threats spark procedural confusion
In New Jersey and parts of New York City, election officials on Tuesday reported a string of emailed bomb threats to polling locations. The threats proved to be hoaxes, but they triggered temporary closures, relocations, and orders to extend hours. In Middlesex County, New Jersey, a minor was arrested in connection with one of the incidents. Officials emphasized that voting largely proceeded, but with added strain and uncertainty.
Such “swatting”-style tactics represent a growing variant of election interference: the intent appears to be less about hacking vote counts and more about disrupting the process, increasing voter uncertainty, and degrading trust in the system. Such tactics also give the losing side an opening to claim the election wasn’t fair, and even to call for recounts or an entirely new election, though that has not happened in this year’s races.
Notably, we saw similar “non-credible” bomb threats targeting the 2024 election. According to officials, those threats came from Russia. These bomb threats fit into a broader pattern of increasing political violence in America — a pattern that I have written about on Weaponized Spaces.
Social platforms, bots and algorithmic echo-chambers
The digital terrain around the election has become as important as the physical one. Researchers at the Brennan Center for Justice identified targeted Spanish-language misinformation, coordinated inauthentic networks, and AI-enabled content as major threats in the run-up to the 2024 and 2025 elections.
Despite the hype, dramatic AI-deepfake operations did not fully materialize in previous elections, though this may reflect successful mitigation rather than an absence of threat. It is also possible that deepfakes work better in national elections, where candidates are widely known, than in local races, where a candidate may be familiar only within a certain town or county. Foreign disinformation campaigns may likewise focus on the national stage because the nuances of local politics are difficult to learn.
On the micro level, platforms such as X (formerly Twitter) have come under increasing scrutiny. A recent report from the Center for Countering Digital Hate found that X’s “Community Notes” feature, designed to crowd-correct misleading posts, failed on roughly 74% of the 283 misleading posts examined; together, those posts had garnered 2.2 billion views.
Importantly, academic work describes a new class of “sleeper bots” — automated accounts using generative AI to mimic authentic user behavior, engage in discussions, and then seed disinformation when the time is right. A University of Southern California study found these bots could convincingly pass for humans in experiments.
The role of generative AI: hype vs. reality
What about deepfakes and voice clones? The doomsday versions of AI-driven election sabotage have yet to materialize in high-stakes elections, but policy researchers say the foundations are laid for the next cycle. The Brennan Center cautions that although the worst-case scenarios have not come to pass, the effects are likely to be greater in the future.
Indeed, experts identify three major risks posed by AI:
Mass-produced fake content (images, audio, video) that can impersonate candidates or officials.
Personalized micro-targeting enabled by AI that adapts tone or message to small demographic slices.
Synthetic personas—bots that appear human, engage in discussion and build credibility before deploying influence.
A 2023 poll found that 58% of Americans believed AI would increase misinformation in elections, while only 6% believed it would reduce it.
Types of misinformation
When examining information-based election threats, it’s helpful to distinguish three categories:
Misinformation: false or inaccurate information shared without intent to deceive.
Disinformation: deliberately fabricated falsehoods shared with intent to mislead.
Malinformation: genuine information intentionally amplified or weaponized to harm people or institutions.
In the 2025 campaign environment, three categories of tactics seemed to dominate:
Suppression-style tactics, including the bomb-hoax threats designed to delay or discourage voting.
Distortion operations, including AI-generated audio and video content and deepfaked narratives about ballot processing or candidates.
Foreign-linked inauthentic networks mirroring patterns seen in Germany, Romania, and other democratic states: state-sponsored or proxy accounts amplifying discord rather than advocating for a specific candidate.
Why this matters for democracy
Election integrity is not just about ensuring the vote count is accurate. It’s also about ensuring that the process is trusted and the information environment is relatively level. When voters believe the system is unreliable or manipulated, participation and legitimacy suffer.
Platforms face tremendous pressure: remove too little, and false narratives proliferate; remove too much, and accusations of censorship arise. States and the federal government are responding: the Federal Communications Commission (FCC) declared AI-generated robocalls illegal under current law, and states such as New Jersey passed laws criminalizing deceptive AI media related to elections.
What we saw on Election Day 2025
Voting largely proceeded despite the bomb threat hoaxes.
No major reports (yet) of coordinated AI-deepfake sabotage or ballot-system hacks, though the threat environment remains elevated.
The information space remains noisy: bots, synthetic accounts, viral rumors, and weakened platform moderation all raise the baseline risk of confusion and distrust.
The vast majority of online disinformation and hate targeted just a handful of candidates, including Mayor-elect Zohran Mamdani of New York City. Most of the posts targeting him were Islamophobic, followed by accusations of antisemitism and of supporting Hamas.
There is a dire need for real-time monitoring, quick correction, and “good information” to fill the voids left by falsehoods.
Looking ahead
If Election Day 2025 was a warning shot rather than a full-blown meltdown, the next cycle may bring more sophisticated disruption. The tools are ready: generative AI, micro-targeting, and global inauthentic networks. Democracies must build resilience through stronger media literacy, better cross-platform coordination, clearer accountability for social media firms, and rapid-response architecture for election officials.
As voters cast ballots, they may not see the bots or hear the synthetic voices—but this week’s quiet manipulations still set the tone for tomorrow’s louder battles.