The deepfake of Catherine Connolly that appeared in Ireland the night before the October 2025 presidential election was a stark illustration of what has arrived in democratic information ecosystems. The fabricated video showed Connolly declaring her withdrawal from the race, and it was accompanied by a fake RTÉ broadcast clip “confirming” the news. Both videos racked up thousands of shares on Facebook and YouTube before takedowns were processed.

Connolly nevertheless won the election. But the final result did not undo the damage to the wider information environment, or to the Irish public’s confidence in what it was seeing in the last hours before voting. Researchers later discovered that more than 120 images of Irish politicians had been uploaded to an AI-generated content marketplace before the election, indicating that the Connolly deepfake was not an isolated incident. It was one visible component of a much broader, covertly assembled infrastructure.

AI Deepfakes and Global Elections: Key Information

Top global risk designation: WEF Global Risks Report 2026
Notable election affected: Ireland presidential election, October 2025
Targeted candidate: Catherine Connolly (eventual winner)
Specific tactic in Ireland: fabricated RTÉ broadcast claiming the election was canceled
Irish politician deepfake library: 120+ images uploaded to an AI marketplace
Romania 2024 outcome: presidential election results annulled
Romania 2025 rerun tactic: deepfakes of candidates promoting fake investment schemes
Netherlands election: about 400 AI-generated synthetic images
Moldova reference network: Kremlin-connected “Matryoshka” bot network
Storm 1516 Russian campaigns: 77 since 2023 (French Foreign Ministry)
Reference reporting: The Week
Recorded Future impersonations: 82 across 38 countries (July 2023 to July 2024)
US adults expecting escalation: 58%
US states with election deepfake laws: 31 (up from 28 at the end of 2025)
New 2026 state legislation: Maine, Tennessee, Vermont

Enough such events occurred during the 2025 European election cycle to show that the pattern is not coincidental. Romania’s Constitutional Court annulled the results of the country’s 2024 presidential election after evidence of AI-linked interference involving manipulated videos surfaced, an unprecedented ruling in contemporary European political history. During the 2025 rerun, scammers used Facebook to spread deepfake videos of multiple presidential candidates endorsing fictitious government investment programs, fusing financial fraud and political sabotage in a way earlier misinformation campaigns had not.

Approximately 400 AI-generated synthetic images were used to target political candidates during the Netherlands election cycle. In Moldova, a Russian-funded disinformation network targeted the September 2025 parliamentary election, paying individuals to spread pro-Kremlin propaganda and using ChatGPT for advice on satirical framing to boost engagement. Similar operations have surfaced in Germany, Hungary, and the Czech Republic.

Beneath the individual incidents, the structural underpinning has been quietly developing. Between July 2023 and July 2024, Recorded Future documented 82 high-profile deepfake impersonations across 38 countries, 15.8% of which involved electioneering. France’s Foreign Ministry has tracked 77 Russian disinformation operations connected to “Storm 1516” since 2023.

The Kremlin-affiliated “Matryoshka” bot network has used the Luma AI video platform to create content such as the late-2023 deepfake depicting Moldovan President Maia Sandu rejecting her own government. The infrastructure needed to create convincing political deepfakes has moved from research curiosity to commodity, available to any actor with a credit card and an internet connection. By every quantifiable measure, the barrier to entry has been falling.

The “liar’s dividend” is the part of this story more consequential than most media coverage acknowledges. The issue was identified years before deepfakes reached their current quality: as the public grows more aware of synthetic media, politicians and other actors can more easily dismiss genuine damning evidence as “AI-generated.” The dividend cuts both ways. Videos of real wrongdoing can be waved off as fakes, while sufficiently plausible fake videos of invented wrongdoing can spread unchecked. The cumulative impact on democratic accountability is significant.

Voters can no longer confidently distinguish real political content from fake, and politicians now have a means of evading accountability that earlier eras did not offer. Researchers probing the phenomenon have found that neither risk awareness nor financial incentives consistently improve people’s ability to recognize deepfakes, a conclusion drawn from a pre-registered experiment published in iScience and validated in several subsequent studies.

The ‘Dark Mode’ of AI: Deepfakes, Global Elections, and the Looming Threat to Democracy

The regulatory framework assembled in response has been plainly insufficient for the scale of the problem. The EU Digital Services Act subjects platforms to transparency requirements, though enforcement has been inconsistent. A planned EU Code of Practice, expected in May or June 2026, is set to include fact-checking and editorial approval criteria for political content.

In the US, the TAKE IT DOWN Act created 48-hour takedown windows for intimate deepfakes in 2025, but no federal law forbids the use of deepfakes in political campaigns. As of early 2026, only 31 US states had election-related deepfake rules on the books; Maine, Tennessee, and Vermont passed additional legislation this year. The patchwork is inconsistent. In the underlying technical arms race, detection tools have lagged behind creation techniques, and despite growing public awareness of the threat, human ability to recognize synthetic media has improved little.

Watching election-related deepfakes accumulate over the past 18 months, democratic institutions appear to be running a race they are structurally ill-equipped to win on the current trajectory. The World Economic Forum’s Global Risks Report 2026 ranked misinformation and disinformation as the top short-term global risk, alongside geoeconomic conflict and societal polarization, and noted that disinformation appeared to exacerbate or accelerate most of the other risks on the list.

There are enough significant elections scheduled in Asia, Europe, and the Americas in 2026 that the combined experience will significantly influence democratic trust for years to come. In retrospect, whether the cluster of deepfake-affected elections in 2025 appears to be a transitional moment or the early stages of a much deeper crisis will depend on a number of factors, including whether the regulatory frameworks catch up, whether platform moderation improves materially, and whether public media literacy adapts faster than the underlying technology.

The Connolly deepfake did not change the outcome of the Irish election. In a country where the race runs tighter than the polls indicate, the next one might not be so contained. The infrastructure has been built. The trajectory is set. Unless democracies adapt now, they will be defending elections under conditions far worse than today’s.
