When Seeing Is No Longer Believing
The Threshold
In October 2025, less than four days before Ireland's presidential election, a deepfake video appeared online showing a political candidate announcing her withdrawal from the race. The video included fabricated footage of national broadcasters confirming the news. The candidate called it a disgraceful attempt to mislead voters. By the time it was debunked, it had been shared widely enough to require an official public correction.
This was not an isolated incident. In the same year, AI-generated images featured in four of 23 viral videos containing disinformation about voter fraud during Poland's presidential election. In Buenos Aires, deepfakes falsely claiming a candidate had withdrawn appeared hours before polls opened. In South Korea, electoral authorities filed complaints against YouTubers who uploaded deepfake smears of political candidates days before voting.
These are not future risks. They are current events. According to the World Economic Forum, deepfakes crossed a critical threshold in 2026: the telltale glitches that once gave them away have disappeared, and the tools to make them are accessible to anyone with a smartphone. The question is no longer whether AI-generated content can deceive. It is whether the institutions responsible for distinguishing truth from fabrication can adapt fast enough to remain relevant.
The Scale of Synthetic Content
The volume of AI-generated false content has grown at a pace that defies easy comprehension.
NewsGuard, a firm that tracks the reliability of online sources, identified more than 1,200 AI-generated news and information sites operating by mid-2025, publishing under names like iBusiness Day and Ireland Top News, in 16 languages, with little to no human oversight. That is a more than twentyfold increase in two years.
The European Parliamentary Research Service estimates that the number of deepfake videos shared online surged from approximately 500,000 in 2023 to 8 million by 2025, a sixteenfold increase. BBC Verify has reported a surge of AI-generated war imagery during the Iran conflict, with fabricated videos attracting hundreds of millions of views and some creators monetising the viral content.
The economics are straightforward. Before generative AI, mounting a disinformation operation required either significant funding or an army of low-paid workers. Today, as the Bulletin of the Atomic Scientists has documented, the same output can be achieved with relatively cheap keystrokes. The barriers to producing sophisticated fabricated content have fallen to nearly zero.
The Disaster Amplifier
One of the most disturbing applications of AI-generated content has been its deployment during real emergencies.
When a UPS plane crashed during takeoff in Louisville, Kentucky, AI-generated fake articles and videos began circulating on social media before investigators had reached the site. One fabricated video, showing fake firefighters struggling with a fake fire beside a fake fuselage, was shared more than a thousand times. It concluded with a disclaimer that it was for educational purposes only.
The platform response made the problem worse, not better. X's AI assistant, Grok, claimed that a verified photograph of Kentucky's governor at the crash site was actually from a previous disaster. The tool designed to help users evaluate information actively undermined the credibility of real evidence.
This pattern recurs with depressing consistency. Following the capture of Venezuelan president Nicolás Maduro by US forces, misleading AI-generated content racked up millions of views before verified reporting could catch up. The speed advantage that has always characterised social media misinformation has been multiplied by tools that can generate convincing fabrications in seconds rather than hours.
The Cognitive Cost
The damage is not limited to the moments when a specific fabrication deceives a specific viewer. There is a broader, more insidious effect that researchers have begun to document with precision.
Studies from the Reuters Institute and the University of Michigan show that exposure to hyperrealistic misinformation undermines confidence in distinguishing fact from fiction, breeding cynicism and what scholars describe as truth fatigue. The condition is not one of being deceived. It is one of being exhausted by the effort of not being deceived. When every image, video, and audio clip might be synthetic, the rational response is to distrust everything. And when everything is distrusted, nothing can be verified. The liar and the journalist occupy the same epistemic ground.
Research published by Cornell University in 2025 found that the use of large language models can lead to cognitive atrophy and reduced neural plasticity, as the pathways involved in memory go underused. A separate 2025 study found that LLM use reduced critical thinking scores. The tools designed to help people process information may be diminishing their capacity to process it independently.
This is the deeper threat. It is not that any single deepfake will change an election; the evidence on that question is mixed. Princeton researchers concluded that AI-enabled misinformation in the 2024 elections largely succeeded only with audiences who already agreed with the message. The real danger is cumulative: a gradual erosion of the shared assumption that reality can be established through evidence, verified through institutions, and agreed upon across political lines.
The Regulatory Landscape
Governments have responded to the deepfake crisis with varying degrees of urgency and competence.
The European Union's AI Act, expected to fully apply from August 2026, requires labelling of AI-generated content and disclosure of synthetic interactions, with fines of up to 15 million euros or 3 percent of global annual turnover for non-compliance with its transparency obligations. Article 50 mandates transparency measures that, if enforced, would represent the most comprehensive regulatory framework yet applied to synthetic media.
South Korea's AI Basic Act, which entered into force in January 2026, establishes a multi-layered legal framework governing AI-generated content. China's Deep Synthesis provisions, enforced since 2023, require clear labels on synthesised media, though these regulations operate within a framework of state information management that raises its own questions.
The United States has moved in the opposite direction. A Trump administration executive order has blocked states from regulating much of AI. Federal funding for misinformation research was halted in early 2025. Researchers at institutions including MIT, whose work on detection tools had shown promising results, lost their federal support.
The regulatory asymmetry is significant. The countries investing most heavily in AI development are investing least in governing its misuse. The markets where synthetic content proliferates fastest are the ones with the fewest enforceable rules.
The Paradox of Trust
There is one finding in the research that complicates the narrative of inevitable decline.
A field experiment conducted with thousands of readers of Germany's Süddeutsche Zeitung found that when people were exposed to the challenge of distinguishing real images from AI-generated ones, their engagement with the trusted news outlet increased, not decreased. Awareness of AI's capacity for fabrication made credible journalism more attractive, not less. Readers who found the deepfake quiz difficult showed the largest increases in online engagement with the newspaper.
The researchers' conclusion is important: when the threat of misinformation becomes salient, the value of credible news increases. The prospect of fabrication does not inevitably drive audiences away from journalism. It can drive them toward sources they trust, precisely because the need for reliable verification grows in proportion to the volume of unreliable content.
This finding suggests that the crisis of synthetic media, while genuinely threatening, may also create an opportunity for institutions and platforms that can demonstrably earn trust. The threshold for trustworthiness rises with the sophistication of fabrication, which means that outlets cannot stand still. But the demand for credibility has not vanished. It has, if anything, intensified.
The challenge is structural. Maintaining the level of verification that earns trust requires funding, expertise, and institutional commitment at a moment when all three are under pressure from the same economic and political forces documented throughout this analysis.
The Detection Arms Race
Technical countermeasures exist, and they are improving.
Deepfake detection now works by layering multiple methods: analysing pixel-level artefacts, tracking distribution patterns through bot and troll networks, and monitoring account metadata such as creation dates and posting rhythms. Content provenance initiatives, including Content Credentials, aim to establish tamper-evident chains of authenticity for photographs and videos from the moment of capture.
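To make the layering concrete, here is a minimal sketch of how those three signals might be combined into a single review decision. All names, weights, and the threshold are illustrative assumptions for this sketch, not any vendor's actual pipeline; production systems learn these components from large labelled datasets.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Per-item scores in [0, 1]; higher means more suspicious."""
    pixel_artefacts: float   # e.g. generator fingerprints found by a forensic model
    propagation: float       # burstiness of spread through known bot/troll networks
    account_metadata: float  # account age, posting rhythm, device consistency

# Illustrative weights and operating point, assumed for this sketch.
WEIGHTS = {"pixel_artefacts": 0.5, "propagation": 0.3, "account_metadata": 0.2}
THRESHOLD = 0.6

def suspicion_score(s: Signals) -> float:
    """Weighted combination of the three detection layers."""
    return (WEIGHTS["pixel_artefacts"] * s.pixel_artefacts
            + WEIGHTS["propagation"] * s.propagation
            + WEIGHTS["account_metadata"] * s.account_metadata)

def flag_for_review(s: Signals) -> bool:
    """Route to human reviewers rather than auto-removing: detection is probabilistic."""
    return suspicion_score(s) >= THRESHOLD

# Example: strong pixel evidence, moderate bot amplification, a new account.
item = Signals(pixel_artefacts=0.8, propagation=0.5, account_metadata=0.7)
print(flag_for_review(item))  # True (score 0.69)
```

The design point worth noting is that no single layer is decisive: pixel-level forensics degrade as generators improve, which is why distribution and metadata signals carry independent weight.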
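The provenance idea can be sketched just as briefly. Standards such as Content Credentials (built on the C2PA specification) use cryptographically signed manifests rather than the bare hash chain below; this simplified version, with hypothetical edit records, shows only the core tamper-evidence property: altering any earlier step invalidates every later link.

```python
import hashlib
import json

def link_hash(prev_hash: str, record: dict) -> str:
    """Hash the previous link together with this step's edit record."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(capture_bytes: bytes, edits: list) -> list:
    """Anchor the chain in a hash of the captured pixels, then chain each edit."""
    chain = [hashlib.sha256(capture_bytes).hexdigest()]
    for record in edits:
        chain.append(link_hash(chain[-1], record))
    return chain

def verify_chain(capture_bytes: bytes, edits: list, chain: list) -> bool:
    """Recompute from the original capture; any mismatch reveals tampering."""
    return build_chain(capture_bytes, edits) == chain

# Hypothetical history: capture, then a crop, then a colour adjustment.
photo = b"raw sensor bytes"
history = [{"op": "crop", "box": [0, 0, 1024, 768]}, {"op": "colour", "temp": 5600}]
chain = build_chain(photo, history)

print(verify_chain(photo, history, chain))   # True: history matches the chain
history[0]["box"] = [10, 10, 512, 384]       # a silent, undisclosed edit
print(verify_chain(photo, history, chain))   # False: tampering is evident
```

In the real standard, the manifest is signed by the capture device or editing tool, so a verifier needs only a public key rather than the original bytes; the chain above conveys the structure, not the cryptography.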
These tools are necessary. They are not sufficient. Detection is inherently reactive, and generative models improve faster than detection models can adapt. More fundamentally, technical solutions address the supply of misinformation without addressing the demand. As one researcher put it, unless the human hunger for stories that confirm existing beliefs is addressed, detection tools are fighting a shadow.
What This Means
The era in which seeing was believing is over. It ended not with a dramatic announcement but with a gradual accumulation of capability until the tools for fabrication outpaced the tools for verification.
The implications are not abstract. They affect how elections are conducted, how disasters are understood, how wars are perceived, and how individuals make decisions about their health, their finances, and their civic participation. Every domain of public life that depends on shared facts is vulnerable to the erosion of confidence in those facts.
The response cannot be purely technological, purely regulatory, or purely individual. It requires all three: detection tools that keep pace with generation tools, regulatory frameworks that impose accountability on platforms and producers, and a public that has both the media literacy and the institutional support to navigate an environment where the default assumption about any piece of content must now be uncertainty rather than trust.
That is an uncomfortable place to live. But it is where we are.