OnyxWeekly

Where depth meets the headlines

Deepfakes: EU Regulation and the Threat to Democracy

The Scale of a 21st-Century Threat

The rapid development of deepfakes highlights one of the most pressing legal and democratic issues of the 21st century. What began as a niche phenomenon on internet forums in 2017 — when anonymous users first used deep learning algorithms to swap celebrity faces into video content — has metastasized into a global crisis touching finance, politics, personal safety, and the very foundations of democratic governance. Under the legal definition adopted in the EU AI Act (Article 3(60)), deepfakes are AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful.

The numbers paint a stark picture. Cybersecurity firm DeepStrike estimates that deepfake files surged from approximately 500,000 shared across social media in 2023 to a projected 8 million by 2025, a roughly sixteen-fold increase in two years. The trajectory is not merely one of volume but of sophistication. Voice cloning has crossed what researchers call the "indistinguishable threshold": a few seconds of audio now suffice to generate a convincing clone complete with natural intonation, rhythm, emotion, pauses, and breathing noise. Consumer-grade tools from companies like OpenAI and Google have pushed the technical barrier almost to zero, meaning anyone can describe an idea, let a large language model draft a script, and generate polished audiovisual media in minutes.

The human capacity to detect these fabrications is alarmingly poor. Studies report that people correctly identify high-quality deepfake videos only 24.5% of the time, well below the 50% one would expect from guessing at random on a real-or-fake judgment. A 2025 iProov study found that only 0.1% of participants correctly identified all of the fake and real media shown to them, meaning virtually no one possesses reliable natural detection abilities. Compounding the problem is a dangerous confidence gap: approximately 60% of people believe they can successfully spot a deepfake, yet actual performance for video sits at that 24.5% figure.

The financial impact is severe and accelerating. Deloitte's Center for Financial Services projects that fraud losses in the United States facilitated by generative AI will climb from $12.3 billion in 2023 to $40 billion by 2027, representing a compound annual growth rate of 32%. Individual incidents can be devastating: in February 2024, a finance worker at engineering firm Arup was tricked into wiring $25 million after participating in a deepfake video conference call where every other participant was a synthetic impersonation. The global deepfake AI market itself — encompassing both creation and detection technologies — is projected to surge from $0.85 billion in 2025 to $7.27 billion by 2031, registering a compound annual growth rate of 42.8%.
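The growth figures quoted above are compound annual growth rates (CAGR), which can be checked directly from the endpoint values. A short worked example (the helper function and comments are mine, not from the cited reports):

```python
# Worked check of the compound annual growth rates quoted above.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1 / years) - 1

# Deepfake AI market: $0.85B (2025) -> $7.27B (2031), six years.
market_growth = cagr(0.85, 7.27, 6)
print(f"Market CAGR: {market_growth:.1%}")  # ~43%, consistent with the cited 42.8%

# GenAI-enabled US fraud losses: $12.3B (2023) -> $40B (2027), four years.
fraud_growth = cagr(12.3, 40.0, 4)
print(f"Fraud-loss CAGR: {fraud_growth:.1%}")  # ~34% from these endpoints; Deloitte reports 32%
```

The small gap between the recomputed fraud-loss rate and Deloitte's published 32% likely reflects rounding in the endpoint figures.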

Perhaps most disturbingly, the overwhelming majority of deepfake content by sheer volume is non-consensual intimate imagery. Estimates consistently show that 96–98% of all deepfake videos online fall into this category, disproportionately targeting women and, increasingly, minors. In South Korea, roughly 297 deepfake sex crime cases were reported in just seven months of 2024, nearly double the figure from 2021, while "nudify" bots on Telegram reached approximately 4 million monthly users in the country by late 2024.

The European Regulatory Architecture

At the European level, EU legislation seeks to balance technological capabilities with the need to protect fundamental rights and the democratic process. There is no single, unified EU-level law on deepfakes; instead, the EU regulates them through several interconnected instruments built on a developed content-moderation framework. The two primary pillars are the AI Act and the Digital Services Act, with the GDPR forming the foundational base. The key instruments are:

The AI Act: Transparency by Design

The Artificial Intelligence Regulation (AI Act), adopted in 2024 and entering full application on 2 August 2026, represents the world's first comprehensive, rights-centered regulatory framework for AI systems. It operates through a risk-based classification system that subjects different AI applications to proportionate levels of oversight.

At the apex of the hierarchy are AI practices classified as posing "unacceptable risk," which are outright prohibited. These include systems designed to exploit vulnerabilities, deploy subliminal manipulation techniques, or enable social scoring. The Act establishes severe penalties that scale with violation severity and organizational size: prohibited practices trigger fines up to €35 million or 7% of worldwide annual turnover, whichever is higher. Non-compliance with high-risk system requirements faces penalties up to €15 million or 3% of turnover. Even transparency violations can result in €7.5 million or 1.5% of turnover.
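The "whichever is higher" structure of these caps means the binding ceiling depends on a company's turnover. A minimal sketch of that logic (the tier figures come from the Act as described above; the example turnover is hypothetical):

```python
# Illustrative computation of the AI Act's tiered "whichever is higher" penalty caps.
# Tier amounts follow the Act as summarized above; the company turnover is hypothetical.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),      # up to EUR 35M or 7% of turnover
    "high_risk_noncompliance": (15_000_000, 0.03),  # up to EUR 15M or 3%
    "transparency_violation": (7_500_000, 0.015),   # up to EUR 7.5M or 1.5%
}

def max_fine(violation: str, worldwide_turnover_eur: float) -> float:
    """Return the applicable penalty ceiling: fixed cap or turnover share, whichever is higher."""
    fixed_cap, pct = TIERS[violation]
    return max(fixed_cap, pct * worldwide_turnover_eur)

# For a firm with EUR 2B worldwide turnover, the 7% arm dominates the EUR 35M floor:
print(max_fine("prohibited_practice", 2_000_000_000))  # EUR 140M
```

For smaller firms the fixed amount dominates instead, which is precisely why the Act states both arms.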

For deepfakes specifically, the AI Act introduces systematic transparency and labeling requirements through Article 50. AI systems that create synthetic content must mark their outputs as artificially generated, and companies must inform users when they are interacting with an AI system. Providers of generative AI systems must ensure that outputs — whether audio, image, video, or text — are marked in a machine-readable format and are detectable as artificially generated or manipulated. These technical solutions must be effective, interoperable, robust, and reliable. Deployers must disclose when AI is used to create realistic synthetic content, including deepfakes, by clearly informing users that such content is artificially generated or manipulated. In practice, this means deepfakes must be labeled even when the content is lawful; however, where content is evidently artistic, creative, satirical, or fictional, only minimal and non-intrusive disclosure is required.
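What "marked in a machine-readable format" can look like in practice is easiest to see with a toy example. The manifest schema below is hypothetical: Article 50 does not prescribe a format, and real deployments typically rely on standards such as C2PA, but the core idea of a machine-readable disclosure bound to the media bytes can be sketched with the standard library:

```python
# A minimal sketch of machine-readable AI-content labeling in the spirit of
# Article 50. The manifest schema here is hypothetical; the AI Act does not
# prescribe a format, and real deployments use standards such as C2PA.
import hashlib
import json

def build_ai_content_manifest(media_bytes: bytes, generator: str) -> str:
    """Return a JSON manifest declaring the media as AI-generated."""
    manifest = {
        "claim": "ai_generated",  # machine-readable disclosure
        "generator": generator,   # tool that produced the content
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds manifest to the media bytes
    }
    return json.dumps(manifest, sort_keys=True)

fake_frame = b"\x89PNG synthetic image bytes"
print(build_ai_content_manifest(fake_frame, "example-video-model"))
```

Because the manifest carries a hash of the content, a checker can detect both a missing label and a label detached from its original media.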

On 17 December 2025, the European Commission published the first draft of the Code of Practice on Transparency of AI-Generated Content, a significant milestone in operationalizing these obligations. Developed through collaborative effort involving hundreds of participants from industry, academia, civil society, and Member States, the Code emerged from two Working Groups established in November 2025, with the drafting process incorporating 187 written submissions from a public consultation, three workshops, and a review of expert studies. Among the most notable proposals is the development of an "EU common icon" — a symbol enabling people to identify at a glance whether an image depicting an apparently real event or person has been created or edited using AI, while providing access to further information. The Code is expected to be finalized by mid-2026, just ahead of the transparency obligations becoming legally binding in August 2026.

The Digital Services Act: Platform Accountability

The Digital Services Act (DSA), which has been fully applicable since February 2024, addresses deepfakes from a different but complementary angle: platform responsibility. Without prejudice to obligations relating to illegal content, the DSA requires very large online platforms — those exceeding 45 million users in the EU — to identify and mitigate the systemic risks that may arise from the spread of deepfake content, in particular the risk of real or foreseeable negative effects on democratic processes, political debate, and electoral processes, including through misinformation. Very large platforms must undergo independent audits and risk assessments to limit the spread of harmful content.

The DSA's practical significance was demonstrated dramatically during the Romanian presidential election crisis. On 17 December 2024, the European Commission announced it was formally investigating whether TikTok had breached the DSA by failing to properly mitigate risks to Romania's presidential election. The investigation — still ongoing — represents the first major enforcement test of the DSA's election-integrity provisions and could set precedents for how platforms are held accountable for algorithmic amplification of synthetic content during electoral periods. Companies face fines of up to 6% of their annual global revenue for DSA non-compliance.

GDPR: The Data Protection Foundation

European legal culture is not limited to content regulation, but seeks to integrate issues of copyright, data protection, and strict liability into a coherent framework without impeding the right to freedom of expression and the right to freedom of art, often leveraging existing legal principles to address deepfakes.

The General Data Protection Regulation, though not designed with deepfakes specifically in mind, provides a critical layer of protection. Creating a deepfake typically involves using an individual's likeness, which, even when the resulting content is fictional, qualifies as personal data under Article 4(1) of the GDPR because it relates to an identified or identifiable natural person. Processing such data without a lawful basis or consent violates GDPR principles, potentially attracting hefty fines and legal action. Moreover, deepfakes that depict identifiable individuals may involve the processing of biometric data, a special category under Article 9 that is subject to heightened protections.

EU citizens can invoke their "right to erasure" under Article 17 to demand the removal of unauthorized deepfakes depicting them. Companies that misuse someone's likeness for AI can face GDPR fines of up to €20 million or 4% of annual global turnover.

Emerging Copyright Dimensions: The Danish Initiative

A novel dimension in the European legal response emerged in 2025 when Denmark proposed pioneering amendments to its Copyright Act during its Presidency of the Council of the EU. The proposed amendments would protect people's personal characteristics — including appearance and voice — through copyright law, requiring consent from the person being imitated and providing protection lasting 50 years after death. While several Member States have acknowledged the importance of protecting image, voice, and likeness, some have questioned whether copyright is the appropriate legal vehicle. The European Parliament is expected to vote in spring 2026 on an own-initiative report regarding copyright and generative artificial intelligence, which may further clarify the EU's approach.

This patchwork of interconnected instruments — the AI Act's transparency requirements, the DSA's platform accountability, the GDPR's data protection baseline, and emerging copyright frameworks — reflects a distinctly European approach: comprehensive, multi-layered, and rights-centered, yet acknowledging the inherent difficulty of governing a technology that evolves faster than legislation can follow.

The Democratic Dimension: When Deepfakes Enter the Ballot Box

The democratic dimension of the deepfake challenge is crucial: the ability to produce and disseminate deepfakes featuring public figures expressing false political positions or fabricated events can erode trust in institutions and disrupt electoral processes. The experiences of European countries in recent election cycles demonstrate that this is not a theoretical risk but a present and escalating reality.

Slovakia 2023: The First Election "Swung by Deepfakes"?

The 2023 Slovak parliamentary elections thrust the small Central European country into the global spotlight as a cautionary tale. Two days before the election, a fake audio clip surfaced purportedly capturing Robert Fico's main rival, pro-European candidate Michal Šimečka, discussing electoral fraud with a prominent journalist. Although both Šimečka and the journalist quickly denied its authenticity, the clip went viral. The timing was devastating: the deepfake was released during Slovakia's electoral "silence period," a legal provision prohibiting media discussion of election developments in the final 48 hours before voting, which severely limited the ability of the targeted parties and media to respond or debunk the fabrication.

The disinformation campaign involved multiple deepfakes: an earlier audio impersonated then-President Zuzana Čaputová endorsing an extremist candidate, while another falsely claimed Šimečka's party intended to increase beer prices dramatically. The audio deepfake depicting the alleged conversation about election fraud spread primarily through Telegram before jumping to Facebook, where it reached tens of thousands of users. Because the deepfake was audio rather than video, it exploited a loophole in Meta's manipulated-media policy, which at the time only covered faked videos.

Šimečka's loss, despite leading in the polls, fueled speculation about the election being "the first swung by deepfakes." While Šimečka himself acknowledged the deepfake "probably had some effect," researchers at Harvard's Misinformation Review caution against a simplistic narrative, emphasizing that the deepfake's impact must be understood within the context of long-term Russian influence operations and historically low public trust in Slovak media. The incident nonetheless revealed critical gaps in social media platforms' response mechanisms and the vulnerability of electoral silence periods in the digital age.

Romania 2024: A Presidential Election Annulled

If Slovakia was a warning, Romania became a turning point. On 6 December 2024, Romania's Constitutional Court annulled the first round of the country's presidential election — a historic decision highlighting the growing difficulties that information integrity, hybrid warfare, illicit political finance, and digital technologies pose to elections. Romania became the first European country to cancel a presidential election due to digital interference.

The crisis centered on Călin Georgescu, a fringe far-right candidate with anti-Western, pro-isolationist positions who had polled at just 5% before surging to first place in the initial tally. Investigations by the Romanian state security council uncovered extensive cyberattacks, including over 85,000 attacks against electoral IT infrastructure, as well as coordinated amplification of Georgescu's messaging through AI-generated content, bot networks, troll farms, Telegram channels, and other algorithm-manipulation techniques. Stolen election server credentials were found on Russian forums.

The Constitutional Court's decision to annul rested on the finding that these activities had significantly distorted the information environment. The ruling signified an important shift; whereas historically annulments had been associated with clear procedural errors or fraud, the Romanian decision highlighted the impact of the extensive deployment of AI, automated systems, and coordinated information integrity campaigns on electoral integrity. The decision prompted the European Commission to open a formal DSA investigation into TikTok, which had served as the primary platform for Georgescu's viral campaign — one that had amassed 62 million views in a single week.

A new election was held in May 2025, with pro-European independent Nicușor Dan ultimately winning the presidency with 53.6% of the vote. Romania's experience now serves as a precedent and a warning for democratic systems worldwide.

The 2024 European Parliament Elections and Beyond

The pattern extended to the European Parliament elections of June 2024, conducted against the backdrop of the World Economic Forum naming AI-generated disinformation as one of the greatest short-term global risks. Investigations by DFRLab, Alliance4Europe, and AI Forensics identified 131 pieces of undeclared AI-generated or AI-manipulated content circulating across Instagram, Facebook, X, Telegram, and Vkontakte in the weeks before the vote. Much of it was deployed by far-right parties in France, Italy, and Belgium that had pledged to respect ethical campaigning standards.

The phenomenon has continued to accelerate in 2025. During the German parliamentary elections in early 2025, the Russian-linked "Storm-1516" operation used AI to create over 100 fake websites pushing deepfakes and fabricated stories targeting political figures. Shortly after the first round of the May 2025 Polish presidential election, AI-generated images featured in four of 23 viral videos containing disinformation alleging voter fraud, none of which included disclosure labels. Research across the 87 countries that held elections since 2023 found that 33 have experienced deepfake-related cases.

Perhaps most troublingly, deepfakes are compounding an existing crisis of gender-based political violence. Italian Prime Minister Giorgia Meloni was targeted by pornographic deepfakes and is seeking €100,000 in damages. Across elections in India, Indonesia, and Mexico, AI-generated defamatory images have specifically targeted female candidates, amplifying misogynistic stereotypes and raising questions about who can safely participate in public life.

Implementation Challenges and the Road Ahead

The European approach attempts to balance the protection of freedom of expression with the need to ensure truth and democratic legitimacy, but it does not overlook the significant difficulties of implementation in practice. Several structural challenges remain.

The first is temporal: regulatory timelines do not match technological velocity. The AI Act's transparency obligations for deepfakes only become legally binding in August 2026 — and the European Commission's "Digital Omnibus" proposal of November 2025 may push some provisions even further to 2027. Meanwhile, real-time deepfake manipulation combining live video and voice is already emerging, marking a shift from static or pre-recorded fakes. Voice phishing jumped 442% in late 2024 alone, driven by increasingly convincing vocal impersonations.

The second challenge is jurisdictional. Deepfakes created outside the EU can be disseminated globally, complicating enforcement against anonymous actors operating from non-EU jurisdictions. The DSA's reach over platforms operating in Europe is broad, but content moderation remains reactive rather than preventive, and investigation timelines are slow — as the still-ongoing TikTok investigation demonstrates. Detection tools' effectiveness drops by 45–50% when used against real-world deepfakes outside controlled laboratory conditions.

The third is political. Several MEPs elected partly through the use of unlabeled AI content now hold positions on committees directly responsible for shaping the EU's response to disinformation, digital regulation, and AI, creating what observers have called a fundamental paradox of European AI governance. The Identity and Democracy group, which had signed the 2024 European Parliament Elections Code of Conduct, dissolved and reformed as Patriots for Europe after the election — without carrying over its prior commitments.

The fourth is the phenomenon scholars call the "liar's dividend": even when deepfakes are detected and debunked, they leave residual doubt. The mere existence of deepfake technology allows authentic content to be dismissed as fabricated, eroding epistemic trust across the information ecosystem. As a European Digital Media Observatory study found, over 40% of EU respondents had encountered AI-generated media in the previous six months, often without recognizing it as such.

A Regulatory Model or Work in Progress?

The European Union's multi-layered approach to deepfake regulation — combining the AI Act's transparency mandates, the DSA's platform accountability, the GDPR's data protection baseline, and emerging frameworks around copyright and personality rights — represents the most ambitious attempt by any jurisdiction to address the synthetic media challenge comprehensively. The framework is grounded in a rights-based philosophy that seeks to preserve freedom of expression and artistic creativity while safeguarding democratic integrity and individual dignity.

Yet the coming period will decisively test whether this regulatory architecture can function not only as a model for effective and well-founded regulation but as a living, adaptive system capable of responding to a technology that doubles in volume every six months and advances qualitatively with each generation of AI models. The experiences of Slovakia, Romania, Germany, Poland, and the European Parliament elections collectively demonstrate that the threat is neither hypothetical nor distant. It is here, it is accelerating, and the democratic stakes could not be higher.

The meaningful line of defense, as computer scientist Siwei Lyu of the University at Buffalo has argued, will ultimately shift away from human judgment toward infrastructure-level protections: secure provenance through cryptographically signed media, AI content tools using the Coalition for Content Provenance and Authenticity (C2PA) specifications, and multimodal forensic detection systems. Simply looking harder at pixels — or at legal texts — will no longer be adequate. Europe's challenge is to ensure that its regulatory ambition keeps pace with the technological reality it seeks to govern, and that the democratic values it defends remain resilient in an era where reality itself has become contestable.
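The provenance idea Lyu points to is mechanically simple: bind a cryptographic signature to the media bytes at capture or creation time, so that any later alteration is detectable. The sketch below illustrates only that principle; real C2PA manifests use X.509 certificate chains and COSE signatures, and the stdlib HMAC with a shared demo key stands in for them purely for illustration:

```python
# Sketch of provenance verification: bind a signature to the media bytes so any
# alteration is detectable. Real C2PA tooling uses X.509 certificate chains and
# COSE signatures; stdlib HMAC with a shared key stands in here for illustration.
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # hypothetical; real systems use asymmetric device keys

def sign_media(media: bytes) -> bytes:
    """Sign the SHA-256 digest of the media bytes."""
    digest = hashlib.sha256(media).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def verify_media(media: bytes, signature: bytes) -> bool:
    """Check that the media bytes still match the signature (constant-time compare)."""
    return hmac.compare_digest(sign_media(media), signature)

original = b"camera sensor output"
sig = sign_media(original)
print(verify_media(original, sig))                 # True: untouched media verifies
print(verify_media(original + b" tampered", sig))  # False: any edit breaks the binding
```

The practical consequence is the inversion Lyu describes: instead of asking "can we prove this is fake?", verifiers ask "can this prove it is authentic?", shifting the burden onto content that lacks a valid provenance chain.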
