Professor's likeness hijacked in synthetic video rant
King's College London theatre professor Alan Read was scrolling social media when an obscure account tagged him in a video that used his face and a voice nearly indistinguishable from his own. The clip featured a tirade against French President Emmanuel Macron, calling Western leaders passengers on a "Titanic" labeled the European Union.
"Almost everything in that video is egregious and deeply unsettling," Read, who has no political affiliations, told the BBC. "It felt completely foreign to me."
AI-generated influence campaigns escalate
Read's experience reflects a broader trend: a surge in Russia-linked synthetic videos flooding platforms in recent months. Security analysts warn the West is unprepared for the Kremlin's evolving use of artificial intelligence to shape public opinion.
"We're witnessing a revolution in political influence. These systems can generate persuasion at scale for pennies, and our governance structures aren't equipped to handle it."
Chris Kremidas-Courtney, European Policy Centre
Disinformation targets EU, Ukraine amid funding debates
The videos, some amassing hundreds of thousands of views, spread narratives discrediting EU institutions and accusing Ukrainian leadership of corruption. The timing coincides with Kyiv's efforts to secure Western funding as the war with Russia enters its fifth year.
The uptick follows the release of OpenAI's Sora 2, a video-generation tool that significantly improved realism. Competitors racing to capture market share have slashed prices or removed safeguards such as watermarks, making it easier to produce undetectable fakes.
"Second-tier apps will let you create videos of specific people, even if OpenAI tries to block it."
Arman Tuganbaev, Russian AI expert
OpenAI stated it takes action against accounts engaged in harmful deceptive activities, including misrepresenting content origins.
Tactics evolve beyond traditional propaganda
In December, AI-generated videos of young Polish women advocating for "Polexit," Poland's withdrawal from the EU, went viral on TikTok. Poland's government spokesman, Adam Szlapka, attributed the clips to Russian disinformation, noting telltale Russian syntax.
TikTok removed the videos and associated accounts, reporting over 75 covert influence operations dismantled globally in 2025. Meanwhile, UK officials warn similar deepfakes could disrupt local elections in May.
"We've seen them used in elections worldwide. Britain won't be an exception."
Vijay Rangarajan, UK Electoral Commission
Britain's Online Safety Act lacks explicit provisions for disinformation; it requires platforms to remove only proven foreign influence, often too late to prevent viral spread.
Networks exploit plausible deniability
Western researchers identify common traits in the videos, linking them to Kremlin-aligned disinformation units. One campaign, dubbed Matryoshka (after Russian nesting dolls), layered false claims about Moldovan President Maia Sandu during her 2025 election campaign.
Unlike overt propaganda outlets such as RT or Sputnik, which were sanctioned after Russia's invasion of Ukraine, these operations use hacked or dormant accounts to obscure their origins, complicating countermeasures.
"They offer plausible deniability, making it harder to counter their influence."
Sophie Williams-Dunning, Royal United Services Institute
Clemson University researchers traced another network, Storm-1516, to veterans of Yevgeny Prigozhin's Wagner-linked troll factory. Their study found that false narratives about Ukrainian President Volodymyr Zelensky's corruption accounted for 7.5% of X discussions about him within a week of deployment.
"That's a success rate any marketing firm would envy."
Darren L. Linvill, Clemson University