Deepfake video of stock exchange CEO sparks alarm in India
At the beginning of 2026, a fabricated video of Sundararaman Ramamurthy, CEO of the Bombay Stock Exchange, circulated on Indian social media platforms. The clip showed him advising investors on stock picks, promising high returns. However, the footage was a deepfake created using artificial intelligence.
Ramamurthy confirmed the video was not genuine. "It was publicly accessible, potentially misleading viewers into trading based on my supposed recommendations," he stated. "We immediately filed complaints and worked with platforms like Instagram to remove the content. We also issued public warnings to prevent financial losses."
He acknowledged the difficulty in tracking the video's reach. "We don't know how many people saw it, so we can't measure its impact. Our goal is for it to have had no effect; no one should suffer losses from false information."
Corporate deepfake attacks escalate globally
Ramamurthy's case is part of a broader trend. Karim Toubba, CEO of US-based cybersecurity firm LastPass, reported a 3,000% increase in deepfake incidents over the past two years. In 2024, Toubba himself was targeted when an employee received a fraudulent WhatsApp message impersonating him.
"The request came through an unsanctioned channel-WhatsApp-and on a personal device, which raised red flags," Toubba explained. "Our team flagged it, and no damage occurred."
Not all companies were as fortunate. British engineering firm Arup fell victim to a sophisticated deepfake scam in 2024. A Hong Kong-based employee transferred $25 million (£18.5 million) to fraudulent accounts after a video call with what appeared to be the company's London-based CFO and other staff, all of whom were AI-generated impersonations.
AI fraud tools evolve faster than defenses
Stephanie Hare, a technology researcher and co-presenter of the BBC's AI Decoded, warned that such attacks highlight the need for stricter verification protocols. "No company should authorize a $25 million transfer based solely on a video call," she said. "Businesses must adopt additional security measures for high-stakes communications."
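Hare's recommendation can be expressed as a simple policy rule: a high-value transfer is released only when confirmations have arrived over independent, pre-registered channels, never on the strength of a video call alone. The sketch below is a hypothetical illustration of such a rule; the threshold and channel names are invented for the example and do not reflect any firm's actual controls.

```python
from dataclasses import dataclass, field

# Illustrative values only; real policies set their own thresholds and channels.
HIGH_VALUE_THRESHOLD = 10_000
REQUIRED_CONFIRMATIONS = {"callback_to_registered_number", "in_person_or_signed_approval"}

@dataclass
class TransferRequest:
    amount: float
    # Channels through which the request was independently confirmed.
    confirmations: set = field(default_factory=set)

def may_execute(req: TransferRequest) -> bool:
    """A video call alone never authorizes a high-value transfer:
    every required out-of-band confirmation must also be present."""
    if req.amount < HIGH_VALUE_THRESHOLD:
        return True
    return REQUIRED_CONFIRMATIONS <= req.confirmations
```

Under this rule, a $25 million request backed only by a convincing video call is refused, because `{"video_call"}` does not contain the required out-of-band confirmations.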
Matt Lovell, CEO of UK cybersecurity firm CloudGuard, noted the growing accessibility of deepfake technology. "Creating highly convincing audio and video now takes minutes," he said. "A basic attack costs $500-$1,000 using free tools, while more advanced schemes can reach $5,000-$10,000."
Detection tools struggle to keep pace
Countermeasures do exist: Lovell described detection software that analyzes facial micro-expressions, head movements, and subtle physiological cues, such as blood flow under the eyes or cheeks, to distinguish real people from AI-generated fakes. Even so, fraudsters continue to outpace these defenses.
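As a rough illustration of the physiological-signal idea Lovell describes, one family of liveness checks looks for the faint periodic colour change the heartbeat produces in skin pixels (remote photoplethysmography). The toy sketch below is an assumption-laden simplification, not any vendor's method: it scores how much of a brightness trace's variance sits at a candidate pulse frequency, on the premise that a real face shows a periodic component where many synthetic faces do not.

```python
import math
import random

def pulse_score(trace, fps=30.0, pulse_hz=1.2):
    """Fraction of a brightness trace's variance at the candidate pulse
    frequency (toy heuristic, not a production detector)."""
    n = len(trace)
    mean = sum(trace) / n
    centred = [v - mean for v in trace]
    # Project the trace onto sine and cosine at the pulse frequency
    s = sum(v * math.sin(2 * math.pi * pulse_hz * i / fps) for i, v in enumerate(centred))
    c = sum(v * math.cos(2 * math.pi * pulse_hz * i / fps) for i, v in enumerate(centred))
    band_power = 2.0 * (s * s + c * c) / (n * n)   # variance of the pulse-band component
    total_power = sum(v * v for v in centred) / n or 1e-12
    return band_power / total_power

# Synthetic demo: a "live" trace with a 1.2 Hz pulse vs. a flat noisy fake
random.seed(0)
live = [100 + 0.5 * math.sin(2 * math.pi * 1.2 * i / 30) + random.gauss(0, 0.05)
        for i in range(300)]
fake = [100 + random.gauss(0, 0.05) for i in range(300)]
assert pulse_score(live) > pulse_score(fake)
```

Production systems work on real video, track skin regions, and search a whole band of heart-rate frequencies, but the underlying signal they exploit is the same periodic one this sketch scores.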
Toubba compared the situation to an arms race. "There's significant investment in detection technologies, which will speed up our ability to block these threats," he said. However, Lovell remained pessimistic: "Attack methods are advancing faster than our ability to automate defenses. Organizations aren't moving quickly enough to counter the threat."
Cybersecurity talent shortage worsens risks
Hare emphasized the global shortage of cybersecurity professionals. "We need more experts to combat these frauds," she said. "Companies are slowly recognizing the urgency, but historically, security wasn't a top priority. Now, with executives being impersonated, leaders are spending more time with their chief information security officers, and that's a positive shift."