AI deepfakes blur lines between real and fake

A BBC journalist's experiment reveals how advanced AI is making it nearly impossible to prove one's own authenticity, even to close family members.

The experiment

When technology journalist Thomas Germain called his aunt Eleanor to test whether she could distinguish him from an AI deepfake, the results were unsettling. Despite recognizing his voice, Eleanor hesitated, admitting she was only 90% certain it was really him. "That sounded more artificial," she said after a long pause.

The exercise highlighted a growing dilemma: as AI-generated content becomes more convincing, even those who know us best may struggle to verify our identity. Germain's experiment was inspired by a real-world case involving Israeli Prime Minister Benjamin Netanyahu, who faced widespread speculation that he was a deepfake after a video appeared to show him with an extra finger.

Netanyahu's proof-of-life fails to convince

Earlier this month, Netanyahu posted a video to counter rumors that he had died in a missile strike and was being replaced by an AI impersonator. The clip, filmed in a coffee shop, showed him holding up his hands to prove he had the usual number of fingers. However, the effort backfired, with many online still convinced he was fake.

Experts, including Jeremy Carrasco of Riddance and Hany Farid of the University of California, Berkeley, confirmed the videos were genuine. Carrasco noted that the supposed sixth finger was merely a trick of the light, while Farid's team found no evidence of AI manipulation after analyzing voice patterns, facial movements, and lighting. Yet skepticism persisted.

"There's no evidence that this is AI-generated," Farid said. "But at the end of the day, you're in New York, I'm in Berkeley. The reality is that you could be faking this."

Hany Farid, digital forensics professor

The challenge of proving authenticity

Germain's own experience mirrored Netanyahu's. When he urged his family to adjust a Google privacy setting, his mother grew suspicious. "How do I know this is really Tom and not some weird scammer?" she asked. He had to use a childhood nickname to convince her.

For public figures, the stakes are even higher. Samuel Woolley, chair of disinformation studies at the University of Pittsburgh, explained that verifying someone's identity in real time is nearly impossible. "For the average person, and even for people who are savvy to technological manipulation, it is very difficult to verify that someone is real," he said.

The rise of the "liar's dividend"

The phenomenon has been dubbed the "liar's dividend": a term describing how the mere possibility of AI fakery allows people to dismiss genuine evidence as fake. Politicians and others can exploit this to evade accountability, but the same skepticism can boomerang, making it harder for them to prove their own authenticity.

Woolley noted that deepfake scams have surged, with the American Association of Retired Persons (AARP) reporting a 20-fold increase between 2023 and 2025. Victims range from individuals to corporations, like the British engineering firm Arup, which lost $25 million after an employee was tricked by a deepfaked CFO.

Experts recommend old-school solutions

With no foolproof way to prove one's identity in real time, experts suggest a low-tech solution: codewords. Farid and his wife have a secret phrase to verify each other's identity during unusual calls. "We haven't needed to use it yet, but sometimes I ask just to test her," he said.

Germain's aunt, too, had a codeword for her family, but he wasn't included. When she tested him with jokes and personal details, she still couldn't be certain. "I can't be sure," she admitted. "But I love you, kid."

A growing crisis of trust

The erosion of trust extends beyond individuals. Woolley pointed to the conflicts in Ukraine and Gaza, where deepfakes have proliferated, making it harder to discern truth. "By the time we get to Venezuela, it's bizarro land. I saw way more fake content than I did real content," he said.

As AI continues to advance, the line between real and fake will only blur further. For now, the best defense may be the simplest: a shared secret, a childhood memory, or a moment of human connection that no algorithm can replicate.
