
How to spot AI-generated videos before they fool you completely


AI video fakes are getting harder to detect, but low quality remains a red flag

AI-generated videos have advanced so rapidly in the past six months that distinguishing them from real footage is becoming nearly impossible. Yet for now, one telltale sign persists: poor visual quality. Grainy, blurry, or pixelated clips should raise suspicion, experts warn, because the people behind fakes often exploit low resolution to mask imperfections.

Why blurry videos are more likely to be AI

While high-quality AI videos exist, lower-resolution clips are currently more effective at deceiving viewers, according to Hany Farid, a digital forensics professor at the University of California, Berkeley, and founder of deepfake detection firm GetReal Security. "The leading text-to-video generators like [Google's] Veo and OpenAI's Sora still produce small inconsistencies," Farid explains, noting that flaws such as unnatural skin textures, shifting hair patterns, or erratic background movements become harder to spot in degraded footage.

This tactic isn't accidental. Farid reveals that bad actors deliberately downgrade video quality to obscure AI artifacts: "If I'm trying to fool people, I generate my fake video, reduce the resolution, and add compression to obfuscate any possible errors." The strategy has already succeeded in viral hoaxes, from a security-camera-style clip of rabbits on a trampoline (240 million TikTok views) to a pixelated "subway romance" video and a zoomed-in fake sermon by a conservative priest denouncing billionaires.
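To see why the trick works, here is a minimal sketch, assuming Pillow is installed and that "frame.png" is a single frame pulled from a clip. It illustrates the degradation step Farid describes, not any specific actor's pipeline: downscaling and aggressive recompression destroy exactly the fine detail where generation artifacts live.

```python
# Minimal sketch of quality degradation as an artifact-masking step.
# Assumptions: Pillow is installed; "frame.png" is a frame from a video.
from PIL import Image

frame = Image.open("frame.png").convert("RGB")

# Downscale to sub-480p. Fine textures where generators slip up
# (skin pores, hair strands, background detail) are the first things lost.
small = frame.resize((480, 270), Image.LANCZOS)

# Re-encode with heavy JPEG compression. Blocky codec artifacts now
# dominate the image, camouflaging any remaining generation errors.
small.save("frame_degraded.jpg", quality=15)
```

The same idea applies to whole videos: lowering the output resolution and starving the bitrate during re-encoding produces the compression artifacts described below.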

Three key warning signs

Farid identifies three hallmarks of AI videos:

  1. Length: Most AI clips are under 10 seconds, far shorter than typical social media videos, because generation is expensive and the risk of visible errors grows with clip length.
  2. Resolution: Low pixel density (e.g., 480p or worse) hides visual glitches.
  3. Compression: Heavy compression introduces blocky artifacts that camouflage unnatural movements.
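Taken together, the three hallmarks suggest a rough triage rule. The sketch below is a hypothetical heuristic, not a tool Farid has published; the thresholds are illustrative assumptions drawn from the figures above.

```python
# Hypothetical triage heuristic built on Farid's three hallmarks.
# Thresholds are illustrative assumptions, not published values.
def red_flags(duration_s: float, height_px: int, bitrate_kbps: float) -> list[str]:
    """Return which of the three warning signs a clip trips."""
    flags = []
    if duration_s < 10:        # 1. Length: most AI clips run under ~10 seconds
        flags.append("short clip")
    if height_px <= 480:       # 2. Resolution: 480p or worse hides glitches
        flags.append("low resolution")
    if bitrate_kbps < 500:     # 3. Compression: a starved bitrate means blocky artifacts
        flags.append("heavy compression")
    return flags

# An 8-second 360p clip at 300 kbps trips all three flags.
print(red_flags(8.0, 360, 300.0))
```

Tripping all three proves nothing on its own, but it is a reasonable cue to check the source before sharing.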

The clock is ticking on visual cues

Experts agree these red flags won't last. Matthew Stamm, head of Drexel University's Multimedia and Information Security Lab, predicts that "obvious visual cues will vanish within two years," mirroring the rapid improvement in AI-generated images. "You just can't trust your eyes anymore," he cautions.

Behind the scenes, researchers are developing forensic tools to detect statistical anomalies in pixel distribution-"fingerprints" invisible to the human eye. Meanwhile, tech companies are exploring embedded metadata standards to verify authentic media. Yet these solutions face an arms race: as detection improves, so do the fakes.
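As a toy illustration of what "statistical anomalies in pixel distribution" can mean, the sketch below high-pass filters a frame and summarizes its noise residual. Real forensic detectors are far more sophisticated; the filter and statistics here are assumptions chosen for demonstration only.

```python
# Toy illustration of pixel-statistics forensics: camera sensors and AI
# generators leave differently shaped noise residuals, and detectors
# compare summary statistics of those residuals against known profiles.
# Assumes NumPy and a grayscale frame as a 2-D float array in [0, 255].
import numpy as np

def residual_stats(frame: np.ndarray) -> tuple[float, float]:
    """High-pass the frame and summarize what's left."""
    # Subtract each pixel's 4-neighbor average: a crude high-pass filter
    # that keeps noise and fine texture while discarding image content.
    neighbors = (np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0) +
                 np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1)) / 4.0
    residual = frame - neighbors
    return float(residual.std()), float(np.abs(residual).mean())

# Example on synthetic data; a real detector would run on decoded frames.
rng = np.random.default_rng(0)
print(residual_stats(rng.normal(128.0, 10.0, size=(720, 1280))))
```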

The real fix? Rethinking trust

Mike Caulfield, a digital literacy expert, argues that focusing on visual clues is a losing battle. "Provenance-not surface features-will be key," he says. "We must treat videos like text: unverified until we investigate the source, context, and poster's credibility."

"If I can be a little grandiose, I think this is the greatest information security challenge of the 21st Century. But the field is young, and solutions are emerging."

Matthew Stamm, Drexel University

For now, Farid advises skepticism toward short, low-quality clips-especially those lacking clear sourcing. "The best defense," he says, "is to assume nothing is real until proven otherwise."
