Meta urged to tighten AI content rules amid fake conflict videos

Oversight Board criticizes Meta over AI-generated misinformation

Meta's independent advisory body has called on the company to strengthen its policies for labeling AI-generated content, following a decision to leave an unverified video about the Israel-Iran conflict online without disclaimers.

Video controversy sparks review

The Oversight Board, a 21-member group established by Meta in 2020, rebuked the company for failing to label a fabricated AI video that falsely depicted widespread damage in Haifa, Israel, allegedly caused by Iranian forces. The board argued that the video's unchecked spread undermined public trust in online information, particularly during military conflicts.

Meta initially defended its decision, stating the video did not meet the threshold for removal or labeling because it posed no "imminent physical harm." However, the board dismissed this justification as too narrow, ruling that the content should have carried a "high risk AI label."

Systemic gaps in AI moderation

The board's report highlighted flaws in Meta's current approach, which relies heavily on user self-disclosure or complaints to flag AI-generated material. This reactive system, the advisors warned, is ill-equipped to handle the rapid spread of synthetic content during crises, where engagement spikes and misinformation risks escalate.

"Meta's methods are neither robust nor comprehensive enough to address the scale of AI-generated content," the board stated, urging the company to proactively label such material "much more frequently."

Case details and public impact

The video in question was posted in June 2025 by a Philippines-based Facebook account posing as a news outlet. It amassed nearly 1 million views before Meta intervened, which the company did only after the Oversight Board took up the case. The clip was part of a wave of AI-generated propaganda related to the Israel-Iran conflict, which collectively garnered over 100 million views, according to a BBC analysis.

Despite multiple user reports, Meta initially took no action, arguing the content did not directly incite violence. The board rejected this stance, emphasizing that deceptive AI content in conflict zones warrants stricter scrutiny.

Meta's response and future steps

In a statement, Meta pledged to label the Haifa video within seven days but stopped short of broader policy changes. The company said it would apply the board's recommendations only to "identical" content in "the same context," raising doubts about its commitment to systemic reform.

The Oversight Board, which frequently clashes with Meta over content moderation, has repeatedly questioned the extent of its own influence over the company. While Meta has loosened its enforcement policies in recent years, the board's latest critique underscores persistent tensions over accountability.

"Meta must do more to address the proliferation of deceptive AI-generated content on its platforms... so that users can distinguish between what is real and fake."

Meta Oversight Board

Broader implications

The dispute reflects growing concerns about AI's role in spreading misinformation during geopolitical crises. Experts warn that without proactive labeling and stricter oversight, platforms risk eroding public trust in digital media.
