Whistleblowers expose algorithm-driven harm on social platforms
Internal documents and insider accounts reveal how Meta and TikTok amplified harmful content to boost user engagement, sidelining safety measures amid fierce competition for users' attention.
Competition over safety: The algorithm arms race
More than a dozen whistleblowers and former employees told the BBC that social media giants knowingly relaxed safeguards to keep users engaged, even as internal research flagged risks tied to violence, sexual exploitation, and extremism. The revelations come from the BBC documentary Inside the Rage Machine, which examines how platforms responded to TikTok's explosive growth.
A Meta engineer, who requested anonymity, said senior leadership instructed teams to allow more "borderline" content, including misogyny and conspiracy theories, to compete with TikTok. "They told us the stock price was down," the engineer said, framing the decision as a financial imperative.
TikTok's internal conflicts: Politics vs. child safety
A TikTok trust and safety employee, identified as "Nick," gave the BBC rare access to the company's internal dashboards, showing how cases involving politicians were prioritized over reports of harm to minors. In one example, a trivial complaint about a political figure who had been mocked with a comparison to a chicken was fast-tracked ahead of a 17-year-old's cyberbullying report in France and a 16-year-old's sexual blackmail case in Iraq.
"The urgency wasn't high," Nick said of the Iraq case, despite its severity. When safety teams pushed to prioritize child protection, they were overruled, with management citing the need to maintain "strong relationships" with governments to avoid regulatory threats. Nick's advice to parents: "Delete [TikTok]. Keep kids away as long as possible."
"If you're feeling guilty daily because of what you're instructed to do, you ask: Should I speak up?"
"Nick," TikTok trust and safety team member
Meta's Reels: A case study in trade-offs
Meta's 2020 launch of Instagram Reels, a direct response to TikTok, exposed the tension between growth and safety. Matt Motyl, a former senior researcher at Meta, shared internal documents showing Reels comments had 75% more bullying, 19% more hate speech, and 7% more violence than the main Instagram feed. Despite this, safety teams were denied additional staff while Reels expanded with 700 new hires.
Motyl described a "power imbalance" where Reels teams blocked safety features to preserve engagement. "Toxic content gets more clicks," he said, adding that Meta's algorithms rewarded outrage. One internal study warned the platform was "feeding users fast-food" at the expense of well-being, with financial incentives misaligned with its mission to "bring the world closer together."
Algorithms as "black boxes"
Ruofan Ding, a former TikTok machine-learning engineer, compared the platform's recommendation system to a car: "We build the engine; we trust the brakes team to do their job." But as TikTok refined its algorithm weekly to gain market share, Ding noticed more "borderline" content (legal but harmful material such as misogyny and conspiracy theories) slipping through after prolonged user sessions.
Teenagers interviewed by the BBC said reporting tools failed to stop recommendations of violent or hateful content. Calum, 19, described being "radicalized by the algorithm" at 14, with content fueling his anger and reinforcing racist and misogynistic views. "It reflected how I felt internally-that I was angry at everyone," he said.
Industry-wide normalization of harmful content
UK counter-terrorism police reported a rise in "normalized" extremist content, including antisemitic, racist, and far-right posts. "People are desensitized to real-world violence," an officer said, noting users now openly share harmful views. Whistleblowers echoed this, with Nick revealing that TikTok's internal dashboards showed increasing cases of terrorism, sexual violence, and trafficking, despite public claims of robust moderation.
Brandon Silverman, former CEO of CrowdTangle (acquired by Meta), said Mark Zuckerberg's "paranoia" about competition led to safety teams being sidelined. "When he feels threatened, no amount of money is too much for growth," Silverman said, recalling Meta's refusal to approve even small safety hires while pouring resources into Reels.
Company responses: Denials and defenses
Meta denied amplifying harmful content for profit, calling the claims "wrong." A spokesperson highlighted investments in teen safety, including a new "Teen Accounts" feature with parental controls. TikTok rejected the idea that political content is prioritized over child safety, calling the allegations "fabricated." The company cited "50+ preset safety features" for teens and AI-driven content filters, though what Nick saw on the internal dashboards contradicted these claims.
Both companies emphasized their commitment to user safety, but whistleblowers argued public statements diverge sharply from internal realities. As one former Meta engineer put it: "They're not exposed to this content daily. If they were, they'd act differently."