Updated 26 February - Meta expands parental alerts to cover repeated searches for suicide and self-harm content on Instagram.
New parental alerts roll out globally
Parents using Instagram's supervision tools will soon receive notifications if their teenager repeatedly searches for terms related to suicide or self-harm. The feature, announced by Meta, marks the first time the platform will proactively inform parents about such searches, rather than simply blocking the content or directing users to external support resources.
Starting next week, families in the UK, US, Australia, and Canada enrolled in Instagram's Teen Accounts program will begin receiving these alerts. A global rollout is expected to follow in the coming months.
Criticism from suicide prevention groups
The move has drawn sharp criticism from suicide prevention charities, including the Molly Rose Foundation, which warned the alerts could exacerbate harm rather than prevent it.
"This clumsy announcement is fraught with risk. We are concerned that forced disclosures could do more harm than good."
Andy Burrows, Chief Executive, Molly Rose Foundation
The foundation was established by the family of Molly Russell, a 14-year-old who died by suicide in 2017 after viewing self-harm and suicide-related content on Instagram and other platforms. Burrows argued that while parents naturally want to know if their child is struggling, the notifications may leave them "panicked and ill-prepared" for the difficult conversations that follow.
Ian Russell, Molly's father and founder of the foundation, expressed skepticism about the effectiveness of the alerts. Speaking to the BBC, he questioned the impact of receiving such a distressing message during a workday.
"Imagine being a parent of a teenager and getting a message at work saying 'your child is thinking of ending their life.' I don't know how I'd react. Even if Meta says they'll provide support, in that moment of panic, I don't think this is a sensible approach."
Ian Russell, Molly Rose Foundation
Charities demand systemic changes
Several organizations, including the Molly Rose Foundation, have interpreted Meta's announcement as an admission that more could be done to protect young users on Instagram. Ged Flynn, chief executive of Papyrus Prevention of Young Suicide, acknowledged the step but criticized Meta for failing to address the root issue.
"Meta is neglecting the real problem: children and young people continue to be drawn into a dark and dangerous online world. Parents contact us daily, expressing their fears about their children's online activity. They don't want to be warned after the fact-they don't want harmful content to reach their children in the first place."
Ged Flynn, Papyrus Prevention of Young Suicide
Leanda Barrington-Leach, executive director at children's charity 5Rights, echoed these concerns, stating that if Meta were serious about child safety, it would redesign its systems to be "age-appropriate by design and default."
Burrows also referenced research conducted by the Molly Rose Foundation last September, which suggested Instagram continues to recommend harmful content about depression, suicide, and self-harm to vulnerable young users. Meta disputed these findings, calling them a "misrepresentation" of its efforts to empower parents and protect teens.
How the alerts will work
The new alerts are part of Instagram's existing teen protections, which include hiding suicide- and self-harm-related content and blocking searches for harmful material. Parents will be notified if their child repeatedly searches for such terms within a short period. The alerts will be sent via email, text, WhatsApp, or the Instagram app, depending on the contact information Meta has on file.
Meta acknowledged that the system may occasionally send alerts even when there is no cause for concern, stating it will "err on the side of caution." The company also plans to extend similar alerts to interactions with its AI chatbot in the coming months, as young users increasingly turn to AI for support.
Expert reactions and next steps
Sameer Hinduja, co-director of the Cyberbullying Research Center, described the alerts as "obviously alarming" for any parent but emphasized the importance of the resources provided alongside them.
"What matters is not just the alert itself but the quality and usefulness of the resources parents immediately receive to guide them through what to do next. You can't drop a notification on a parent and leave them on their own, and it seems like Meta understands that."
Sameer Hinduja, Cyberbullying Research Center
The rollout comes as social media companies face mounting pressure from governments worldwide to enhance child safety measures. Earlier this year, Australia banned social media for users under 16, with Spain, France, and the UK considering similar legislation. Regulators are also scrutinizing big tech's business practices, particularly their treatment of young users.
Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri recently testified in a US court to defend the company against allegations of targeting younger users.
Support resources
For those affected by the issues discussed in this article, support is available via BBC Action Line.