Meta removes under-16s from platforms ahead of Australia's social media ban

Meta begins removing underage users before new law takes effect

Meta has started deactivating accounts belonging to Australian users under 16 on Instagram, Facebook, and Threads, six days before a nationwide ban on teen social media use comes into force. The move affects an estimated 500,000 accounts across the three platforms.

Scope of the ban and compliance challenges

Australia's groundbreaking legislation, set to begin on 10 December, requires companies to take "reasonable steps" to prevent users under 16 from accessing their platforms. Firms that fail to comply face fines of up to A$49.5 million (US$33 million). Alongside Meta's services, the ban applies to YouTube, X, TikTok, Snapchat, Reddit, Kick, and Twitch.

A Meta spokesperson told The Meta Times that adherence to the new rules would be an "ongoing and multi-layered process." While the company has pledged to comply, it argued that a more effective solution would involve app stores verifying user ages during downloads and requiring parental consent for under-16s. This approach, Meta said, would eliminate the need for teens to repeatedly prove their age across different platforms.

User options and verification process

Meta has informed affected users that they can download and save their posts, videos, and messages before their accounts are deactivated. Teens who believe they have been incorrectly identified as under 16 can request a review. To verify their age, they may submit a "video selfie" or provide government-issued identification, such as a driver's licence.

Government defends ban amid criticism

Communications Minister Anika Wells acknowledged that the rollout may face "teething problems" but framed the ban as a necessary measure to protect Generation Alpha, children under 15, from the harms of social media. Wells described the platforms' algorithms as "behavioural cocaine" and warned that young users were being hooked on a "dopamine drip" from the moment they received a smartphone.

Critics, however, argue that the ban could push children toward less-regulated corners of the internet or isolate vulnerable groups who rely on social media for connection. Some platforms, like YouTube, have called the law "rushed," contending that banning accounts with parental controls in place would make their services "less safe."

Monitoring alternative platforms

Wells said her office is closely watching emerging apps like Lemon8 and Yope, which were not initially included in the ban, to see if underage users migrate to them. Earlier this week, Australia's eSafety Commissioner, Julie Inman Grant, asked both platforms to assess whether they fall under the new regulations.

Yope's co-founder, Bahram Ismailau, told The Meta Times that the startup had not received any official inquiries but had conducted its own review. He stated that Yope functions as a private messenger, similar to WhatsApp, with no public content, and therefore does not qualify as a social media platform. Lemon8, meanwhile, has announced plans to exclude under-16s from its platform starting next week, despite not being subject to the ban.

Global implications and research findings

The policy is being closely observed by governments worldwide. A government-commissioned study released earlier this year found that 96% of Australian children aged 10-15 used social media. Of those, seven in 10 had encountered harmful content, including misogynistic material, violence, and posts promoting eating disorders or self-harm. One in seven reported experiencing grooming behaviour from adults or older children, while more than half said they had been cyberbullied.
