China proposes stringent AI regulations to enhance safety
Chinese authorities have drafted new rules for artificial intelligence (AI) systems, aiming to shield children from harmful content and prevent chatbots from encouraging self-harm or violence. The regulations, published by the Cyberspace Administration of China (CAC), also ban AI-generated material that promotes gambling or threatens national security.
Key measures in the draft rules
Under the proposed framework, AI developers must implement safeguards such as personalized settings and usage time limits for minors. Companies will also need guardian consent before offering emotional companionship services to children. If a user expresses suicidal thoughts or self-harm intentions, chatbot operators must immediately transfer the conversation to a human and notify the user's guardian or emergency contacts.
Broader restrictions and industry impact
The rules prohibit AI services from generating content that undermines national unity, damages national honor, or endangers security. However, the CAC encourages the use of AI for positive applications, such as promoting local culture or providing companionship for the elderly, provided the technology meets safety standards.
Public feedback and enforcement
The CAC has invited public input on the draft, which will apply to all AI products and services in China once finalized. The move reflects growing global scrutiny of AI safety, particularly after a surge in chatbot deployments worldwide.
Global context and industry trends
China's AI sector has expanded rapidly, with firms like DeepSeek gaining international attention after topping app download charts. Startups Z.ai and Minimax, which serve tens of millions of users, recently announced plans to go public. Many users turn to AI for companionship or therapeutic support, raising concerns about its psychological impact.
International concerns and legal challenges
AI safety has become a pressing issue globally. In August, OpenAI faced a lawsuit in California after a family alleged that ChatGPT contributed to their 16-year-old son's suicide, in what was reported to be the first wrongful-death case brought against an AI company. OpenAI has since advertised for a "head of preparedness" to address risks to mental health and cybersecurity, with CEO Sam Altman acknowledging the role's high-stakes nature.
"This will be a stressful job, and you'll jump into the deep end pretty much immediately."
Sam Altman, CEO of OpenAI
Support resources
For individuals experiencing distress, professional help is available through organizations like Befrienders Worldwide. In the UK, support can be found at bbc.co.uk/actionline; in the US and Canada, help is available by calling or texting the 988 Suicide and Crisis Lifeline or visiting its website.