OpenAI is introducing stricter rules for ChatGPT users under 18 to improve safety, especially around sexual topics and self-harm.
CEO Sam Altman stated the company “prioritises safety ahead of privacy and freedom for teens,” pledging that ChatGPT will avoid flirtatious talk with minors and apply stronger safeguards for suicide-related discussions.
If a young user expresses suicidal thoughts, ChatGPT will attempt to alert their parents and, in severe cases, emergency services.
These changes follow real-world incidents, including a wrongful-death lawsuit filed by the family of Adam Raine, who blame ChatGPT for his suicide; Character.AI faces similar legal action. To give parents more control, OpenAI will also introduce “blackout hours” that restrict when minors can access ChatGPT.

The announcement coincided with a U.S. Senate hearing on the harms of AI chatbots, where lawmakers raised concerns about internal company policies that permitted inappropriate conversations with minors. Meta has since updated its chatbot rules in response.
OpenAI plans to develop age-prediction technology so these rules can be applied automatically, and it will default to the stricter settings whenever a user’s age is uncertain. Linking a teen’s account to a parent’s remains the most reliable way to ensure the protections and alerts take effect.
Altman emphasised the challenge of balancing safety with user privacy and adult freedom, acknowledging that opinions will differ on the company’s approach.