ChatGPT Announces Parental Controls Following Accusations of Encouraging a 16-Year-Old's Suicide

"Next month, parents will be able to link their account to that of their minor children" and "control how ChatGPT responds to their teens with model behavior rules," OpenAI announced Tuesday, adding that parents will also be able to be alerted if "acute distress" is detected in their children's conversations and control account settings.
"We continue to improve how our models recognize and respond to signs of mental and emotional distress," the company assured.
This announcement follows a blog post at the end of August in which the company indicated that it was preparing a parental control mechanism.
The day before that post was published, the parents of a 16-year-old California teenager who died by suicide filed a lawsuit against OpenAI, accusing ChatGPT of providing their son with detailed instructions to end his life and of encouraging the act.
In addition to the parental controls announced Tuesday, OpenAI also said it will take further measures over the next four months, including redirecting certain "sensitive conversations" to more advanced reasoning models such as GPT-5-thinking.

