
OpenAI announced that it will predict the age of ChatGPT users and automatically apply additional safeguards to accounts estimated to belong to users under 18.
On its blog, OpenAI said, “We will launch an age prediction feature in ChatGPT’s consumer plans to determine whether an account may belong to a user under the age of 18 and apply experiences and safeguards suitable for teenagers.”
“Teenagers who indicate they are under 18 when signing up automatically receive additional safeguards that reduce exposure to sensitive or potentially harmful content,” OpenAI said. “This will allow adult accounts to use the service more freely within the bounds of safety.”
If the age prediction model determines that an account belongs to a user under 18, ChatGPT automatically applies protective measures to reduce exposure to sensitive content.

The restricted content includes graphic violence, gore, and cruelty; viral challenges that could encourage dangerous or harmful behavior among minors; sexual, romantic, and violent role play; depictions of self-harm; and content that promotes extreme beauty standards, unhealthy diets, or disparagement of appearance.
When a user’s age is uncertain or the available information is incomplete, the account defaults to the safer experience, OpenAI said.
In addition to these protections, OpenAI said it also provides parental controls. These include setting “quiet hours” that restrict access to ChatGPT, controlling features related to memory and model training, and receiving notifications when signs of acute distress are detected.
Age prediction combines account and behavioral signals. OpenAI explained that age is estimated from factors such as how long ago the account was created, the times of day it is active, changes in usage patterns, and the age entered by the user.
JENNIFER KIM
US ASIA JOURNAL