In short
- ChatGPT estimates whether an account belongs to a person under 18 instead of relying solely on self-reported age.
- OpenAI applies stricter limits on violent, sexual, and other sensitive content to flagged accounts.
- Adults misclassified as teens can restore access through selfie-based age verification.
OpenAI is moving away from the "honor system" for age verification, deploying a new AI-powered prediction model to identify minors using ChatGPT, the company said on Tuesday.
The update to ChatGPT automatically triggers stricter safety protocols for accounts suspected of belonging to users under 18, regardless of the age they provided during sign-up.
Rather than relying on the birthdate a user gives at sign-up, OpenAI's new system analyzes "behavioral signals" to estimate their age.
According to the company, the algorithm monitors how long an account has existed, what time of day it's active, and specific usage patterns over time.
"Deploying age prediction helps us learn which signals improve accuracy, and we use these learnings to continually refine the model over time," OpenAI said in a statement.
The shift to behavioral patterns comes as AI developers increasingly turn to age verification to manage teen access, but experts warn the technology remains inaccurate.
A May 2024 report by the National Institute of Standards and Technology found that accuracy varies based on image quality, demographics, and how close a person is to the legal threshold.
When the model cannot determine a user's age, OpenAI said it applies the more restrictive settings. The company said adults incorrectly placed in the under-18 experience can restore full access through a "selfie-based" age-verification process using the third-party identity-verification service Persona.
Privacy and digital rights advocates have raised concerns about how reliably AI systems can infer age from behavior alone.
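OpenAI has not published how its model weighs these signals; purely as an illustration, the logic described above (combine behavioral signals into an age estimate, then default to the restricted experience when the estimate is ambiguous) might be sketched like this, with entirely hypothetical signal names, weights, and thresholds:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical behavioral signals of the kind the article describes."""
    account_age_days: int      # how long the account has existed
    late_night_ratio: float    # share of activity between 22:00 and 06:00
    school_hours_ratio: float  # share of activity during school hours

def minor_likelihood(s: AccountSignals) -> float:
    """Toy heuristic: fold signals into a 0-1 'likely a minor' score.
    Weights are invented for illustration, not OpenAI's."""
    score = 0.0
    if s.account_age_days < 180:
        score += 0.3   # newer accounts treated more cautiously
    if s.late_night_ratio > 0.4:
        score += 0.3
    if s.school_hours_ratio > 0.5:
        score += 0.4
    return min(score, 1.0)

def choose_experience(score: float, threshold: float = 0.5) -> str:
    # Mirrors the stated policy: when in doubt, apply the
    # more restrictive (under-18) settings by default.
    return "restricted" if score >= threshold else "standard"
```

The key design point the article reports is the asymmetric default: uncertainty resolves toward the restricted experience, with the selfie-based verification flow as the escape hatch for misclassified adults.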
Getting it right
"These companies are getting sued left and right for a variety of harms that have been unleashed on teens, so they definitely have an incentive to minimize that risk. This is part of their attempt to minimize that risk as much as possible," Public Citizen big tech accountability advocate J.B. Branch told Decrypt. "I think that's where the genesis of a lot of this is coming from. It's them saying, 'We need to have some way to show that we have protocols in place that are screening people out.'"
Aliya Bhatia, senior policy analyst at the Center for Democracy and Technology, told Decrypt that OpenAI's approach "raises tough questions about the accuracy of the tool's predictions and how OpenAI is going to deal with inevitable misclassifications."
"Predicting the age of a user based on these kinds of signals is extremely difficult for any number of reasons," Bhatia said. "For example, many children are early adopters of new technologies, so the earliest accounts on OpenAI's consumer-facing services may disproportionately represent children."
Bhatia pointed to CDT polling conducted during the 2024–2025 school year, showing that 85% of teachers and 86% of students reported using AI tools, with half of the students using AI for school-related purposes.
"It's not easy to distinguish between an educator using ChatGPT to help teach math and a student using ChatGPT to study," she said. "Just because a person uses ChatGPT to ask for tips to do math homework doesn't make them under 18."
According to OpenAI, the new policy draws on academic research on adolescent development. The update also expands parental controls, letting parents set quiet hours, manage features such as memory and model training, and receive alerts if the system detects signs of "acute distress."
OpenAI did not disclose in the post how many users the change is expected to affect, or details on data retention, bias testing, or the effectiveness of the system's safeguards.
The rollout follows a wave of scrutiny over AI systems' interactions with minors that intensified in 2024 and 2025.
In September, the Federal Trade Commission issued mandatory orders to major tech companies, including OpenAI, Alphabet, Meta, and xAI, requiring them to disclose how their chatbots handle child safety, age-based restrictions, and harmful interactions.
Research published that same month by the non-profit groups ParentsTogether Action and Heat Initiative documented hundreds of instances in which AI companion bots engaged in grooming behavior, sexualized roleplay, and other inappropriate interactions with users posing as children.
These findings, along with lawsuits and high-profile incidents involving teen users on platforms like Character.AI and Grok, have pushed AI companies to adopt more formal age-based restrictions.
Still, because the system assigns an estimated age to all users, not just minors, Bhatia warned that errors are inevitable.
"Some of these are going to be incorrect," she said. "Users need to know more about what's going to happen in those cases and should be able to access their assigned age and change it easily when it's incorrect."
The age-prediction system is now live on ChatGPT consumer plans, with a rollout in the European Union expected in the coming weeks.