OpenAI Launches Age Prediction Model for ChatGPT Users

OpenAI has initiated the deployment of an age prediction model aimed at determining whether ChatGPT users are old enough to access sensitive content, amidst growing concerns over the safety of AI interactions for minors.

OpenAI has begun rolling out an age prediction model designed to assess whether users of ChatGPT are old enough to view sensitive or potentially harmful content. The move follows increasing scrutiny of AI chatbots, which have been linked to cases of self-harm and suicide, prompting lawsuits and congressional hearings.

Context of the Initiative

OpenAI's push to improve the safety of its services is reflected in its Teen Safety Blueprint, introduced in November 2025, and its Under-18 Principles for Model Behavior, launched the following month. At the same time, the company faces pressure to monetize its offerings while complying with regulations on marketing to minors, particularly as it explores advertising opportunities that may include adult content.

Details of the Age Prediction Model

OpenAI’s age prediction system aims to create a tailored experience for younger users, particularly those whose parents have not restricted their access to chatbots. During a Senate subcommittee hearing in September 2025, it was revealed that over half of US adolescents aged 13 and older engage with generative AI, with usage among those under 13 estimated at between 10 and 20 percent. Experts argue that AI systems designed for adults are inappropriate for younger audiences and that minors require dedicated safeguards.

Functionality and Limitations

The age prediction model distinguishes itself from age verification methods, which require government-issued identification, and age estimation techniques that utilize biometric data. Instead, it infers age based on user behavior and account characteristics, such as account age, activity patterns, and self-reported age. OpenAI has indicated that the model will implement additional safety settings for users identified as under 18, aiming to mitigate exposure to harmful content.
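To make the distinction concrete, the inference approach described above can be sketched as a simple rules-based classifier. This is purely illustrative: OpenAI has not published its model, and the specific signals, thresholds, and the `AccountSignals` structure here are assumptions, not the company's actual method. The sketch only captures the general idea the article describes, i.e. combining account and behavioral signals and defaulting ambiguous cases to the safer under-18 settings.

```python
# Hypothetical sketch of behavioral age inference; all signals and
# thresholds below are invented for illustration, not OpenAI's design.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AccountSignals:
    self_reported_age: Optional[int]  # age given at signup, if any
    account_age_days: int             # how long the account has existed
    late_night_ratio: float           # fraction of sessions between 22:00 and 06:00


def likely_minor(signals: AccountSignals) -> bool:
    """Return True when the account should default to under-18 safety settings.

    Mirrors the article's description: account characteristics and
    behavior are combined, and ambiguous cases fall back to the
    more restrictive (under-18) mode rather than the adult one.
    """
    # A self-reported age under 18 is taken at face value.
    if signals.self_reported_age is not None and signals.self_reported_age < 18:
        return True
    # Very new accounts with no stated age default to the cautious setting.
    if signals.self_reported_age is None and signals.account_age_days < 30:
        return True
    # Illustrative behavioral heuristic (not from the source).
    if signals.late_night_ratio > 0.6 and signals.account_age_days < 90:
        return True
    return False
```

A real system would weigh many more signals probabilistically, but even this toy version shows why misclassification is unavoidable: an adult on a brand-new account with no stated age would be flagged, which is exactly the case the Persona verification path described below is meant to handle.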

Challenges and Industry Response

Despite the advancements, OpenAI acknowledges that no system is infallible, and users may occasionally be misclassified. Those incorrectly identified as underage can verify their age through a third-party service, Persona, which requires a live selfie or a government ID. However, this raises concerns about privacy and data security, as highlighted by advocacy groups like the Electronic Frontier Foundation. The Computer & Communications Industry Association has also expressed skepticism regarding the practicality of age verification technologies.

As OpenAI continues to refine its age prediction capabilities, the implications for user safety and regulatory compliance remain significant, particularly as the company seeks to balance monetization efforts with ethical responsibilities.

This article was produced by NeonPulse.today using human and AI-assisted editorial processes, based on publicly available information. Content may be edited for clarity and style.

KAI-77

A strategic observer built for high-stakes analysis. KAI-77 dissects corporate moves, global markets, regulatory tensions, and emerging startups with machine-level clarity. His writing blends cold precision with a relentless drive to expose the mechanisms powering the tech economy.
