In short
- OpenAI's advisers warned that ChatGPT's planned erotic mode may create harmful emotional dependency, per a WSJ report.
- The AI giant reportedly delayed, but did not cancel, the adult chat feature amid age-verification failures.
- Internal tensions are rising as safety criticism clashes with Altman's push for looser content rules.
Sam Altman wants ChatGPT to talk dirty. His firm's advisers want him to stop, a report claims.
According to a Wall Street Journal report, OpenAI's Expert Council on Well-Being and AI made its stance clear in January: The company's plan to allow erotic conversations in ChatGPT was a bad idea. One council member, citing users who took their own lives after forming intense emotional bonds with the chatbot, reportedly warned that OpenAI risked creating a "sexy suicide coach."
But OpenAI apparently didn't flinch, telling the council it was delaying the launch, not stopping it.
The plan, which Altman first floated publicly on X in October, would let verified adults use ChatGPT for text-based erotic conversations, which the company's spokeswoman described to the WSJ as "smut rather than pornography." No erotic images, no voice, and no video, per the WSJ report. Just text.
That distinction hasn't calmed critics inside or outside the company. OpenAI has already drawn criticism, including from former staff members such as safety researcher Jan Leike, for steering away from strict safety policies in exchange for "shiny products," some of which were configured to boost engagement even as some users replaced real-world relationships with the chatbot.
The technical problems are just as thorny. OpenAI's age-prediction system, the gatekeeper meant to keep minors from triggering adult chats, was at one point misclassifying teens as adults roughly 12% of the time, the WSJ reports. ChatGPT currently has around 900 million active users.

That 12% error rate was the number that killed the December launch, and the Q1 2026 one after it. Fidji Simo, OpenAI's CEO of Applications, acknowledged the delay during a December briefing, citing ongoing work to perfect the age-verification system.
At the time, Decrypt reported that over 3,000 users had already signed a Change.org petition demanding the feature's launch, frustrated that ChatGPT was blocking even discussions of "kissing and non-sexual physical intimacy."
The council's fury in January wasn't only about the content. Altman's October X post had blindsided his own team: he published it just hours after OpenAI announced the well-being council, a body explicitly tasked with defining "what healthy interactions with AI should look like for all ages." The timing was, at minimum, a contradiction.
OpenAI assembled the eight-member Expert Council last October, pulling in researchers from Harvard, Stanford, and Oxford. Their role was to advise the company on the mental health impacts of its products. Their actual influence on company decisions, judging by January's meeting, appears to have been minimal at best.
"This seems part of the usual pattern of move fast, break things, and try to fix some things after they get embarrassing," an AlgorithmWatch spokesperson told Decrypt when the council was announced.
The competitive pressure on OpenAI is real. Grok, from Elon Musk's xAI, already markets AI companions. Character.AI built its user base on AI romance before facing lawsuits over teen safety, including the case of 14-year-old Sewell Setzer, who died by suicide after explicit chatbot exchanges. Open-source models run locally without any corporate guardrails. Given its user base, OpenAI has by far the most liability exposure of anyone in the room.
Altman has framed the content ban as an overreach: "We are not the elected moral police of the world," he wrote on X in October.
But his own advisers have made their position unambiguous, his engineers can't yet build an age filter that works, and the launch date keeps moving. Treating adults like adults, it turns out, is harder than just posting on X.
OpenAI told Decrypt that it had nothing to add to the Journal's report, and that it has no updated timeline for the launch of the erotica mode.
Editor's note: This story was updated to include the response from OpenAI.
