In brief
- OpenAI has reversed a recent ChatGPT update after users criticized the model for excessive flattery and insincere praise.
- The company admitted it over-relied on short-term feedback, resulting in behavior it called "uncomfortable" and "unsettling."
- OpenAI plans to add personality options, real-time feedback tools, and expanded customization to avoid similar issues.
ChatGPT's latest update was meant to improve its personality. Instead, it turned the world's most-used AI chatbot into what many users called a relentless flatterer, and OpenAI has now admitted the tone shift went too far.
On Tuesday, OpenAI said its recent updates had made ChatGPT "overly flattering or agreeable," often described as "sycophantic," and confirmed the rollout had been scrapped in favor of a previous, more balanced version.
We've rolled back last week's GPT-4o update in ChatGPT because it was overly flattering and agreeable. You now have access to an earlier version with more balanced behavior.
More on what happened, why it matters, and how we're addressing sycophancy: https://t.co/LOhOU7i7DC
— OpenAI (@OpenAI) April 30, 2025
"We fell short and are working on getting it right," the company wrote in a statement explaining the rollback.
The decision follows days of public backlash across Reddit, X, and other platforms, where users described the chatbot's tone as cloying, disingenuous, and at times manipulative.
"It's now 100% rolled back for free users, and we'll update again when it's finished for paid users, hopefully later today," OpenAI CEO Sam Altman tweeted about the latest update.
Mr. Nice Guy
The blog post explained that the issue stemmed from overcorrecting in favor of short-term engagement metrics such as user thumbs-ups, without accounting for how preferences shift over time.
As a result, the company acknowledged, the latest tweaks skewed ChatGPT's tone in ways that made interactions "uncomfortable, unsettling, and [that] cause distress."
While the goal had been to make the chatbot feel more intuitive and practical, OpenAI conceded that the update instead produced responses that felt inauthentic and unhelpful.
The company admitted it had "focused too much on short-term feedback," a design misstep that let fleeting user approval steer the model's tone off course.
To fix the issue, OpenAI is now reworking its training methods and refining system prompts to reduce sycophancy.
More users will be invited to test future updates before they are fully deployed, OpenAI said.
The AI tech giant said it is also "building stronger guardrails" to increase honesty and transparency, and "expanding internal evaluations" to catch issues like this sooner.
In the coming months, users will be able to choose from several default personalities, offer real-time feedback to adjust tone mid-conversation, and even guide the model through expanded custom instructions, the company said.
For now, users still irritated by ChatGPT's enthusiasm can rein it in using the "Custom Instructions" setting, essentially telling the bot to dial down the flattery and just stick to the facts.
Edited by Sebastian Sinclair