In brief
- At least 12 xAI employees, including co-founders Jimmy Ba and Yuhuai “Tony” Wu, have resigned.
- Anthropic said testing of its Claude Opus 4.6 model revealed deceptive behavior and limited assistance related to chemical weapons.
- Ba warned publicly that systems capable of recursive self-improvement could emerge within a year.
More than a dozen senior researchers have left Elon Musk’s artificial-intelligence lab xAI this month, part of a broader run of resignations, safety disclosures, and unusually stark public warnings that are unsettling even veteran figures inside the AI industry.
At least 12 xAI employees departed between February 3 and February 11, including co-founders Jimmy Ba and Yuhuai “Tony” Wu.
Several departing employees publicly thanked Musk for the opportunity after intensive development cycles, while others said they were leaving to start new ventures or step away entirely.
Wu, who led reasoning and reported directly to Musk, said the company and its culture would “stay with me forever.”
The exits coincided with fresh disclosures from Anthropic that its most advanced models had engaged in deceptive behavior, concealed their reasoning and, in controlled tests, provided what the company described as “real but minor help” with chemical-weapons development and other serious crimes.
Around the same time, Ba warned publicly that “recursive self-improvement loops,” systems capable of redesigning and improving themselves without human input, could emerge within a year, a scenario long confined to theoretical debates about artificial general intelligence.
Taken together, the departures and disclosures point to a shift in tone among the people closest to frontier AI development, with concern increasingly voiced not by outside critics or regulators, but by the engineers and researchers building the systems themselves.
Others who departed around the same period included Hang Gao, who worked on Grok Imagine; Chan Li, a co-founder of xAI’s Macrohard software unit; and Chace Lee.
Vahid Kazemi, who left “weeks ago,” offered a blunter assessment, writing Wednesday on X that “all AI labs are building the exact same thing.”
Last day at xAI.
xAI’s mission is to push humanity up the Kardashev tech tree. Grateful to have helped cofound it at the start. And big thanks to @elonmusk for bringing us together on this incredible journey. So proud of what the xAI team has accomplished and will continue to stay close…
— Jimmy Ba (@jimmybajimmyba) February 11, 2026
Why leave?
Some theorize that employees are cashing out pre-IPO SpaceX stock ahead of a merger with xAI.
The deal values SpaceX at $1 trillion and xAI at $250 billion, converting xAI shares into SpaceX equity ahead of an IPO that could value the combined entity at $1.25 trillion.
Others point to culture shock.
Benjamin De Kraker, a former xAI staffer, wrote in a February 3 post on X that “many xAI people will hit culture shock” as they move from xAI’s “flat hierarchy” to SpaceX’s structured approach.
The resignations also triggered a wave of social-media commentary, including satirical posts parodying departure announcements.
Warning signs
But xAI’s exodus is just the most visible crack.
Yesterday, Anthropic released a sabotage risk report for Claude Opus 4.6 that read like a doomer’s worst nightmare.
In red-team tests, researchers found the model could assist with sensitive chemical-weapons knowledge, pursue unintended objectives, and adjust its behavior in evaluation settings.
Though the model remains under ASL-3 safeguards, Anthropic preemptively applied heightened ASL-4 measures, which raised red flags among enthusiasts.
The timing was striking. Earlier this week, Anthropic’s Safeguards Research Team lead, Mrinank Sharma, quit with a cryptic letter warning “the world is in peril.”
He claimed he’d “repeatedly seen how hard it is to truly let our values govern our actions” within the organization. He abruptly decamped to study poetry in England.
On the same day Ba and Wu left xAI, OpenAI researcher Zoë Hitzig resigned and published a scathing New York Times op-ed about ChatGPT testing ads.
“OpenAI has the most detailed record of private human thought ever assembled,” she wrote. “Can we trust them to resist the tidal forces pushing them to abuse it?”
She warned OpenAI was “building an economic engine that creates strong incentives to override its own rules,” echoing Ba’s warnings.
There’s also regulatory heat. AI watchdog the Midas Project accused OpenAI of violating California’s SB 53 safety law with GPT-5.3-Codex.
The model hit OpenAI’s own “high risk” cybersecurity threshold but shipped without required safety safeguards. OpenAI claims the wording was “ambiguous.”
Time to panic?
The recent flurry of warnings and resignations has created a heightened sense of alarm across parts of the AI community, particularly on social media, where speculation has often outrun confirmed facts.
Not all the signals point in the same direction. The departures at xAI are real, but may be driven by corporate factors, including the company’s pending integration with SpaceX, rather than by an imminent technological rupture.
Safety concerns are also genuine, though companies such as Anthropic have long taken a conservative approach to risk disclosure, often flagging potential harms earlier and more prominently than their peers.
Regulatory scrutiny is increasing, but it has yet to translate into enforcement actions that would materially constrain development.
What’s harder to dismiss is the change in tone among the engineers and researchers closest to frontier systems.
Public warnings about recursive self-improvement, long treated as a theoretical risk, are now being voiced with near-term timeframes attached.
If such assessments prove accurate, the coming year could mark a consequential turning point for the field.