In brief
- U.S. Central Command reportedly used Anthropic's Claude for intelligence assessments, target identification, and battle simulation during the Iran strikes.
- Experts warn the six-month phase-out timeline understates the true cost of replacing an AI model embedded across classified defense pipelines.
- OpenAI struck a deal with the Pentagon following Anthropic's fallout.
Hours after President Donald Trump ordered federal agencies to halt use of Anthropic's AI tools, the U.S. military carried out a major airstrike on Iran that reportedly relied on the company's Claude platform.
U.S. Central Command used Claude for intelligence assessments, target identification, and simulating battle scenarios during the Iran strikes, people familiar with the matter confirmed to the Wall Street Journal on Saturday.
It came despite Trump's directive on Friday that agencies begin a six-month phase-out of Anthropic products, following a breakdown in negotiations between the company and the Pentagon over how the latter can use commercially developed AI systems.
Decrypt has reached out to the Department of Defense and Anthropic for comment.
“When AI tools are already embedded in live intelligence and simulation systems, decisions at the top don't instantly translate to changes on the ground,” Midhun Krishna M, co-founder and CEO of LLM cost tracker TknOps.io, told Decrypt. “There's a lag: technical, procedural, and human.”
“By the time a model is embedded across classified intelligence and simulation systems, you've sunk integration costs, retraining, security re-certifications, and parallel testing, so a six-month phase-out may sound decisive, but the real financial and operational burden runs far deeper,” Krishna added.
“Defense agencies will now prioritize model portability and redundancy,” he said. “No serious military operator wants to discover during a crisis that its AI layer is politically fragile.”
Anthropic CEO Dario Amodei said Thursday the company would not strip safeguards preventing Claude from being deployed for mass domestic surveillance or fully autonomous weapons.
“We cannot in good conscience accede to their request,” Amodei wrote, after the Defense Department demanded that contractors allow their systems to be used for “any lawful use.”
“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War,” Trump later wrote on Truth Social, ordering agencies to “immediately cease” all use of Anthropic products.
Defense Secretary Pete Hegseth followed, designating Anthropic a “supply-chain risk to national security,” a label previously reserved for foreign adversaries, barring every Pentagon contractor and partner from commercial activity with the company.
Anthropic called the designation “unprecedented” and vowed to challenge it in court, saying it had “never before publicly applied to an American company.”
The company added that, to its knowledge, the two disputed restrictions had not affected a single government mission to date.
“The debate isn't about whether AI can be used in defense; that's already happening,” Krishna added. “It's whether frontier labs can maintain differentiated guardrails once their systems become operational assets under ‘any lawful use’ contracts.”
OpenAI moved quickly to fill the gap, with CEO Sam Altman announcing a Pentagon deal on Friday night covering classified military networks, claiming it included the same guardrails Anthropic had sought.
Yesterday we reached an agreement with the Department of War for deploying advanced AI systems in classified environments, which we asked they make available to all AI companies.
We think our deployment has more guardrails than any previous agreement for classified AI…
— OpenAI (@OpenAI) February 28, 2026
Asked whether the Pentagon's effective blacklisting of Anthropic set a troubling precedent for future disputes with AI firms, OpenAI CEO Sam Altman responded on X, “Yes; I think it's an extremely scary precedent, and I wish they had handled it a different way.
“I don't think Anthropic handled it well either, but as the more powerful party, I hold the government more accountable. I'm still hoping for a much better resolution,” he added.
Meanwhile, nearly 500 employees from OpenAI and Google signed an open letter warning that the Pentagon was attempting to pit AI companies against one another.