In short
- OpenAI signed an agreement with the Pentagon to deploy AI in classified environments.
- The firm said it imposed “red lines,” but the contract permits “all lawful purposes,” a standard that ultimately depends on the government’s own interpretation.
- The controversy sparked the QuitGPT movement and drove a surge in Claude downloads.
OpenAI said this weekend that it reached an agreement with the Pentagon to deploy advanced AI systems in classified environments, marking a significant expansion of the company’s work with the U.S. military.
The announcement came less than 24 hours after the Trump administration blacklisted Anthropic, designating the rival AI firm a “supply chain risk to national security” following a dispute over contract language related to surveillance and autonomous weapons.
President Donald Trump also directed federal agencies to immediately stop using Anthropic’s technology, with Treasury Secretary Scott Bessent writing Monday on X that the agency “is terminating all use of Anthropic products, including the use of its Claude platform, within our department.”
“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the great leaders I appoint to run our Military.
The Leftwing nut jobs at Anthropic… pic.twitter.com/aIEx92nnyx
— The White House (@WhiteHouse) February 27, 2026
The timing of the announcements placed OpenAI’s deal under intense scrutiny. In a detailed blog post, the company outlined what it described as firm “red lines” and layered safeguards governing its Pentagon partnership.
The agreement, as presented by OpenAI, raises broader questions about how AI systems will be governed in national security settings, and how the company’s stated restrictions will be interpreted and enforced in practice.
When “lawful” isn’t enough
OpenAI’s blog post opens with three commitments framed as non-negotiable: no use of its technology for mass domestic surveillance, no independent direction of autonomous weapons systems, and no high-stakes automated decisions like social credit scoring.
Then comes the actual contract language, which OpenAI notably calls “the relevant language,” not “the full agreement.”
“The Department of War may use the AI system for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols,” OpenAI said.
That is the exact phrase Anthropic said the government had been demanding throughout negotiations. The exact phrase Anthropic refused to go along with. OpenAI signed it, yet argues its red lines remain fully intact.
Still, “lawful” in national security contexts is not a fixed boundary; it lives within a patchwork of statutes, executive orders, internal directives, and often classified legal interpretations. When a contract grants “all lawful purposes,” the practical limit becomes the government’s current legal envelope, not an independent standard set by the vendor.
A cluster of clauses
The weapons provision states that the AI system “will not be used to independently direct autonomous weapons in any case where law, regulation, or department policy requires human control.”
The prohibition applies only where some other authority already requires human control; it borrows its teeth entirely from existing policy, specifically DoD Directive 3000.09. That directive requires autonomous systems to allow commanders to exercise “appropriate levels of human judgment over the use of force.”
And “appropriate” is about as subjective as a standard can be.
Human judgment is not human control. That distinction was not accidental. Defense scholars have noted that omitting “human-in-the-loop” language was deliberate, precisely to preserve operational flexibility.
OpenAI’s strongest counterargument is its cloud-only deployment architecture: fully autonomous lethal decision loops would require edge deployment on battlefield devices, which this contract does not permit. That is a real technical constraint.
But cloud-based AI can still perform target identification, pattern-of-life analysis, and mission planning. These are kill-chain activities regardless of where the final trigger sits. The outcome for a target does not differ based on which server the model runs on.
The surveillance clause follows a similar pattern. OpenAI’s stated red line: no mass domestic surveillance. The contract language: the system “shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities,” followed by a list of the Fourth Amendment, FISA, and Executive Order 12333.
The word “unconstrained” implies that a constrained version of mass surveillance could be permissible. And EO 12333 is the executive order the NSA has used to justify intercepting Americans’ communications when the interception happens outside U.S. borders.
That is where Anthropic’s concerns about wording throughout the negotiations become apparent. Anthropic’s argument was that existing law hasn’t caught up with what AI makes possible. The government can legally purchase vast amounts of aggregated commercial data about Americans without a warrant, and has already done so.
OpenAI’s contract language, by anchoring its protections to existing legal frameworks, may not close the gap Anthropic was actually worried about.
Altman responds
On Saturday night, Altman held an AMA responding to thousands of questions about the deal. When asked what would cause OpenAI to walk away from a government partnership, he answered: “If we were asked to do something unconstitutional or illegal, we’ll walk away.”
If we were asked to do something unconstitutional or illegal, we’ll walk away. Please come visit me in jail if necessary.
— Sam Altman (@sama) March 1, 2026
That framing places OpenAI’s limit at legality, not at an independent ethical judgment about what the company will or won’t permit even when it happens to be legal, which is the standard Anthropic defends. Asked whether he worried about future disputes over what counts as “legal,” he acknowledged the risk: “If we have to take on that fight we will, but it obviously exposes us to some risk.”
On why OpenAI reached a deal where Anthropic couldn’t, Altman offered this: “Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. I would obviously rather rely on technical safeguards if I only had to pick one. I think Anthropic may have wanted more operational control than we did.”
That is a substantive philosophical difference. Anthropic argued that because frontier models can be repurposed for intelligence and military workflows in ways that are hard to anticipate, the boundaries must be explicit and binding in writing, even at the cost of the deal. OpenAI’s position is that technical architecture, embedded personnel, and existing law together constitute a stronger safeguard than contractual text alone.
The public picked a side

The backlash was swift. By Monday, the “QuitGPT” movement claimed that over 1.5 million people had taken action: canceling subscriptions, sharing boycott posts, or signing up at quitgpt.org.
The campaign framed OpenAI’s move as prioritizing military contracts over user safety, accusing the company of agreeing to let the Pentagon use its technology for “any lawful purpose, including killer robots and mass surveillance.”
OpenAI might contest that characterization. But the market moved regardless.
Anthropic’s Claude surged past ChatGPT to become the most downloaded free app in the United States on Apple’s App Store, with the company telling Decrypt that it saw record daily signups over the weekend.
Pop star Katy Perry shared a screenshot of Claude’s pricing page on X. Hundreds of users documented their subscription cancellations publicly on Reddit. Graffiti praising Anthropic appeared outside its San Francisco offices, while chalked attacks covered OpenAI’s sidewalks. Even hundreds of OpenAI’s own employees had previously signed an open letter supporting Anthropic’s refusal to accede to Pentagon demands.

The QuitGPT framing is emotionally compelling, but not entirely precise. Anthropic itself has a partnership with Palantir and Amazon Web Services that grants U.S. intelligence agencies and defense departments access to Claude models, and those models have allegedly been used in military operations to overthrow the governments of Venezuela and Iran. The ethics of AI and national security contracting were never clean on either side.
What the campaign captured, accurately, is that a large segment of users believed there was a meaningful difference between how the two companies drew their limits, and voted with their subscriptions.
Whether that difference is as meaningful as it appears requires reading the contract carefully.