In short
- Dario Amodei says Anthropic won't remove its bans on mass domestic surveillance and fully autonomous weapons.
- The Pentagon has threatened contract termination and possible action under the Defense Production Act.
- The standoff follows reports that the U.S. military used Claude to capture former Venezuelan President Nicolás Maduro.
Anthropic CEO Dario Amodei said Thursday the company won't remove safeguards from its Claude AI model, escalating a dispute with the U.S. Department of Defense over how the technology can be used in classified military systems.
The statement comes as the Defense Department reviews its relationship with Anthropic and weighs potential penalties, including cancellation of the company's $200 million contract and possible invocation of the Defense Production Act.
"We cannot in good conscience accede to their request," Amodei wrote, referring to the Pentagon's demand in January that AI contractors permit use of their systems for "any lawful use."
While the Pentagon has since required AI vendors to adopt standard "any lawful use" language in future agreements, Anthropic has remained the only frontier AI firm to resist turning over control of its AI to the military.
On Wednesday, Axios first reported that the Pentagon had issued an ultimatum requiring unrestricted military use of Claude. The deadline is reportedly Friday of this week.
"It's the Department's prerogative to select contractors most aligned with their vision," Amodei continued. "But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider."
In his statement, Amodei framed the company's stance as aligned with U.S. national security goals.
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries," he said.
He added that Claude is "widely deployed across the Department of War and other national security agencies for intelligence analysis, modeling and simulation, operational planning, cyber operations, and more."
War on AI
The dispute unfolds against broader concerns about how advanced AI systems behave in high-stakes military scenarios. In a recent King's College London study, OpenAI's GPT-5.2, Anthropic's Claude Sonnet 4, and Google's Gemini 3 Flash deployed nuclear weapons in 95% of simulated geopolitical crises.
During a speech at SpaceX's Starbase in Texas in January, Defense Secretary Pete Hegseth said the U.S. military plans to deploy the most advanced AI models.
That same month, reports surfaced that Claude had been used in a U.S. operation to capture former Venezuelan President Nicolás Maduro. Amodei disputed claims that Anthropic had questioned any specific military operations.
"Anthropic understands that the Department of War, not private companies, makes military decisions," he said. "We have never raised objections to particular military operations nor tried to limit use of our technology in an ad hoc manner."
Still, Amodei said using these systems for mass domestic surveillance or autonomous weapons is incompatible with democratic values and presents serious risks.
"Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons," he said. "We will not knowingly provide a product that puts America's warfighters and civilians at risk."
He also addressed the Pentagon's threat to designate Anthropic a "supply chain risk" while potentially invoking the Defense Production Act.
"These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security," he said.
While Amodei has said the company will not comply with the Pentagon's request, Anthropic has at the same time revised its Responsible Scaling Policy, dropping a pledge to halt training of advanced systems without guaranteed safeguards in place.
Robert Weissman, co-president of Public Citizen, said the Pentagon's posture signals broader pressure on the tech industry.
"The Pentagon is publicly bullying Anthropic, and the public part is intentional, because they want to pressure this particular company and send a message to all big tech and all companies that we intend to do and take whatever we want, and don't get in our way," Weissman told Decrypt.
Weissman described Anthropic's guardrails as "modest" and aimed at preventing "improper surveillance of American people or to facilitate the development and deployment of killer robots, AI-enabled weaponry that could launch lethal strikes without humans' say-so."
"These are the most sensible and modest guardrails you could come up with regarding this powerful new technology."
Regarding the Pentagon's threat to designate Anthropic a "supply chain risk," Weissman called it a potentially crushing penalty from the government and argued it would pressure other AI firms to avoid imposing similar limits.
"Individuals may use Claude, but none of the AI companies, particularly Anthropic, have business models based on individual use; they're looking for enterprise use," he said. "This is a potentially crushing penalty from the government."
While the Pentagon has not yet said whether it plans to follow through on its threat to terminate the contract or invoke the Defense Production Act, Weissman said the Pentagon is signaling to AI companies that it expects unrestricted access to their technology once it's deployed in government systems.
"The message of the Pentagon is, 'we're not going to tolerate this, and we expect to be able to use the technology as it's invented for any purpose we want,'" Weissman said.
The Department of Defense and Anthropic did not immediately respond to Decrypt's requests for comment.