Anthropic, the creator of the AI software Claude, has sued the Trump administration for what it says is an “unlawful campaign of retaliation” after the company refused to allow the military unrestricted use of its technology.
Anthropic sued several government agencies and officials in a California federal court on Monday, asking the court to reverse the Department of Defense’s decision to label the company a “supply chain risk.”
It also seeks to overturn US President Donald Trump’s directive ordering federal employees to stop using Claude. Anthropic additionally filed suit in a Washington, D.C., appeals court to challenge the Defense Department’s decision.
“These actions are unprecedented and unlawful,” Anthropic argued. “The Constitution does not permit the government to wield its enormous power to punish a company for its protected speech.”
Claude “never tested” for uses needed by Pentagon
Last month, Defense Secretary Pete Hegseth, who is named in the lawsuit, moved to label Anthropic a supply chain risk. The designation, finalized on March 3, means that any person or business working with the military cannot also deal with Anthropic.
It is the first time an American company has been designated a supply chain risk, a label usually reserved for companies tied to foreign adversaries.
The US government and the Pentagon have used Anthropic since 2024, and the company’s technology is the first AI to be deployed for use in classified work.
Anthropic said Hegseth’s decision came after he demanded the company “discard its usage restrictions altogether.” Anthropic maintained that its technology should not be used for lethal autonomous warfare or mass surveillance of Americans, clauses that have always been part of its government contracts.

“Anthropic has never tested Claude for these uses,” the company said in its lawsuit. “Anthropic currently does not believe, for example, that Claude would perform reliably or safely if used to assist lethal autonomous warfare.”
Related: US military used Anthropic in Iran strike despite ban order by Trump: WSJ
Anthropic’s lawsuit also named the US Treasury and its secretary, Scott Bessent, the State Department and Secretary of State Marco Rubio, along with 17 other government agencies and officials.
A group of more than 30 AI engineers and scientists from OpenAI and Google, including the latter’s chief scientist, Jeff Dean, also filed a legal brief in support of Anthropic on Monday.
“If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the group wrote.
