In brief
- Anthropic sued federal agencies after being labeled a national security "supply chain risk."
- The dispute stems from the company's refusal to allow unrestricted military use of its AI.
- The designation bars Pentagon contractors from doing business with the firm.
Anthropic has turned to the federal courts to fight a sweeping blacklist by the Donald Trump administration, claiming the government branded the AI startup a national security threat in retaliation for its refusal to relax safety protocols.
The lawsuit, filed Monday in the U.S. District Court for the Northern District of California, challenges actions taken after President Trump directed federal agencies in February to stop using Anthropic's technology. That directive followed public comments from Anthropic CEO Dario Amodei, who said the company would not comply with the Pentagon's request for unrestricted access to Claude. The complaint names several federal agencies and senior officials as defendants, including Defense Secretary Pete Hegseth, Treasury Secretary Scott Bessent, and Secretary of State Marco Rubio.
"The Constitution does not permit the government to wield its enormous power to punish a company for its protected speech," attorneys for Anthropic said in the lawsuit. "No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation."
At the direction of @POTUS, the @USTreasury is terminating all use of Anthropic products, including the use of its Claude platform, within our department.
The American people deserve confidence that every tool in government serves the public interest, and under President Trump… https://t.co/R7rF0ci5CY
— Treasury Secretary Scott Bessent (@SecScottBessent) March 2, 2026
The dispute began in January, when Pentagon officials demanded that AI contractors allow their systems to be used for "any lawful use," including military purposes. While Anthropic had by then already entered into a $200 million contract with the Department of Defense, it refused to remove two safeguards prohibiting the use of Claude for mass domestic surveillance of Americans or for fully autonomous lethal weapons systems.
"The Challenged Actions inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance," attorneys for Anthropic stated in the lawsuit.
To some AI developers, including SingularityNET CEO Ben Goertzel, the designation is an odd choice that doesn't fit the usual meaning of a supply chain threat, a label typically reserved for software from adversaries that could contain hidden malware, viruses, or spyware.
"Anthropic not being willing to have their software used for autonomous killing or mass surveillance doesn't seem to pose a risk of that nature," Goertzel told Decrypt. "That just means if you want to use software for autonomous killing or mass surveillance, then buy somebody else's software. So the logic of making it a supply chain risk eludes me."
Goertzel said the similarities among leading AI models may limit the practical impact of the decision.
"In the end, Claude, ChatGPT, and Gemini aren't that far off from one another," he said. "As long as one of these top systems is being used by the U.S. government, it's all about the same thing. And the intelligence agencies, under the cloak of top secret clearance, would use the software however they wanted."
Anthropic is asking the court to declare the government's actions unlawful and to block enforcement of the "supply chain risk" designation that prevents federal agencies and Pentagon contractors from doing business with the company.
"There is no valid justification for the Challenged Actions," the lawsuit said. "The Court should declare them unlawful and enjoin Defendants from taking any steps to enforce them."
Anthropic did not immediately respond to Decrypt's requests for comment.
Even after the government designated Anthropic a risk to national security, Claude has continued to be used in ongoing military operations, including by U.S. Central Command to help analyze intelligence and identify targets during strikes on Iran.
Jennifer Huddleston, a senior fellow in technology policy at the Cato Institute, said in a statement shared with Decrypt that the case raises concerns about constitutional protections when national security claims are used to justify government action.
"While the courts have been hesitant in the past to question the government's claims of national security concerns, the circumstances of this case certainly highlight the real risk to the First Amendment rights of Americans if the underlying considerations of such claims aren't fully scrutinized," she said.

