In brief
- A federal judge has blocked the Pentagon from labeling Anthropic a supply chain risk, finding the move likely violated the company's First Amendment and due process rights.
- The dispute stemmed from a $200 million Defense Department AI contract that collapsed after Anthropic refused to permit use of its model for mass surveillance or lethal autonomous warfare.
- The ruling temporarily restores Anthropic's standing with federal contractors and could shape how AI firms set usage limits in government deals.
A federal judge has blocked the Pentagon from labeling Anthropic as a supply chain risk, ruling Thursday that the government's campaign against the AI company violated its First Amendment and due process rights.
U.S. District Judge Rita Lin issued a preliminary injunction from the Northern District of California two days after hearing oral arguments from both sides, in a case observers say was made inevitable by the government's own paperwork.
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," Judge Lin wrote.
The internal record was fatal to the government's case, according to Andrew Rossow, public affairs attorney and CEO of AR Media Consulting, who told Decrypt that the designation was "triggered by press conduct, not a security assessment."
"The government essentially wrote down its own motive, and it was retaliation," Rossow said.
The dispute centers on a two-year, $200 million contract awarded to Anthropic in July 2025 by the Department of War's Chief Digital and Artificial Intelligence Office.
Negotiations to deploy Claude to the department's GenAI.Mil platform broke down after the two sides failed to agree on usage restrictions.
Anthropic insisted on two conditions: that Claude not be used for mass surveillance of Americans or for lethal use in autonomous warfare, arguing the model was not yet safe for either purpose.
At a February 24 meeting, Secretary of War Pete Hegseth told Anthropic's representatives that if the company did not drop its restrictions by February 27, the department would immediately designate it a supply chain risk.
Anthropic refused to comply.
On the same day, President Trump posted a directive on Truth Social ordering every federal agency to "immediately cease" using the company's technology, calling Anthropic a "radical left, woke company."
A little over an hour later, Hegseth described Anthropic's stance as a "master class in arrogance and betrayal," ordering that no contractor doing business with the military may conduct commercial activity with the firm. The formal supply chain designation followed by letter on March 3.
Anthropic sued the government on March 9, alleging violations of the First Amendment, due process, and the Administrative Procedure Act.
"Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic unlawful First Amendment retaliation," Judge Lin wrote in Thursday's order.
The order, which was stayed for seven days, blocks all three government actions, requires a compliance report by April 6, and restores the status quo that existed before the events of February 27.
Weaponizing the law
The "supply chain risk" designation has historically been reserved for foreign intelligence services, terrorists, and other hostile actors.
It had never been applied to a domestic company before Anthropic. Defense contractors began assessing, and in many cases terminating, their reliance on Anthropic in the weeks that followed, Judge Lin's order noted.
And the government's posturing may have unforeseen consequences, experts argue.
Indeed, Thursday's ruling could push AI companies "to formalize ethical guardrails when working with governments," Pichapen Prateepavanich, policy strategist and founder of infrastructure firm Gather Beyond, told Decrypt.
To some extent, the ruling also suggests that companies "can set clear usage limits without automatically triggering punitive regulatory action," she said.
But this "doesn't remove the tension," she added. What the ruling limits is "the ability to escalate that disagreement into broader exclusion or labeling that looks retaliatory."
Still, applying existing statutory authority to designate a company a supply chain risk "because it refused to remove safety guardrails" is not an extension of the supply chain risk statute, Rossow explained. Instead, it operates as a "weaponization" of the law.
"This is part of an ongoing pattern of behavior by the White House whenever they're challenged, resulting in disproportional, emotionally-driven and biased threats and government extortion," he added.
If the government's "theory" is accepted, it would create a "dangerous" precedent in which AI firms could be blacklisted for safety policies the government dislikes, "before any harm occurs," without due process, under the banner of national security, Rossow said.