Anthropic vs Pentagon: Court grants injunction, questions govt’s move to ‘cripple’ AI firm

A federal judge in San Francisco has granted a preliminary injunction in favour of artificial intelligence company Anthropic, marking a significant early victory in its legal confrontation with the Trump administration and the Pentagon.

The ruling temporarily halts US government actions that had effectively barred federal agencies from using Anthropic’s technology, as a broader legal contest unfolds.

Court Blocks Pentagon Blacklisting Amid Legal Scrutiny

Judge Rita Lin issued the decision following a contentious hearing on Tuesday, during which lawyers for both Anthropic and the US government presented arguments over the legality of the company’s designation as a national security risk. The injunction pauses the enforcement of restrictions that had prohibited federal agencies from using Anthropic’s Claude models.

The case centres not merely on procurement decisions, but on whether the government overstepped legal boundaries in its treatment of the company. A final judgment, however, remains months away.

During proceedings, Lin expressed scepticism regarding the rationale behind the government’s actions, suggesting that the measures appeared punitive rather than procedural.

“One of the amicus briefs used the term ‘attempted corporate murder.’ I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic,” Lin said.

‘Supply Chain Risk’ Label Sparks Industry Alarm

The dispute began after the Department of Defense formally designated Anthropic a “supply chain risk”, a classification typically reserved for foreign adversaries. Defence Secretary Pete Hegseth had earlier declared that the company’s technology posed a threat to US national security.

The designation carries sweeping consequences. Defence contractors, including Amazon, Microsoft and Palantir, are now required to certify that they do not use Anthropic’s Claude models in military-related work. Anthropic is the first US-based firm to be publicly subjected to such a classification, raising concerns across the technology sector.

Anthropic has challenged the designation under two separate statutory frameworks, 10 U.S.C. § 3252 and 41 U.S.C. § 4713, necessitating parallel legal actions, including an appeal filed in Washington, D.C.

Trump Directive Intensifies Conflict Over AI Governance

The legal conflict escalated further following a directive from President Donald Trump, who ordered federal agencies to discontinue the use of Anthropic’s technology. In a post on Truth Social, he mandated an immediate cessation, alongside a six-month phase-out period for agencies such as the Department of Defense.

“WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about,” Trump wrote.

The administration’s stance has surprised many within Washington, particularly given Anthropic’s prior integration into sensitive defence systems. The company had been among the first to deploy AI models across classified Pentagon networks and was widely regarded as a reliable partner capable of working alongside established defence contractors.

Contract Breakdown Reveals Ethical Fault Lines in Military AI

At the heart of the dispute lies a failed negotiation over a $200 million Pentagon contract signed in July. Talks broke down months later over fundamental disagreements regarding the permissible use of Anthropic’s technology.

The Department of War reportedly sought unrestricted access to Anthropic’s AI models for all lawful applications. Anthropic, by contrast, sought explicit guarantees that its systems would not be deployed for fully autonomous weapons or domestic mass surveillance.

The impasse reflects a broader tension within the AI industry, where ethical considerations increasingly intersect with national security priorities.

Legal Battle Shifts Focus to US Government’s Authority

Judge Lin underscored that the case is not about the government’s right to choose its contractors, but about the legality of its actions in excluding Anthropic.

“Everyone, including Anthropic, agrees that the Department of [Defense] is free to stop using Claude and look for a more permissive AI vendor,” Lin said during the hearing Tuesday. “I don’t see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law.”

The ruling ensures that, for now, Anthropic is shielded from what it argues would be severe financial and reputational harm. Yet the broader implications—touching on executive authority, AI governance, and the limits of national security claims—remain unresolved.

As the case proceeds, it is poised to become a defining legal test for how governments regulate artificial intelligence companies operating at the frontier of defence technology.