Synopsis
A federal judge temporarily stopped the Department of Defense from calling Anthropic a security risk, allowing federal contracts to continue. The dispute involves a $200 million deal and limits on AI use in warfare. Anthropic filed lawsuits, claiming harm and rights violations, while the Pentagon argued national security concerns remain.

In the ruling, Judge Rita F. Lin of the U.S. District Court for the Northern District of California said Anthropic would not be restricted from continuing with its federal contracts for now. The ruling is not a final decision, as the case continues.
The case stems from a dispute between the Pentagon and Anthropic this year over a $200 million contract and the use of AI in warfare. During the contract negotiations, Anthropic sought limits on the use of its AI for surveillance and in autonomous weapons, while the Department of Defense argued that no private contractor could dictate how it uses technology.
Defense Secretary Pete Hegseth then labeled Anthropic a "supply chain risk," a designation typically applied to foreign companies that pose national security concerns. The label effectively blacklists a company from doing business with U.S. government entities.
Anthropic subsequently filed two lawsuits -- one in federal court in California and one in the U.S. Court of Appeals for the District of Columbia Circuit -- accusing the Pentagon of using the designation inappropriately to punish the company on ideological grounds.
The outcome has implications for AI use in war and raises questions about whether the Trump administration could assign the same label to other technology companies that may disagree with how the government is using their products. Microsoft and some employees of OpenAI and Google filed amicus briefs in support of Anthropic in the case.
An Anthropic spokesperson said that the company was "grateful to the court for moving swiftly" and that its focus "remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."
A Department of Defense spokesperson did not immediately respond to a request for comment.
Anthropic, which is led by Dario Amodei, has long said AI must have limits for safety reasons. In the case, Anthropic argued that the Pentagon was not only punishing it on ideological grounds but also violating its First Amendment rights by slapping the label on it after the company spoke out.
Anthropic added that its business was suffering "irreparable harm" from the designation, and asked the courts for a temporary stay while it continued to argue its cases.
In response, the Department of Defense said in a legal filing that Anthropic posed an "unacceptable risk" to national security because the startup could disable or alter its technology to suit its own interests, rather than the country's priorities, in a time of war. That meant Anthropic was a supply chain risk, the Department of Defense said.
At a hearing in San Francisco on Tuesday, Lin appeared skeptical of the Pentagon's arguments for applying the label to Anthropic.
"It looks like an attempt to cripple Anthropic," she said.