Politics

Federal judge blocks Pentagon from banning Anthropic AI tools

Court halts government ban on Anthropic AI

A California federal judge has temporarily blocked the Pentagon from enforcing directives that would have barred federal agencies from using artificial intelligence tools developed by Anthropic. Judge Rita Lin ruled Thursday that the orders, issued by President Donald Trump and Defense Secretary Pete Hegseth, appeared to violate the company's First Amendment rights.

Judge questions government motives

In her ruling, Judge Lin accused the government of attempting to "cripple Anthropic" and "chill public debate" over its technology. She described the actions as "classic First Amendment retaliation," noting that public statements by Trump and Hegseth had labeled the company "woke" and its employees "left-wing nut jobs" rather than citing security concerns.

"If this were merely a contracting impasse, the Department of Defense would presumably have just stopped using Claude," Lin wrote, referring to Anthropic's flagship AI model. "The challenged actions, however, far exceed what could reasonably address a national security interest."

Background of the dispute

Anthropic filed the lawsuit earlier this month after the Pentagon designated the company a "supply chain risk," a label historically reserved for firms based in adversarial nations. The move followed public criticism from Trump and Hegseth, who accused Anthropic of posing a security threat without providing evidence of vulnerabilities in its systems.

The Pentagon argued that the designation stemmed from Anthropic's refusal to accept new contract terms for a $200 million deal. The revised agreement would have allowed the government to use Anthropic's tools for "any lawful use," a provision the company feared could enable mass surveillance of U.S. citizens or the development of fully autonomous weapons.

What the ruling means

The temporary injunction allows federal agencies and military contractors to continue using Anthropic's AI tools, including Claude, while the lawsuit proceeds. The company's legal team welcomed the decision and emphasized its commitment to collaborating with the government on "safe, reliable AI" that benefits all Americans.

"Our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."

Anthropic spokeswoman

Neither the White House nor the Department of Defense responded to requests for comment on the ruling.

Contract negotiations collapse

Tensions escalated in February when Hegseth issued an ultimatum for Anthropic to accept the Pentagon's revised terms. The company declined, citing ethical concerns over the broad language of the proposed contract. Anthropic's CEO, Dario Amodei, had previously warned that the "any lawful use" clause could lead to unintended applications of its technology.

The designation marks the first time a U.S.-based company has been publicly labeled a supply chain risk, a status that typically triggers immediate bans on government use of a vendor's products or services.
