Anthropic sues US government over 'unlawful' supply chain risk label

AI firm challenges Pentagon's unprecedented move

Anthropic, a leading artificial intelligence company, filed a federal lawsuit on Monday accusing the U.S. government of overstepping its authority by designating the firm a "supply chain risk," a first for any American business. The legal action targets multiple agencies and top officials, including Defense Secretary Pete Hegseth and President Donald Trump's executive office.

Dispute over military use of AI tools

The conflict erupted after Anthropic refused to remove restrictions on its AI systems, which prohibit "lethal autonomous warfare" and "mass surveillance of Americans." The company, whose Claude AI platform is widely used by tech giants like Google, Meta, and Microsoft, has worked with the U.S. government since 2024, including on classified projects.

According to the lawsuit, Hegseth demanded the removal of all usage limits from defense contracts. While Anthropic was nearing a compromise to maintain safeguards, negotiations collapsed after Trump publicly criticized the company as "left-wing nut jobs" and ordered federal agencies to cease using its tools.

Pentagon's retaliation and industry fallout

Hegseth swiftly labeled Anthropic a "supply chain risk," effectively barring government contractors from using its AI models. The designation, which the company calls "unprecedented and unlawful," has already disrupted contracts worth hundreds of millions of dollars, Anthropic claims.

White House spokesperson Liz Huston dismissed the lawsuit, calling Anthropic a "radical left, woke company" attempting to dictate military policy. "Our military will obey the Constitution, not a woke AI company's terms of service," she said.

Major tech firms, including Microsoft, Google, and Amazon, have vowed to continue using Claude for non-defense work. However, Anthropic argues the government's actions have caused "irreparable harm" to its reputation and business prospects.

Industry allies rally behind Anthropic

Nearly 40 employees from Google and OpenAI filed a court brief on Monday supporting Anthropic's stance on AI safeguards. The signatories, who described themselves as politically diverse, emphasized the risks of unchecked AI deployment in "domestic mass surveillance" and "autonomous lethal weapons."

"We are united in the conviction that frontier AI systems require guardrails, whether technical or contractual, to prevent misuse."

Joint statement from Google and OpenAI employees

OpenAI CEO Sam Altman acknowledged that his company rushed a new Pentagon contract in response to Anthropic's dispute, underscoring broader tensions across the industry.

Legal battle ahead

Anthropic is not seeking financial damages but asks the court to invalidate Trump's directive as unconstitutional and lift the "supply chain risk" label. Legal experts, however, predict a protracted fight.

"The Trump Administration is likely to take a 'scorched earth' approach, even if Anthropic prevails in the lower courts. This could reach the Supreme Court."

Carl Tobias, University of Richmond School of Law

The Pentagon declined to comment, citing ongoing litigation.
