Anthropic Challenges Pentagon’s AI Ban in Court


Anthropic is challenging the Pentagon’s decision to designate it as a national security and supply chain risk. The AI company, maker of the Claude chatbot, is seeking a temporary halt to the designation in federal court.

During a hearing, a judge questioned the Department of Defense’s motivations for the labeling. Anthropic alleges the move is unlawful retaliation for refusing to relax AI safety restrictions for military applications.

The company calls the government’s actions “unprecedented and stigmatizing” and says they could cost it billions of dollars in lost revenue. Anthropic is suing to block the designation, arguing it unfairly harms its business.

Related coverage:

- “Judge says government's Anthropic ban looks like punishment” (npr.org)
- “Judge Calls US Government Ban on Anthropic AI Tools ‘Troubling’” (bloomberg.com)
- “Pentagon’s ‘Attempt to Cripple’ Anthropic Is Troubling, Judge Says” (wired.com)
- “U.S. Government’s Ban on Anthropic Looks Like Punishment, Judge Says” (wsj.com)
- “Anthropic challenges US Pentagon’s ban in San Francisco court showdown” (aljazeera.com)
- “Anthropic and Pentagon head to court as AI firm seeks end to 'stigmatizing' supply chain risk label” (abcnews.com)
- “Anthropic v. US Department of War: AI company challenges government in court” (euronews.com)