Anthropic is challenging the Pentagon’s decision to designate it as a national security and supply chain risk. The AI company, maker of the Claude chatbot, is seeking a temporary halt to the designation in federal court.
During a hearing, a judge questioned the Department of Defense's motivations for the designation. Anthropic alleges the move is unlawful retaliation for its refusal to relax AI safety restrictions for military applications.
The company calls the government's actions "unprecedented and stigmatizing" and says the designation could cost it billions of dollars in lost revenue. Anthropic is suing to block the designation, arguing that it unfairly harms its business.