A federal judge in California has temporarily blocked the Department of Defense’s blacklisting of Anthropic, the company behind the Claude AI model. The ruling pauses punitive measures taken against the firm following a dispute with the Pentagon.
The case centers on Anthropic’s refusal to allow its AI technology to be used in autonomous weapons systems. The company argued that the Trump administration violated its First Amendment rights by declaring it a supply chain risk and ordering federal agencies to stop using its technology, a move that could cost the firm billions of dollars in revenue.
The judge stated that the government’s actions “appear designed to punish Anthropic” and halted enforcement of the administration’s directive, ordering it to rescind the recent restrictions placed on the AI company.
The US district court for the northern district of California will continue to hear the case, marking an early victory for Anthropic in its legal battle with the federal government.