Unauthorized users gain access to Anthropic's Mythos AI model, report claims

Technology · Artificial Intelligence · Cybersecurity

Anthropic is investigating reports that a small group of unauthorized users gained access to its Mythos AI model. The company has withheld the technology from public release, warning that its ability to detect cybersecurity vulnerabilities could enable dangerous cyberattacks.

The unauthorized users reportedly accessed the Claude Mythos Preview by guessing the model's URL on the same day Anthropic announced Project Glasswing. Coordinating through a private Discord channel, the group has since used the model regularly, though reportedly not for cybersecurity purposes.

Anthropic confirmed it is investigating the claims but maintains there is no evidence its core systems have been compromised. The incident highlights the difficulty of keeping access to frontier AI capabilities restricted.

Hackers breach Anthropic's 'too dangerous to release' Mythos AI model, report

euronews.com

Anthropic investigates report of rogue access to hack-enabling Mythos AI

theguardian.com

Unauthorized users gained access to Anthropic’s restricted Mythos AI model on launch day via a third-party contractor’s environment

thenextweb.com

Unauthorized group has gained access to Anthropic’s exclusive cyber tool Mythos, report claims

techcrunch.com

Anthropic's Mythos AI model accessed by unauthorised users, Bloomberg News reports

straitstimes.com

Anthropic's Mythos model accessed by unauthorized users, Bloomberg News reports

reuters.com

Anthropic’s Mythos Model Is Being Accessed by Unauthorized Users

bloomberg.com