Anthropic investigates alleged breach of powerful cyber-security tool
Anthropic is examining claims that a small group obtained unauthorized access to its closely guarded Claude Mythos model, a cyber-security AI tool the company has withheld from public release due to its advanced capabilities.
How the alleged access occurred
The company stated it is looking into reports that users in a private online forum managed to use the model without proper authorization, potentially through a third-party vendor's environment.
"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," Anthropic said in a statement.
No evidence of malicious intent or system compromise
Anthropic has not found signs that its systems were breached or that the model was obtained by malicious actors. However, the incident has raised concerns about the ability of AI firms to prevent their most powerful tools from falling into unintended hands.
Raluca Saceanu, CEO of cyber-security firm Smarttech247, suggested the access was likely the result of misuse rather than a traditional hack. According to Bloomberg, the individual who gained access already had permission to view Anthropic's AI models through work with a third-party contractor.
Potential risks of uncontrolled access
The group reportedly used the model after gaining access but refrained from carrying out hacking activity with it, likely to evade detection. Saceanu warned that unauthorized use of such tools could enable fraud, cyber abuse, or other harmful activities.
"When powerful AI tools are accessed or used outside their intended controls, the risk is not just a security incident but the spread of capabilities that could be used for fraud, cyber abuse, or other malicious activity," she said.
UK cyber officials urge balanced approach to AI security
At the CyberUK conference in the UK, National Cyber Security Centre (NCSC) chief Richard Horne emphasized that advanced AI tools could enhance security if properly safeguarded. He encouraged organizations to focus on fundamental cyber-security practices rather than fearing new AI-driven threats.
"As we have seen in the media in recent days, frontier AI is rapidly enabling discovery and exploitation of existing vulnerabilities at scale, illustrating how quickly it will expose where fundamentals of cyber-security are still to be addressed," Horne said.
Government calls for collaboration with AI firms
Security Minister Dan Jarvis urged AI companies to work with the UK government on the "generational endeavor" of ensuring AI is used to protect critical networks. The UK relies on foreign-developed models like Mythos, as the most advanced AI systems are created outside the country, primarily in the US and China.
The NCSC also highlighted the growing threat of nation-state and hacktivist attacks, particularly from Russia and China, noting that cyber-security has become a critical component of modern defense strategies.