New Delhi | Apr 8, 2026 12:18 PM IST
Artificial intelligence company Anthropic has unveiled a preview of what it says is the most powerful model it has ever built — an advanced system called Claude Mythos — but unlike typical AI launches, the company is deliberately keeping the technology away from the public for now.
The model, which represents a major leap in capability over Anthropic’s existing AI systems, has drawn global attention because of its ability to autonomously identify serious vulnerabilities in widely used software and infrastructure. Early testing suggests the system can detect security flaws across operating systems, browsers and enterprise software at a pace far beyond most human researchers.
Instead of a full commercial rollout, Anthropic has opted for a limited preview under a new cybersecurity initiative called “Project Glasswing”, through which the model will be made available only to a small group of technology companies, infrastructure operators and security organisations. The move reflects growing concern in the AI industry that frontier models could dramatically accelerate cyberattacks if deployed without safeguards.
The decision also highlights a broader dilemma confronting the AI sector: the same systems that can strengthen digital defences can also become powerful tools for hackers. As AI models become more autonomous and capable of reasoning through complex technical problems, experts warn that the line between defensive and offensive cyber capabilities is rapidly blurring.
Anthropic’s most powerful model yet
Mythos is part of a new generation of large AI systems that sit above Anthropic’s existing flagship models in terms of reasoning, coding ability and problem-solving. Internally described as a “step change” in capability, the system is designed to analyse software, understand complex codebases and identify security weaknesses with minimal human supervision.
Early experiments have already demonstrated its potential impact. The model was able to discover thousands of vulnerabilities across major operating systems and web browsers, including some that had remained undetected for decades.
The system’s efficiency is also striking. Anthropic researchers say the model is roughly an order of magnitude faster than previous tools at identifying security bugs, shrinking the window between the moment a flaw is discovered and the moment it can be exploited.
Because of these capabilities, Anthropic has decided not to release Mythos publicly. Instead, it is giving access only to a limited group of partners such as major technology firms and cybersecurity organisations that manage critical infrastructure.
The goal is to allow defenders to use the technology to identify and patch vulnerabilities before similar AI capabilities become widely available. Anthropic has also been engaging with government agencies and industry groups to assess the risks of deploying such powerful systems.
The broader cybersecurity risks of advanced AI
The cautious rollout of Mythos underscores a growing fear among security experts: that advanced AI could soon transform cyber warfare and digital crime.
Traditionally, identifying software vulnerabilities required specialised expertise and weeks or months of manual analysis. Powerful AI models can automate much of this process, enabling attackers to scan vast codebases and launch multiple hacking campaigns simultaneously.
Researchers warn that AI-driven tools could dramatically increase the scale and speed of cyberattacks. Autonomous AI agents capable of reasoning and writing code can potentially identify flaws, generate exploits and deploy attacks with minimal human input.
This shift could also lower the technical barrier for cybercrime. Models capable of generating working exploits may allow individuals with limited cybersecurity training to launch sophisticated attacks.
At the same time, organisations are increasingly integrating AI agents into internal workflows, sometimes connecting them to corporate systems and databases. Security experts caution that poorly configured AI tools could unintentionally create new entry points for hackers.
© The Indian Express Pvt Ltd