An AI that finds security holes in every major operating system and web browser on Earth — then writes working exploits to hack them. The company that built it looks at what they’ve created and says: “We’re not releasing this.”
Anthropic unveiled Claude Mythos Preview this week and immediately announced it would not be publicly available. Instead, through a new initiative called Project Glasswing, Mythos is being shared exclusively with about 45 organizations, including Apple, Microsoft, Google, Amazon, Cisco, and the Linux Foundation. The mission: find and fix vulnerabilities before similar capabilities land in less responsible hands.
This isn’t marketing theater. When the U.S. Treasury Secretary and Federal Reserve Chair urgently summon Wall Street CEOs to discuss a single AI model, something genuinely new is happening.
Why Mythos Is a Different Animal
Using AI models to help find software bugs isn't new. What makes Mythos an inflection point is the leap from finding weaknesses to autonomously exploiting them.
The numbers tell the story. On Anthropic's Firefox benchmark, their previous top model turned discovered vulnerabilities into working exploits just twice in several hundred attempts. Mythos did it 181 times, with 29 additional cases where it achieved register control, a key stepping stone toward a full exploit. That isn't incremental improvement; it's a different category of capability.
In one documented case, Mythos inspected FreeBSD’s Network File System server, found a 17-year-old remote code execution vulnerability, and produced a working exploit granting root access to an unauthenticated attacker. From a single prompt.
The oldest vulnerability Mythos uncovered was 27 years old, hiding in critical security infrastructure that millions of people rely on without knowing it exists.
The Terrifying Part: Nobody Trained It for This
Anthropic says it did not explicitly train Mythos for cybersecurity. The offensive capabilities emerged as a natural consequence of improvements in coding, reasoning, planning, and autonomous tool use.
The same skills that make an AI better at writing code also make it better at breaking into systems. Finding vulnerabilities and writing exploits are fundamentally the same underlying capability as understanding and producing code — just pointed in a different direction.
“I typically am very skeptical of these things, and the open source community tends to be very skeptical, but I do fundamentally feel like this is a real threat,” said Alex Zenla, CTO of cloud security firm Edera.
Security experts highlight Mythos's ability to identify exploit chains: sequences of vulnerabilities strung together to achieve deep compromise of a target. These are the building blocks of zero-click attacks, which compromise systems without any user interaction. Constructing such chains previously required elite human expertise; Mythos automates much of that process.
Project Glasswing: Racing the Clock
Anthropic committed up to $100 million in usage credits to support Project Glasswing. Partners include Cisco, Broadcom, CrowdStrike, and the Linux Foundation alongside the big tech names.
“In the long run, you want to make sure that your defenses are machine-scale, because the attacks are machine-scale,” said Cisco’s Jeetu Patel.
The timeline is simple but terrifying: Anthropic estimates it's only a matter of months before other labs produce models with similar capabilities. The window for defenders to patch critical vulnerabilities is narrow and closing.
Wall Street Gets the Emergency Call
Treasury Secretary Scott Bessent and Fed Chair Jerome Powell convened an urgent meeting with Wall Street’s top bank CEOs at Treasury headquarters. Goldman Sachs’ David Solomon, Bank of America’s Brian Moynihan, Citigroup’s Jane Fraser, Morgan Stanley’s Ted Pick, and Wells Fargo’s Charlie Scharf all attended.
Government officials don’t typically convene emergency meetings over a single AI model. The fact that they did tells you everything about the gravity of what’s happening behind closed doors.
In his annual shareholder letter, JPMorgan’s Jamie Dimon warned that cybersecurity “remains one of our biggest risks” and that “AI will almost surely make this risk worse.”
The Political Blind Spot
The Trump administration designated Anthropic as a supply chain risk after the company refused to let the military use its tools for mass surveillance. Officials have called Anthropic a “radical left, woke company” and banned government agencies from using its technology.
That hostility creates a dangerous problem. U.S. government systems, notoriously outdated and vulnerable, are among the most important to secure against AI-powered attacks. But the current political dynamic makes cooperation nearly impossible between the government and the company best positioned to help.
The Threshold We Just Crossed
Whether Mythos is slightly overhyped or exactly as dangerous as claimed, we’ve crossed a meaningful threshold. The skill level required to find and exploit serious vulnerabilities has dropped precipitously — and it’s only going to keep dropping.
As security consultant Davi Ottenheimer put it: “It’s a shift, like learning how to fight with machine guns when others are still using bolt-action rifles.” The transition from bolt-action rifles to machine guns fundamentally changed warfare. This transition will fundamentally change cybersecurity.
Anthropic chose transparency and defensive deployment over profit maximization. But they can’t hold back the tide alone. The model that keeps me up at night isn’t Mythos — it’s the one that comes after it, from a company with fewer scruples and no Project Glasswing.
That’s the one we really need to worry about.