An AI just found security holes that human hackers missed for 27 years. And the company that built it says you can’t have it.
This week, Anthropic did something the AI industry almost never does: it announced its most powerful model ever and simultaneously refused to release it. Claude Mythos Preview isn’t locked behind a paywall or a waitlist. It’s locked behind a vault door, with access restricted to a handpicked coalition of tech giants scrambling to patch the internet before someone else builds something similar.
The reason? Mythos autonomously discovered thousands of zero-day vulnerabilities across every major operating system and browser. Some of these bugs have been hiding in plain sight for decades. It then figured out how to exploit them — no human guidance required.
When the Fed Chair and Treasury Secretary pull Wall Street CEOs into an emergency meeting over an AI model, you know this isn’t marketing theater.
What Makes Mythos Different
Claude Mythos Preview isn’t a cybersecurity tool. It’s a general-purpose frontier model — Anthropic’s biggest and most capable ever — that happens to be devastatingly good at reading code and finding weaknesses.
The numbers are staggering. Thousands of previously unknown vulnerabilities across Windows, macOS, Linux, Chrome, Firefox, and Safari. One bug it found in OpenBSD was 27 years old. Not a theoretical vulnerability. A working exploit, developed autonomously by the model.
“AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities,” Anthropic stated. Coming from a company known for measured language, that sentence lands like a brick.
The critical detail: Mythos didn’t need anyone pointing it at suspicious code. It scanned entire codebases, identified weak points buried in millions of lines, and weaponized them. Previous AI models could analyze code snippets you fed them. Mythos hunts on its own.
Project Glasswing: $100 Million to Patch the Internet
Instead of shipping Mythos to customers, Anthropic launched Project Glasswing — a defensive coalition of 12 core partners including Apple, Microsoft, Google, AWS, NVIDIA, CrowdStrike, JPMorgan Chase, and the Linux Foundation. Another 40+ organizations will get access to scan critical infrastructure.
Anthropic is committing up to $100 million in usage credits plus $4 million in direct donations to open-source security groups. The logic is simple: find and fix the bugs before attackers develop similar capabilities.
CEO Dario Amodei framed it as a race against time. “The dangers of getting this wrong are obvious,” he wrote, “but if we get it right, there is a real opportunity to create a fundamentally more secure internet.”
It’s a compelling pitch. It also means Anthropic is betting the entire internet’s security on the assumption that it can patch faster than adversaries can build.
The Emergency Meeting That Tells You Everything
The same day Mythos launched, Treasury Secretary Scott Bessent and Fed Chair Jerome Powell convened an unplanned meeting with the CEOs of Bank of America, Citigroup, Goldman Sachs, Morgan Stanley, and Wells Fargo. The bank executives were already in D.C. for a Financial Services Forum board meeting when they got the call to come to Treasury.
Let that sink in. The two most powerful financial regulators in America called a surprise briefing over an AI model’s capabilities. That doesn’t happen for hype cycles. That happens when people with access to classified intelligence are genuinely rattled.
Banks are especially exposed here. The financial system runs on software — old software, often decades old — and a tool that can autonomously find exploitable vulnerabilities in legacy code is essentially a skeleton key to global finance.
The Catch: It Might Already Be Too Late
Here’s where it gets uncomfortable. Cybersecurity researchers at AI security firm AISLE demonstrated that several vulnerabilities Anthropic highlighted could be detected by freely available open-source models. The difference? Researchers had to know which code segments to examine. They couldn’t just unleash a smaller model on an entire codebase the way Mythos can.
That gap is closing fast. Charlie Eriksen of Aikido Security put it bluntly: “Anybody with a computer can develop very powerful offensive cyber capabilities in a short amount of time, without needing a lot of expertise in cybersecurity.”
OpenAI is reportedly preparing its own cybersecurity-focused model, internally called “Spud,” with a similarly cautious rollout. The capability is proliferating whether anyone wants it to or not. Anthropic may be buying time with Project Glasswing, but the clock is already ticking.
Politics Making a Bad Situation Worse
In a rational timeline, the U.S. government would be working directly with Anthropic to harden federal systems. Instead, they’re in court.
The Trump administration labeled Anthropic a “supply chain risk” — reportedly retaliating after the company refused to let its AI be used for autonomous targeting or mass surveillance of Americans. President Trump has publicly called Anthropic a “radical left, woke company.” A federal appeals court this week denied Anthropic’s request to block the Pentagon blacklisting.
So the company that just discovered thousands of critical vulnerabilities in the world’s most important software is simultaneously banned from working with the Department of Defense. The DOD has reportedly continued using Claude during the Iran conflict anyway, but the official hostility means some of the government’s most vulnerable systems — running on the most outdated software — may be the last to get patched.
You can’t make this stuff up.
What This Actually Means for You
Every piece of software you use daily — your browser, your OS, your banking app — likely contains vulnerabilities that models like Mythos can find. The question isn’t whether AI-driven cyberattacks will increase. It’s whether defenses can keep pace.
For businesses:
- Patch cycles need to compress dramatically. AI finds bugs in hours that humans missed for years.
- Open-source dependencies are now critical attack surface. Audit your stack.
- AI-powered defense isn’t optional anymore. If attackers have AI, you need it too.
- Cybersecurity budgets that looked generous last quarter are probably insufficient today.
For everyone else: Update your software. Use unique passwords. Enable multi-factor authentication everywhere. Assume any system you use could be compromised.
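On the “unique passwords” point: one minimal sketch of what that advice means in practice, using only Python’s standard library (`secrets`, which is designed for cryptographic randomness, unlike `random`). The function name and length are illustrative choices, not a recommendation from any of the companies mentioned above.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a high-entropy password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate a fresh password for each account -- never reuse one.
print(generate_password())
```

In practice a password manager does this for you and remembers the result; the point is that each credential should be random and used exactly once.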
The Arms Race Is Here
Anthropic’s Mythos moment crystallizes something the industry has been dancing around: AI models are now better than most humans at hacking. Not theoretically. Demonstrably.
The optimistic read is that defenders have a genuine window. Every vulnerability Glasswing patches is one less weapon available to attackers. The pessimistic read is that this race is fundamentally asymmetric. Defenders need to find and fix every hole. Attackers only need one.
We’re entering an era where internet security depends on a handful of AI companies making responsible deployment choices — and on governments being smart enough to cooperate rather than wage political wars with the people trying to help.
Based on this week’s evidence, I wouldn’t bet on the second part.