An AI that finds security flaws faster than every human hacker on Earth combined. An AI that’s already found thousands of them — in your operating system, your browser, your bank’s software.

That’s not a thought experiment. That’s Claude Mythos, and it’s forcing everyone from Fortune 500 CEOs to the Trump White House to completely rethink AI.

Thousands of Zero-Days, One Model

Claude Mythos Preview is Anthropic’s latest frontier model. It wasn’t built for cybersecurity — it’s general-purpose. But during internal testing, Anthropic discovered something startling: Mythos finds and exploits software vulnerabilities better than virtually any human alive.

The numbers are jaw-dropping. In pre-release testing, Mythos identified thousands of previously unknown zero-day vulnerabilities across every major operating system and browser. It developed working exploits on the first attempt in over 83% of cases. In one particularly dramatic example, it uncovered a 27-year-old vulnerability in OpenBSD — an operating system literally famous for its security hardening.

That bug survived decades of expert human review and millions of automated tests. Mythos found it like it was nothing.

Project Glasswing: The Controlled Detonation

Rather than release Mythos publicly, Anthropic launched Project Glasswing — a defensive coalition that reads like a tech all-star roster: AWS, Apple, Microsoft, Google, CrowdStrike, Palo Alto Networks, JPMorgan Chase, NVIDIA, Cisco, Broadcom, and the Linux Foundation.

These partners, plus roughly 40 organizations maintaining critical software infrastructure, get exclusive access to Mythos for defensive work. Anthropic committed up to $100 million in usage credits and $4 million in direct donations to open-source security organizations.

The logic: give defenders a head start before comparable capabilities inevitably proliferate. Anthropic CEO Dario Amodei warned of a six- to twelve-month window to patch tens of thousands of vulnerabilities before Chinese AI labs develop comparable tools.

Mozilla already put Mythos to work, finding 271 vulnerabilities in Firefox — including high-severity bugs that had lain dormant for over a decade. All patched. Permanently removed from attackers’ arsenals.

The most significant ripple is political. The Washington Post reported that Mythos has begun to crack the Trump administration’s hard-line stance on promoting AI without guardrails. Top officials are confronting the reality that a purely hands-off regulatory approach doesn’t cut it when AI can find critical infrastructure flaws at scale.

This is a remarkable reversal. The same White House that aggressively rolled back AI regulations is now considering new government oversight of future models. Microsoft, Google, and xAI have agreed to let the government, through NIST, test their AI models before launch.

But Wait — Is Mythos Actually Special?

Here’s the plot twist: maybe not.

Cybersecurity experts are pushing back hard. Ben Harris, CEO of watchTowr, told CNBC that “people are able to reproduce the vulnerabilities found with Mythos through clever orchestration of public models.” Klaudia Kloc of Vidoc went further — the capability to detect zero-days at scale has existed “for a couple of months, if not a year.”

The UK’s AI Security Institute found that OpenAI’s GPT-5.5, already generally available, has comparable vulnerability-finding capabilities. OpenAI responded this week with GPT-5.5-Cyber, a model specifically tailored for cybersecurity with limited access for vetted teams.

Bruce Schneier raised the sharpest question in The Guardian: is Anthropic making a virtue out of necessity? Mythos is reportedly very expensive to run, and Anthropic may not have the resources for a general release. “What better way to juice the company’s valuation than to hint at capabilities but not prove them?”

The Real Problem: Discovery Isn’t Remediation

Set aside the uniqueness debate. There’s a more fundamental issue every security team should lose sleep over: finding vulnerabilities is the easy part.

After discovery, you need to understand the business context, determine whether the vulnerability is reachable in production, prioritize it against thousands of other findings, route it to the right developer, track the fix, and verify that the patch doesn't introduce new problems.
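To make the scale of that workflow concrete, here's a minimal triage sketch in Python. Everything in it is illustrative — the `Finding` fields, the scoring weights, and the priority formula are hypothetical simplifications, not any real tool's logic:

```python
from dataclasses import dataclass

# Hypothetical triage model: all field names and weights are
# illustrative, not taken from any real vulnerability scanner.
@dataclass
class Finding:
    cve_id: str
    severity: float           # e.g. a CVSS-style base score, 0-10
    reachable: bool           # is the vulnerable code reachable in production?
    asset_criticality: float  # business-impact weight, 0-1

    def priority(self) -> float:
        # Unreachable findings drop to the bottom of the queue.
        if not self.reachable:
            return 0.0
        return self.severity * self.asset_criticality

def triage(findings: list[Finding]) -> list[Finding]:
    """Order findings so fixers see the riskiest ones first."""
    return sorted(findings, key=lambda f: f.priority(), reverse=True)

findings = [
    Finding("CVE-A", severity=9.8, reachable=False, asset_criticality=0.9),
    Finding("CVE-B", severity=7.5, reachable=True,  asset_criticality=0.8),
    Finding("CVE-C", severity=5.0, reachable=True,  asset_criticality=1.0),
]

for f in triage(findings):
    print(f.cve_id, round(f.priority(), 1))
```

Even this toy version shows the problem: a "critical" CVE that isn't reachable in production can rank below a moderate one that is — and every step after the sort (routing, tracking, verification) is still human work.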

Now multiply that workflow by thousands. We’re looking at a vulnerability tsunami where AI discovers flaws orders of magnitude faster than organizations can fix them.

And here’s the uncomfortable truth: lots of systems aren’t patchable. Legacy software, embedded systems, IoT devices, critical infrastructure running decades-old code — these don’t get convenient security updates. Many Mythos-discovered vulnerabilities will stick around for years.

Offense vs. Defense in the AI Era

Short term, this makes the world more dangerous. Finding and exploiting vulnerabilities is easier than finding and fixing them. There will be a messy, dangerous transition period where AI-enhanced attackers have real advantages.

But the endgame likely favors defenders. AI that automatically finds and fixes vulnerabilities during development means fundamentally more secure software. The same capabilities that make Mythos terrifying today will be baked into every IDE and CI/CD pipeline tomorrow. Future software won’t ship with 27-year-old bugs because AI will catch them before the first commit.
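What "baked into the CI/CD pipeline" might look like is straightforward: a gate that fails the build when a scanner reports findings above some severity threshold. This is a hedged sketch — `security_gate` and the finding format are invented for illustration, and in practice the findings would come from an AI scanner's output rather than inline data:

```python
# Hypothetical CI security gate: block the merge when any finding
# exceeds a severity threshold. The finding schema is illustrative.
def security_gate(findings: list[dict], max_severity: float = 7.0) -> int:
    """Return a nonzero exit code if any finding should block the build."""
    blockers = [f for f in findings if f["severity"] >= max_severity]
    for f in blockers:
        print(f"BLOCKED: {f['id']} (severity {f['severity']})")
    return 1 if blockers else 0

# In a real pipeline these would be parsed from scanner output;
# here we use inline sample data.
sample = [
    {"id": "VULN-1", "severity": 9.1},
    {"id": "VULN-2", "severity": 4.2},
]
print("exit code:", security_gate(sample))
```

The exit code is the whole point: a nonzero return stops the pipeline, so a flaw like that 27-year-old OpenBSD bug never reaches a release branch in the first place.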

The question isn’t whether AI will transform cybersecurity — it already has. The question is whether we survive the transition with critical infrastructure intact.

Anthropic is betting that a controlled, defender-first rollout gives the good guys enough of a head start. The skeptics say the genie is already out of the bottle.

Both sides might be right. And that’s what makes this the most consequential AI story of 2026 so far.