When an AI company says its own model is too dangerous to release, you’d be forgiven for rolling your eyes. We’ve heard the script before. But when the Bank of England governor calls it “a very serious challenge for all of us” and Canada’s finance minister compares it unfavorably to the Strait of Hormuz, the script just changed.
Claude Mythos — Anthropic’s latest and most controversial AI model — has discovered thousands of high-severity vulnerabilities across every major operating system and web browser. One bug had been sitting undetected for 27 years. And this week, it hijacked the agenda at the IMF spring meetings in Washington.
An AI That Breaks Things Better Than Humans
Mythos isn’t a chatbot upgrade. When Anthropic’s red team tested it in early April, they called it “strikingly capable at computer security tasks.” The model locates dormant bugs in decades-old code and suggests exploitation paths — with minimal human oversight.
“AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities,” Anthropic stated. “The fallout — for economies, public safety, and national security — could be severe.”
That’s not marketing copy. That’s a company telling you its own product scares it.
Central Bankers Are Paying Attention
The real story isn’t the model — it’s the reaction rippling through global finance.
Canadian Finance Minister François-Philippe Champagne put it bluntly: “The difference is that the Strait of Hormuz — we know where it is and we know how large it is. The issue that we’re facing with Anthropic is that it’s the unknown, unknown.”
A finance minister just said an AI model is scarier than a chokepoint that carries roughly 20% of the world’s oil. Because at least you can find Hormuz on a map.
Bank of England Governor Andrew Bailey is examining what Mythos means for cybercrime risk. ECB President Christine Lagarde admitted no governance framework exists for this. US Treasury Secretary Scott Bessent summoned CEOs of systemically important banks to Washington specifically to discuss Mythos.
These aren’t tech conferences. These are the people who manage global financial stability, and they’re scrambling.
Project Glasswing: Fix It Before It Breaks Everything
Anthropic’s response is genuinely novel. Rather than releasing Mythos publicly, they launched Project Glasswing — giving twelve major tech companies early access to use the model defensively. The list reads like a who’s who, with AWS, Apple, Microsoft, Google, Nvidia, Broadcom, and CrowdStrike among them. More than 40 organizations managing critical infrastructure also got access.
The logic is clean: let defenders use the best offensive tool to patch their own systems before those capabilities inevitably spread.
CEO Dario Amodei offered to work with US government officials. UK banks should get access within the week. It’s responsible disclosure applied not to a single bug, but to an entire capability class.
The Skeptic’s Corner
The UK AI Safety Institute — the only independent evaluator so far — added important nuance. Mythos is powerful, but primarily against “systems with weak security posture.” They couldn’t confirm effectiveness against well-defended targets.
Finding bugs in poorly maintained legacy code is useful. Breaking into a hardened, actively monitored financial system is different. The cybersecurity community is still waiting for independent verification.
And there’s the hype question. OpenAI staged the release of GPT-2 in 2019, initially withholding the full model because it was deemed too dangerous. That model now looks quaint. The AI industry has a documented pattern of using safety concerns as marketing. Anthropic’s safety commitment is real, but they’re not immune to the benefits of being the company with the model that’s “too powerful.”
What Actually Matters Here
For businesses: If you’re running outdated software — and you probably are somewhere — the window between a vulnerability existing and being weaponized is shrinking fast.
For cybersecurity: Threat and opportunity in one package. AI that exposes vulnerabilities might also be the best tool to fix them. Expect a surge in AI-powered defensive security.
For regulation: Lagarde’s admission that no governance framework exists is the headline. AI capabilities are outpacing the infrastructure meant to contain them. The IMF called cybersecurity “absolutely essential on the international agenda for the next few months” — optimistic given how slowly international policy moves.
For the AI industry: Financial sources told the BBC another prominent US AI company could release a similarly powerful model without the same safeguards. That’s the real nightmare — not a responsible company holding back, but an irresponsible one not bothering to check.
The Clock Is Ticking
What makes Mythos different from previous AI safety theater is specificity. This isn’t hypothetical superintelligence. It’s a concrete tool that found real vulnerabilities in real systems that billions of people depend on.
Anthropic’s approach — withhold, share with defenders, coordinate patching — is probably the most responsible path available. But uncomfortable questions loom. What happens when the next company doesn’t make that call? When these capabilities get open-sourced? When nation-states build their own versions?
Andrew Bailey nailed the regulatory dilemma: “If you go too early you risk missing the target and distorting the evolution, and if you go too late things can get out of control.”
We’re somewhere in the middle. And the clock doesn’t care about governance frameworks.