When Vice President JD Vance, Treasury Secretary Scott Bessent, and Fed Chair Jerome Powell all clear their schedules in the same week to discuss a single AI model, something fundamental has shifted.
That’s exactly what happened in the days around Anthropic’s limited release of Claude Mythos Preview. The U.S. government treated an AI model launch like a national security event — because it is one.
Two Emergency Meetings, One Week
The first gathering pulled together the heaviest hitters in tech: Anthropic CEO Dario Amodei, Elon Musk, Sundar Pichai, Sam Altman, Satya Nadella, plus the heads of CrowdStrike and Palo Alto Networks. The agenda wasn’t benchmarks or market share. It was scenario-planning for a world where AI tips the cybersecurity balance toward attackers.
Then came meeting number two. Bessent and Powell called a surprise session with the CEOs of Bank of America, Citigroup, Goldman Sachs, Morgan Stanley, and Wells Fargo. These bank chiefs were already in D.C. for a Financial Services Forum board meeting when Treasury interrupted their dinner to brief them on the threat Mythos poses to the financial system.
When the Fed Chair crashes your dinner plans to talk about an AI model, the threat assessment is not theoretical.
Why Mythos Spooked Everyone
Anthropic didn’t build Mythos to be a hacking tool. They trained it to be exceptional at code. But as Dario Amodei put it: “As a side effect of being good at code, it’s also good at cyber.”
That’s an understatement. In weeks of testing, Mythos identified thousands of vulnerabilities across websites, applications, and every major operating system and browser in use today. One highlight: it found a 16-year-old vulnerability in FFmpeg, sitting in code that traditional security tools had scanned more than 5 million times without catching the flaw.
The benchmark gap tells the story. Mythos solved 93.9% of SWE-bench Verified problems versus Opus 4.6’s 80.8%. On the harder SWE-bench Pro: 77.8% versus 53.4%. This isn’t incremental. It’s a capability jump.
The part that really spooked officials: Mythos doesn’t just find vulnerabilities. It immediately develops sophisticated exploits for them. Anthropic’s frontier red team lead told WIRED the model can accomplish things “that a senior security researcher would be able to accomplish.” In the wrong hands, that’s a weapon.
Project Glasswing: 45 Companies, $100 Million
Instead of quietly releasing Mythos into the wild, Anthropic did something unprecedented. They convened Project Glasswing — a defensive consortium of more than 45 organizations including Apple, Google, Microsoft, AWS, Nvidia, Cisco, CrowdStrike, JPMorgan Chase, and the Linux Foundation.
Anthropic is backing it with $100 million in free API credits so partners can scan their own systems without cost barriers. The concept borrows from coordinated vulnerability disclosure, except the “vulnerability” is every piece of software on the planet.
“Many of the assumptions that we’ve built the modern security paradigms on might break,” said Logan Graham, Anthropic’s red team lead. Microsoft’s global CISO acknowledged it would help “identify and mitigate risk early.” Cisco’s security chief called it “a profound shift” and “a clear signal that the old ways of hardening systems are no longer sufficient.”
The Political Paradox
Here’s where it gets genuinely bizarre.
The same government that summoned tech CEOs to discuss Mythos is also actively trying to destroy Anthropic’s government business. Trump and Defense Secretary Pete Hegseth attacked Anthropic for refusing to let Claude power autonomous weapons. The DOD labeled Anthropic a “supply chain risk,” essentially blacklisting them. Trump ordered federal departments to halt use of Anthropic’s platforms.
Yet when Mythos emerged, it was Vance and Bessent who called Anthropic to the table alongside competitors. The government wants to punish Anthropic for being too cautious with military AI while simultaneously depending on that same caution to protect national infrastructure.
The DOD has even continued using Claude during the Iran conflict despite the official ban. Federal appeals courts are issuing contradictory rulings. It’s a mess — and a perfect illustration of how AI governance has outpaced the institutions trying to manage it.
What This Changes
“Too powerful to release” is now a real category, not a hypothetical. Critics dismissed the idea for years. But when a model finds exploits that 5 million automated scans missed, the risk calculus changes permanently.
Industry self-regulation just got a serious test case. Project Glasswing is the AI industry policing itself at Anthropic’s invitation. If it works, it becomes a template. If it doesn’t, expect government mandates.
The cybersecurity offense-defense balance is shifting. Today it’s Mythos. In 6-12 months, similar capabilities will exist across multiple models from multiple companies. The window for building defenses is shrinking fast.
AI governance is now a whole-of-government concern. Treasury, the Fed, the White House, CISA, the Pentagon — all engaging with a single model release. That level of cross-agency coordination doesn’t happen for software. It happens for crises.
The Bottom Line
An AI company built something so powerful that the Vice President, Treasury Secretary, and Fed Chair all had to get involved before it could reach even a curated group of partners. The biggest competitors in tech set aside their rivalries to join a defensive consortium. And the whole thing is happening while the government simultaneously tries to kneecap the company that built it.
We’re not debating whether AI models will reshape cybersecurity anymore. Mythos already proved they will. The only question is whether we can build defenses fast enough to keep up.