Remember when the Trump administration called AI a “beautiful baby” that shouldn’t be restrained by “foolish rules”? That was July 2025. Ten months later, the same White House is drawing up plans to vet AI models before they reach the public.
The policy whiplash is real. And it has a name: Mythos.
One Model Changed the Entire Conversation
Anthropic introduced Mythos Preview in April 2026 — then refused to release it publicly. The model demonstrated an unprecedented ability to discover high-severity security vulnerabilities across every major operating system and web browser, outperforming all but the most elite human hackers.
This wasn’t marketing theater. Anthropic backed it up with Project Glasswing, committing $100 million in usage credits to vetted cybersecurity firms for defensive purposes. The message: too dangerous to release broadly, too powerful not to use.
CEO Dario Amodei called it a cybersecurity “watershed moment.” CISA, the Commerce Department, and other agencies have been getting briefings ever since.
From “Kill All Regulation” to FDA Comparisons
The reversal is staggering when you trace the timeline.
Day one back in office, January 2025: Trump revoked Biden’s Executive Order 14110, which had established mandatory red-teaming, cybersecurity protocols, and monitoring requirements for AI in critical infrastructure. Three days later, a new order dropped — “Removing Barriers to American Leadership in Artificial Intelligence.” Innovation first, regulation never.
VP Vance warned at the Paris AI Summit that “excessive regulation” could “kill a transformative industry.” AI czar David Sacks accused Anthropic of “regulatory capture” for even suggesting guardrails.
Then Mythos arrived.
According to the New York Times, the Wall Street Journal, and Axios, the White House held meetings last week in which officials briefed Anthropic, Google, and OpenAI on potential oversight procedures. National Economic Council Director Kevin Hassett told reporters the administration is studying an executive order modeled on the FDA’s approval process.
The FDA. From the administration that treated regulation as a four-letter word.
CAISI Is Already Doing the Work
While the executive order simmers, something concrete has already happened. On May 5, NIST’s Center for AI Standards and Innovation (CAISI) announced agreements with Google DeepMind, Microsoft, and xAI to conduct pre-deployment evaluations of frontier models. They join OpenAI and Anthropic, which signed similar deals during the Biden era.
CAISI has quietly completed more than 40 evaluations of frontier models — including some never publicly released. Developers sometimes share versions with safety guardrails removed, giving CAISI a raw look at actual capabilities.
The catch: it’s all voluntary. The executive order would make it mandatory. That’s a fundamentally different proposition.
The Competitive Problem Nobody Wants to Talk About
Here’s where it gets interesting. OpenAI has developed its own comparable cyber-focused model, releasing it through a trusted-access program. As reported by Politico on May 7, OpenAI explicitly positioned it to challenge Mythos.
This creates a perverse incentive. If Anthropic restricts access to a dangerous model while competitors push theirs out more aggressively, responsible behavior becomes a competitive disadvantage. Government intervention could actually level the playing field, giving cautious companies cover to hold the line without sacrificing market share.
It’s a rare case where regulation might protect the responsible actor.
The Political Ground Shifted Under Everyone’s Feet
David Sacks, the deregulation champion, left his White House post in March. Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent now run the AI agenda with what sources describe as a “more interventionist approach.”
The public mood matches. Pew Research found 50% of Republicans and 51% of Democrats are more concerned than excited about AI’s growing role. Bipartisan worry in 2026 America is practically an endangered species — and here it is, alive and well.
What This Actually Means
For the AI industry, the implications are concrete:
For developers: The “move fast and break things” era may be ending — at least for frontier models. Plan for review processes that add weeks or months to deployment timelines.
For cybersecurity teams: The tools on both sides are about to get dramatically more powerful. The defensive applications of Mythos-class models could be transformative, but so could the offensive ones in the wrong hands.
For businesses: Start planning for a world where your AI vendor’s latest model might need government sign-off before you can use it.
The Real Question
The executive order is reportedly still in early stages. A White House official called the reporting “speculation.” But the direction is unmistakable.
The real question isn’t whether AI vetting happens — it’s what shape it takes. Light-touch voluntary framework? Rigorous FDA-style approval? Something in between? The answer will define the industry for years.
When a company builds something so powerful it scares itself, and the most pro-business administration in memory responds by reaching for the regulatory playbook, we’ve entered genuinely new territory.
The “beautiful baby” is growing up. Turns out, even its biggest fans want a babysitter.