The same administration that revoked Biden’s AI executive order on Day One is now considering something arguably more interventionist: mandatory government review of AI models before they hit the public.
If that sounds like whiplash, buckle up.
What’s on the Table
According to the New York Times, Reuters, and Bloomberg, the White House is discussing an executive order that would create a formal review process for new AI models before release. The NSA, Office of the National Cyber Director, and Director of National Intelligence would oversee evaluations — granting the government early access to frontier models without necessarily blocking their deployment.
Think evaluation, not veto power. At least for now.
The administration has already briefed executives from Anthropic, Google, and OpenAI. A White House official called the reporting “speculation,” noting that “any policy announcement will come directly from the president.”
Anthropic’s Mythos Broke the Ideological Dam
The catalyst has a name: Mythos. Anthropic’s model demonstrated the ability to identify thousands of critical software vulnerabilities — including zero-days across major operating systems and browsers. The company deemed it too dangerous for unrestricted release. Cybersecurity experts agreed.
That got Washington’s attention in a way that abstract AI safety arguments never could. The NSA has already used Mythos to assess vulnerabilities in government Microsoft deployments. When a single AI model can map the attack surface of critical infrastructure, national security concerns override deregulation instincts.
The timing matters too. David Sacks, the administration’s “AI czar” and chief deregulation evangelist, left in March. Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent have since taken the wheel — and they’re less ideologically wedded to the pure hands-off playbook.
The 180-Degree Turn
Let’s spell out the magnitude of this reversal:
January 2025: Trump revokes Biden’s AI order requiring developers to share safety test results with the government before release. The message: America wins by getting out of the way.
July 2025: AI blueprint loosens environmental rules and expands exports to allies. Full speed ahead.
May 2026: The same administration discusses mandatory pre-release government review of frontier models.
Sixteen months from "America wins by getting out of the way" to "actually, let us look at that first."
The Obvious Problem Nobody’s Solving
Critics raise a point that should make everyone uncomfortable: in an administration known for rewarding loyalty and punishing critics, who decides what’s “too dangerous”?
As former NYU chief AI architect Conor Grennan put it: "What if an AI leader criticizes Trump — is that model suddenly 'too dangerous'?"
That’s not paranoia. It’s the inherent risk of giving any executive branch discretionary veto power over technology releases. The politicization potential is enormous.
On the other side, the Information Technology and Innovation Foundation called it a “terrible idea” that would mean “firms need government permission to innovate.” The R Street Institute warned it amounts to “a de facto licensing regime” that shouldn’t exist via executive order alone.
What This Actually Tells Us
Strip away the politics and one fact remains: AI capabilities have advanced to the point where the most pro-business, anti-regulation administration in modern history feels compelled to act.
When Chinese labs ship four competitive frontier models in twelve days. When a single model can map zero-days across critical infrastructure. When the gap between “impressive demo” and “genuine national security threat” narrows to nothing — ideology bends to reality.
The administration is learning what the safety community has argued for years: the choice isn’t between innovation and safety. It’s between planned governance and reactive governance after something goes catastrophically wrong.
What Happens Next
Nothing's final. The executive order could drop this week or quietly stall. The tech lobby is mobilizing. Congressional Republicans face awkward questions about consistency.
But the signal matters more than the specifics. If this administration — the one that tore up AI guardrails on Day One — now believes pre-release review is worth discussing, we’ve crossed a threshold that doesn’t uncross.
The question was never whether AI oversight arrives. It’s whether it arrives as thoughtful policy or as a panicked reaction to whatever comes after Mythos.
Given Washington’s track record, don’t bet on thoughtful.