Yesterday we wrote about the White House considering pre-release AI oversight. Today, it’s happening.

The Center for AI Standards and Innovation — CAISI, sitting under the Department of Commerce — has announced formal agreements with Google DeepMind, Microsoft, and Elon Musk's xAI. The deals allow the government to conduct pre-deployment evaluations of the companies' most powerful AI models. Combined with the renegotiated 2024 agreements with OpenAI and Anthropic, every major American AI lab is now operating under some form of government review.

Let that sink in. Under the most deregulation-friendly administration in recent memory.

What CAISI Actually Gets

The agreements authorize “pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security.” In plain English: the government gets early access to frontier models before they ship.

Crucially, this isn’t a veto. CAISI can evaluate, but it can’t block a release. Think of it as a government test drive — they kick the tires, flag concerns, and the companies decide what to do with the feedback.

For now.

The Leadership Vacuum That Made This Possible

Timing matters here. David Sacks — the administration's "AI czar" and champion of the let-them-cook philosophy — left his role in March 2026. That created a vacuum. Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent reportedly stepped in, bringing a decidedly less libertarian approach to AI governance.

Without Sacks pushing back, the national security hawks in the administration gained ground. And they had the perfect ammunition: Anthropic’s Mythos model, which demonstrated the ability to discover thousands of critical zero-day vulnerabilities across major operating systems. When a company voluntarily restricts its own product because it’s too dangerous, even committed deregulators start asking questions.

Voluntary Today, Mandatory Tomorrow?

Here’s where it gets interesting. Behind the CAISI agreements, the administration is reportedly drafting an executive order that could formalize and expand this oversight. A working group bringing together tech executives and government officials is under discussion. The NSA, Office of the National Cyber Director, and Director of National Intelligence could all play roles in model reviews.

A White House official insists talk of an executive order is “speculation.” But the infrastructure is being built in plain sight.

Dean Ball, a former senior adviser on AI in the Trump administration, described the challenge as a “tricky balance” between keeping pace with the technology and avoiding overregulation. That’s the diplomatic version. The blunt version: the administration is trying to build a regulatory apparatus without admitting it’s regulating.

The Industry Split

The reaction breaks along predictable lines.

Daniel Castro of the Information Technology and Innovation Foundation called it a “terrible idea” and a “full embrace of the precautionary principle.” His argument: innovation moves at the speed of Silicon Valley, not Washington. Every product launch, feature update, and model release slows down the moment bureaucrats are in the loop.

Adam Thierer of the R Street Institute warned that pre-release vetting could become “a de facto licensing regime” — and that doing it through executive orders rather than legislation sets a dangerous precedent.

On the other side, Janet Vestal Kelly of the Alliance for a Better Future called it “welcome news,” arguing that “left to their own devices, Big Tech companies will run roughshod over kids, workers, and American values.”

And then there’s the uncomfortable middle ground: some analysts question whether Mythos’s capabilities truly justify the panic, noting that cheaper models can achieve comparable results in vulnerability discovery. If this entire policy shift is built on an exaggerated threat, the foundation gets shaky fast.

What This Actually Changes

For the big labs: Not much immediately. Google, Microsoft, xAI, OpenAI, and Anthropic were all already sharing models with the government; the CAISI agreements formalize what was happening informally. The real question is speed: how long do government evaluations take, and do they create a de facto delay on releases?

For open-source AI: This is the quiet concern. A review process designed for frontier models from billion-dollar labs could create barriers that smaller developers and open-source projects can’t navigate. Nobody’s talking about this yet. They should be.

For the US-China race: Complicated. Proponents say vetting prevents dangerous capabilities from proliferating. Critics say it hands China a speed advantage. The recent Supermicro smuggling scandal — $2.5 billion in Nvidia-chipped servers allegedly funneled to China — suggests the competition has already moved beyond what any review process can contain.

The Real Story

Strip away the politics and what you’re left with is this: AI capabilities have reached a point where even an administration ideologically committed to deregulation can’t ignore the national security implications.

This isn't really a policy reversal. It's reality winning an argument with ideology, which happens slower than it should but always happens eventually.

The interesting question going forward isn’t whether there will be oversight — that’s settled. It’s whether Washington can build a review process that’s fast enough to matter, rigorous enough to catch real threats, and durable enough to survive the next election cycle.

History suggests two out of three is optimistic. But two out of three might be enough.