The White House did something Friday that Silicon Valley has been begging for since ChatGPT went viral: it published a legislative blueprint for a single, unified national AI policy.

The four-page document — the “National Policy Framework for Artificial Intelligence” — tells Congress to override state-level AI regulations, protect children online, streamline energy permitting for data centers, and safeguard free speech from AI-powered censorship.

It’s not a law. It’s a wishlist. But it’s the most concrete signal yet about what federal AI regulation might actually look like — and the implications are enormous.

Six Principles, One Clear Direction

The framework lays out six pillars the White House wants codified into law:

Federal preemption of state AI laws. The headline provision. States would retain traditional police powers — fraud, consumer protection, child safety — but lose the ability to independently regulate how AI models are developed or to penalize developers for how third parties use their models.

Child safety protections. Parental controls and features to combat exploitation and self-harm. This is the bipartisan olive branch — kids’ safety is one of the few issues both parties still agree on.

Energy and infrastructure streamlining. Data centers could generate their own power on-site, bypassing the permitting bottlenecks that have slowed AI infrastructure buildouts across the country.

Intellectual property protections. Addressing AI training data copyright — though details remain conspicuously thin.

Anti-censorship rules. Preventing AI systems from silencing “lawful political expression or dissent.” A recurring Trump administration theme, rooted in concerns about perceived liberal bias in AI outputs.

Workforce development. Educating Americans to be AI-proficient. Reads more like a talking point than a plan.

What’s missing is almost more revealing than what’s included.

The State-Level Chaos This Is Trying to Fix

Over the past two years, states have been racing to fill the federal vacuum. California pushed safety testing requirements on frontier models. New York mandated algorithmic auditing in hiring. Colorado enacted AI discrimination rules. Illinois jumped in. The result: a regulatory patchwork that’s becoming genuinely unworkable for companies shipping AI products nationwide.

“We need one national AI framework, not a 50-state patchwork,” said Michael Kratsios, director of the White House Office of Science and Technology Policy.

He’s not wrong about the problem. If you’re an AI startup shipping to all 50 states, navigating 50 different regulatory regimes is a nightmare. Big Tech can afford armies of compliance lawyers. Startups can’t. A unified framework theoretically levels the playing field.

But look at who’s cheering loudest.

Big Tech’s Biggest Win of the Year

This framework reads like it was drafted with industry input — because it was. David Sacks, the White House AI and crypto czar, has deep Silicon Valley ties. Kratsios served as U.S. chief technology officer during Trump’s first term and has long championed light-touch tech regulation.

The provision that states “should not be permitted to penalize AI developers for a third party’s unlawful conduct involving their models” is the key line. It’s a liability shield. If someone uses a frontier model to generate deepfake revenge porn or orchestrate a scam, the framework says the developer shouldn’t be on the hook.

That’s a massive win for OpenAI, Anthropic, Google, and Meta. And a massive concern for consumer advocates who argue that without liability, there’s no incentive to build meaningful safety guardrails.

We’ve seen this movie before. Social media grew unchecked for a decade before anyone seriously discussed regulation. By then, the industry was too entrenched to meaningfully rein in.

Can It Survive Congress?

Republican House leaders endorsed the framework immediately. Speaker Johnson and Majority Leader Scalise called it a “roadmap” providing “much-needed certainty.”

But actually passing it? Different story. Republican lawmakers tried twice last year to attach a 10-year moratorium on state AI laws to legislation. Failed both times. The Senate remains a graveyard for ambitious tech bills. And Trump has been pushing the GOP to prioritize his voter-ID bill above all else ahead of the November midterms.

Democrats may back the child safety components but resist wholesale preemption — particularly progressives from states like California that have been leading on AI oversight.

This sets up what Axios rightly called “a renewed clash with states and Congress over the future of AI regulation.” It’s going to be messy.

The Glaring Gaps

For a “comprehensive” framework, the holes are hard to miss:

National security barely registers. Remarkable, given the ongoing AI chip export saga with China, the Pentagon’s AI targeting controversies, and months of Anthropic-military headlines. The administration greenlit China-bound exports of Nvidia’s second-most-advanced chips earlier this year — a decision that China hawks in both parties have questioned.

Deepfakes and AI misinformation get almost nothing. In an election year. With AI-generated content becoming indistinguishable from reality.

Labor displacement gets a vague nod toward “workforce development” and nothing else. After months of Meta, Block, and others laying off workers while scaling AI spending, this feels like a dodge.

No new regulatory bodies. The framework explicitly tells Congress not to create new agencies to oversee AI. Existing agencies — FTC, FDA, SEC — would handle enforcement within their current mandates. Whether they have the technical expertise and resources is debatable at best.

Innovation vs. Accountability — Pick One

This framework represents a clear philosophical choice: speed of innovation and global competitiveness over precautionary regulation. It trusts companies to self-govern and markets to self-correct. It views state regulators as obstacles rather than laboratories of democracy.

The AI arms race is real. China is investing massively. The EU’s AI Act has drawn criticism for being too heavy-handed. There’s a legitimate case that America’s edge lies in moving fast.

But the argument that “we can always regulate later” ignores a stubborn reality: once an industry becomes entrenched, regulating it becomes exponentially harder. Every year of light-touch oversight is another year of established facts on the ground.

The truth probably lives somewhere between “let the market decide” and “regulate everything.” This framework leans heavily toward the former. Whether Congress buys it — and whether it survives contact with political reality — will be one of the defining tech policy battles of 2026.

Sources: Reuters, CNBC, Politico, PBS, White House Framework (PDF)