Stop watching Congress if you want to understand where AI regulation is heading. Watch Sacramento.
Governor Gavin Newsom just signed what his office calls a “first-of-its-kind” executive order that tells AI companies something they haven’t heard from American government in a while: prove your technology won’t hurt people, or lose access to the world’s fourth-largest economy.
It’s a direct, unmistakable middle finger to the Trump administration’s deregulatory stance on AI. And it might be the most consequential AI policy move of 2026 so far.
What the Order Actually Does
Cut through the political theater and there’s real substance here:
AI companies seeking California state contracts must certify that their systems include safeguards against three categories of harm: generating illegal content (including CSAM), deploying models with unmitigated harmful bias, and violating civil rights and free speech.
California is decoupling procurement from the federal government. This is the quiet bombshell. If the Pentagon labels an AI company a “supply chain risk” — as it recently did with Anthropic — California reserves the right to run its own independent assessment. The state won’t automatically defer to Washington’s judgment on which AI companies are safe to work with.
State agencies must watermark AI-generated content. California becomes the first state to formalize watermarking requirements for AI-generated images and manipulated video at the executive level.
The order sets a 120-day deadline for proposals for a formal certification framework, complete with attestation requirements for responsible AI governance.
The Anthropic Connection
The timing isn’t subtle. It isn’t meant to be.
Weeks before Newsom’s order, the Department of Defense designated San Francisco-based Anthropic as a “supply chain risk,” effectively blacklisting the company from military contracts. Anthropic’s crime? Refusing to remove contract clauses prohibiting Claude’s use for domestic mass surveillance and fully autonomous weapons.
A federal judge blocked the designation as “likely unlawful” and potentially retaliatory. White House AI czar David Sacks publicly called Anthropic “not always on the side of the angels.”
Newsom’s order is California throwing a protective arm around one of its most valuable startups. The message: even if Washington wants to punish AI companies for having safety principles, California will evaluate them on merit.
And remember — OpenAI, Anthropic, Meta’s AI division, and Google DeepMind’s operations are all headquartered in California. When the state sets procurement standards, the ripple effects extend far beyond Sacramento.
America’s AI Regulation Is Fracturing
Here’s the full picture of what’s colliding:
The White House unveiled a national AI framework in March that takes what critics call a “light-touch” approach — conspicuously omitting any mention of bias, discrimination, or civil rights. Trump has signed executive orders explicitly discouraging states from regulating AI, pushing a single nationwide approach that favors industry self-governance.
But Congress is deadlocked. And states aren’t waiting.
California, Colorado, New York, and Utah are all advancing their own AI governance frameworks. Colorado’s comprehensive AI Act takes effect in June 2026. New York’s RAISE Act is moving through the legislature. The legal question of whether the federal government can actually preempt state AI laws remains unresolved.
Newsom is betting that California’s economic gravity makes its standards the de facto national ones. Just as California emissions standards effectively set the national standard for automobiles, California’s AI procurement requirements could become the floor that responsible AI companies build to — regardless of where their customers sit.
The Political Chess Behind the Policy
This isn’t purely about protecting Californians, and nobody pretends it is.
Newsom is widely expected to run for president. His AI positioning is a careful balancing act between labor unions — who made it explicit in February that they won’t support his bid without stronger worker protections from AI — and Big Tech donors pouring money into California politics.
The order threads the needle: it demands guardrails while simultaneously encouraging state agencies to expand AI adoption. California is already deploying “Poppy,” a generative AI assistant, across more than 20 departments. The order includes a statewide public engagement initiative on AI’s workforce impact — a direct nod to organized labor.
It’s “regulate but accelerate.” Whether that’s genuinely balanced policy or having-it-both-ways politics depends entirely on your priors.
What This Means for AI Companies
If you’re building AI and selling to California — or planning to — here’s the new reality:
Within 120 days, expect a formal certification framework. You’ll need to attest to responsible AI governance, demonstrate safeguards against illegal content, show bias mitigation, and prove civil rights compliance.
Watermarking is mandatory for AI-generated images and video destined for state use.
Federal blacklists don’t automatically apply. Companies caught in Washington’s political crossfire — like Anthropic — can still compete for California business on merit.
The opportunity is growing alongside the requirements. California wants more AI in government, not less. Companies that can demonstrate responsible practices will find an eager customer.
Three Things to Watch
The certification framework (due in ~120 days): How stringent will the actual requirements be? Will they mirror the EU AI Act’s risk-based approach or chart a new course?
The Anthropic case: The preliminary injunction blocking the Pentagon’s supply chain risk designation is temporary. A full ruling could reshape the relationship between AI companies, the military, and state governments.
Colorado’s AI Act taking effect in June: If California and Colorado are both enforcing meaningful AI regulations by mid-2026, the pressure on Congress to act — or at least stop pretending states don’t exist — becomes immense.
The Bottom Line
There is no coherent national AI policy. There probably won’t be one anytime soon.
For AI companies, the calculus is straightforward: build to California’s standard, and you can operate everywhere. Build to the lowest common denominator, and you risk being locked out of the largest subnational economy on the planet.
The future of AI regulation in America won’t be decided in one place. It’s being decided in dozens of state capitals, courtrooms, and procurement offices simultaneously.
Right now, Sacramento is leading. Washington is watching. And the clock is ticking.