There are exactly zero things Steve Bannon and Susan Rice agree on. Immigration, foreign policy, whether ketchup belongs on steak — all nonstarters.
But AI? Turns out, that’s the one.
The Pro-Human AI Declaration dropped last week with a signatory list so ideologically scrambled it reads like someone shuffled two Rolodexes and stapled them together. Bannon. Rice. Glenn Beck. Ralph Nader. Richard Branson. The AFL-CIO. SAG-AFTRA. Turing Award laureate Yoshua Bengio. Nobel economist Daron Acemoglu.
All signing the same document. All saying the same thing: humans first, machines second, no exceptions.
A Secret Meeting Nobody Saw Coming
The backstory is almost wilder than the coalition itself. In early January, about 90 political, community, and thought leaders gathered at a Marriott in New Orleans for a conference organized by Max Tegmark’s Future of Life Institute. Nobody knew who else had been invited until they walked into the room.
Church leaders sat next to union reps. Conservative academics found themselves across the table from the people who drafted Bernie Sanders to run for president. MAGA commentators and Signal Foundation president Meredith Whittaker were breathing the same air.
The deliberate omission? No one from the AI industry was invited. No Sam Altman. No Elon Musk. No Sundar Pichai. FLI director Emilia Javorsky called it “a very deliberate design choice” — at past conferences, corporate interests inevitably dominated. This time, the room belonged to the people AI would actually affect.
Five Pillars With Actual Teeth
This isn’t another vague hand-wave about “responsible innovation.” The Declaration lays out five concrete demands:
Mandatory off-switches. Every AI system needs meaningful human oversight, especially in healthcare, criminal justice, and military operations. No autonomous decision-making in life-or-death scenarios.
No AI monopolies. Democratic authority over major technological transitions. In a world where a handful of companies control the most powerful AI on Earth, this is a shot straight across Big Tech's bow.
Protect children first. Mandatory pre-deployment testing for AI products aimed at younger users — covering suicidal ideation, mental health deterioration, and emotional manipulation. Tegmark put it bluntly: “If some creepy old man is texting an 11-year-old, the guy can go to jail. So why is it different if a machine does it?”
No AI personhood. No granting legal rights to algorithms, and no mass surveillance without democratic consent.
Corporate liability. Companies that build AI systems should be legally liable for the harm those systems cause. Full stop.
And the big one: an outright ban on developing superintelligent AI until there’s scientific consensus it can be done safely and genuine democratic buy-in. No self-replicating architectures. No autonomous self-improvement. No systems that resist shutdown.
The Timing Was Uncanny
The Declaration was finalized before the Pentagon-Anthropic standoff erupted in late February, but it landed like prophecy. When Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk" for refusing to give the Pentagon unlimited access to Claude, and OpenAI swooped in with its own largely unenforceable Pentagon deal, the episode proved exactly what the Declaration's authors had been warning about.
There are no rules. The most powerful technology in human history is being developed, deployed, and weaponized in a complete regulatory vacuum.
“This is not just some dispute over a contract,” Dean Ball of the Foundation for American Innovation told the New York Times. “This is the first conversation we have had as a country about control over AI systems.”
That conversation is happening. Just not in Congress.
The FDA Play
Tegmark’s most compelling argument comes down to a simple analogy: drugs.
“You never have to worry that some drug company is going to release something that causes massive harm before people have figured out how to make it safe,” he told TechCrunch, “because the FDA won’t allow them to release anything until it’s safe enough.”
His strategy is incremental. Start with mandatory testing for children’s AI products — the one thing nobody can politically oppose — then expand the scope. Maybe test that AI can’t help terrorists make bioweapons. Maybe test that superintelligence can’t overthrow the government.
Recent polling he cited shows 95% of Americans oppose an unregulated race to superintelligence. That’s not a partisan split. That’s near-unanimity.
Will It Actually Matter?
Here’s the uncomfortable part: declarations don’t pass laws. FLI’s previous attempt — the Asilomar AI Principles in 2017 — was signed by Altman, Musk, Hassabis, and Stephen Hawking. Nearly a decade later, Congress has passed exactly zero comprehensive AI legislation.
But the Pro-Human Declaration is different in one crucial way: it deliberately excluded industry voices. It’s not asking tech CEOs to police themselves. It’s a coalition of everyone else — workers, parents, faith leaders, military officials, academics, activists — telling the industry what the rules should be.
As Joe Allen, a senior fellow at Humans First, put it when reflecting on the meeting: “We will not have the luxury of debating all of those other issues if we don’t get this thing right. So let’s get this thing right.”
The Bottom Line
The Pro-Human AI Declaration won’t save us by itself. But it represents something genuinely new: broad bipartisan consensus that the current trajectory is unacceptable, assembled by the people who’ll be most affected rather than the companies doing the affecting.
If Steve Bannon and Susan Rice can find common ground, maybe Congress can too.
But I wouldn’t hold my breath.