The company building the machines that might replace your job now has ideas about how to make that transition less painful. Whether you trust the fox to design the henhouse security system is another question entirely.

OpenAI dropped a 13-page policy paper called “Industrial Policy for the Intelligence Age” proposing four-day work weeks with no pay cuts, a public wealth fund giving every American a stake in AI growth, and shifting the tax burden from labor to capital — including potential “robot taxes.”

From an $852 billion company whose core product automates human cognitive labor, this is either remarkable corporate self-awareness or the most sophisticated PR move in tech history.

A New Deal for the AI Age

OpenAI explicitly compares the current moment to the Progressive Era and New Deal — periods when industrial upheaval forced sweeping policy changes to protect workers from being ground up by progress.

The framework rests on three pillars: distributing AI-driven prosperity broadly, building safeguards against systemic risks, and ensuring widespread access so economic power doesn’t concentrate into too few hands.

The four-day work week proposal is straightforward: if AI makes workers significantly more productive, that productivity gain shouldn’t flow entirely to shareholders. Workers should get some of it back as time.

JPMorgan’s Jamie Dimon has said AI will eventually shave the work week to three and a half days. And four-day-week trials across multiple countries have reported maintained or improved productivity.

But as TechCrunch pointed out, if automation eliminates your job entirely, your employer-subsidized healthcare and retirement match vanish with it. OpenAI proposes portable benefit accounts, but these still depend on employer contributions.

Robot Taxes and a Public Wealth Fund

The most politically charged proposal: higher taxes on corporate income, AI-driven returns, and capital gains at the top. Plus a “robot tax” — automated systems paying taxes equivalent to the human workers they replace.

Remarkable positioning from a company whose executives have funneled hundreds of millions into super PACs supporting light-touch AI regulation.

The public wealth fund would give all U.S. citizens an automatic stake in AI companies and infrastructure — like an Alaska Permanent Fund for the AI age. Everyone gets a dividend from the machines.

On paper, this addresses a real problem. The AI boom has created enormous wealth for shareholders while the broader public watches from the sidelines. Structurally, it would prevent gains from pooling exclusively at the top.

The Critics Aren’t Buying It

Eryk Salvaggio at TechPolicy.Press called the document a “policymercial” — marketing copy dressed as policy. His sharpest observation: many of OpenAI’s proposals mirror provisions of California’s SB1047, an AI safety bill that OpenAI actively lobbied to kill. That bill called for third-party audits, incident reporting, and whistleblower protections. OpenAI’s new paper proposes… auditing regimes, incident reporting, and mechanisms for public input.

“OpenAI has ultimately co-opted the idealism of public infrastructure while actively undermining concrete steps toward it,” Salvaggio wrote.

Former senior AI policy advisor Soribel Feliz was blunter: “Some of these pillars have been the framework for every major AI governance conversation since ChatGPT came out. I worked in the U.S. Senate in 2023–24, and we had nine AI policy sessions where all of this was said.”

The Timing Says Everything

The paper dropped the same day The New Yorker published a year-and-a-half-long investigation into OpenAI that questioned Sam Altman’s trustworthiness. It arrives as the company prepares for an IPO, amid intensifying public anxiety about AI job displacement.

Professor Gina Neff of Cambridge put it perfectly: “The difference now is that OpenAI wants other companies to pay workers more while also paying them for subscriptions to their services.”

What Actually Happens Next

Almost none of this becomes policy anytime soon. The Trump administration has pushed for minimal AI regulation. Congress passed the TAKE IT DOWN Act targeting deepfakes, but broader regulation has gone nowhere.

Oxford Economics’ Adam Slater adds a useful reality check: past technological transitions showed potential for large productivity gains that “can take decades to materialise and can also tail off surprisingly quickly.”

But dismiss this paper entirely at your own risk. Even the company building the displacement machine acknowledges that the current trajectory is unsustainable without structural intervention. When the arsonist starts talking about fire safety, you should probably listen — even if you also watch your matches.

The four-day work week conversation isn’t going away. The question is whether OpenAI actually wants policy to change, or whether it prefers the conversation to remain exactly where it is: perpetually starting.