OpenAI’s next flagship AI model is codenamed after a potato. And it might be the most important thing the company has built since ChatGPT.

Greg Brockman, OpenAI’s president, confirmed on the Big Technology podcast that Spud has finished training. It’s not an incremental GPT-4o update. It’s an entirely new base model — nearly two years of research condensed into what Brockman calls a “major step toward AGI.”

The timing is no accident. Spud lands in the same week as OpenAI’s $122 billion fundraise, the death of Sora, and Anthropic’s alarming Mythos leak. Welcome to the most consequential week in AI this year.

What We Actually Know

The confirmed details from Brockman:

  • Training is complete — one of the final milestones before public release
  • It’s a completely new base model, not a fine-tuned derivative
  • It represents two years of research reaching fruition
  • It’s designed as the foundation for all future OpenAI models

Brockman introduced an interesting concept: “big model smell.” When models get genuinely smarter, there’s a qualitative shift. They understand what you actually want rather than what you literally typed. Less prompt wrestling, more actual work getting done.

What we don’t know: benchmarks, parameter counts, pricing, release date. Anyone citing specifics is guessing.

OpenAI Killed Sora for This

The underreported angle here is what OpenAI sacrificed. They shuttered Sora — their billion-dollar video generation bet — and torpedoed a Disney licensing deal in the process. Disney was reportedly blindsided.

Brockman’s explanation was blunt: video generation was a distraction from the real prize. “We have definitively answered that question,” he said about how far text intelligence can go. “It is going to go to AGI.”

That’s a massive bet. Sora was one of OpenAI’s most publicly exciting products. Killing it means the company believes the road to AGI runs through language and reasoning — not through generating pretty videos.

A Three-Way Collision

Spud isn’t arriving in a vacuum. This week alone:

Anthropic’s Mythos leaked days before Spud news broke. Anthropic described it as a “step change” in capabilities — then privately warned government officials it could make large-scale cyberattacks significantly more likely in 2026. CNN called it a potential “watershed moment” for cybersecurity, and not in a good way.

Microsoft launched three new foundation models — MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2. It's the company's most aggressive push yet to build an AI stack independent of OpenAI. Forbes called it a "meaningful hedge" against the OpenAI dependency.

Three major players, three massive moves, one week. This isn’t coincidence. It’s a capability race going full throttle.

What Spud Actually Changes

If it delivers on even half the hints, here’s what shifts:

Better agents. The biggest AI bottleneck isn’t raw intelligence — it’s reliability over multi-step tasks. Current models drift, compound errors, and can’t recover from surprises. A stronger base model directly improves every agent built on top of it.
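
The compounding-error point is just arithmetic: if each step of a task succeeds independently with some probability, overall task success decays exponentially with task length. A toy sketch (the per-step rates here are made-up round numbers, not measured model accuracies):

```python
# Illustrative only: how per-step reliability compounds over a multi-step agent task.
# The 0.95 and 0.99 figures are hypothetical, not benchmarks of any real model.

def task_success_rate(per_step: float, steps: int) -> float:
    """Probability of completing `steps` independent steps with no error."""
    return per_step ** steps

for per_step in (0.95, 0.99):
    for steps in (5, 20, 50):
        rate = task_success_rate(per_step, steps)
        print(f"{per_step:.2f} per step over {steps:2d} steps -> {rate:.1%} task success")
```

A model that's 95% reliable per step finishes a 50-step task less than 8% of the time; at 99% it's around 60%. That's why a modest gain in base-model reliability shows up as a large gain in agent usefulness.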

Less prompt engineering. That “big model smell” concept points to something real. A model that understands context and intent — rather than pattern-matching your prompt — eliminates the friction that makes AI tools frustrating for normal people.

Longer, reliable context. Current models degrade at the edges of their context windows. Contract review, research synthesis, large codebases — all suffer. A next-gen model should push these limits hard.

Natural interaction. Brockman described a future where people reach for AI "without thinking very much" — like opening a browser. That requires a depth of understanding that current models don't quite achieve.

The $122 Billion Elephant

Let’s be honest. OpenAI has been talking about AGI for years. Sam Altman’s timeline claims make most independent researchers wince.

Vanity Fair nailed it: “These kinds of leaks inevitably inspire breathless credulity on one side and cynical takes about the endless, investment-seeking hype cycle on the other. The reality is probably somewhere in between.”

An anonymous OpenAI researcher offered a measured take: “All newer models are better than older ones — and models have been close to or exceeded human intelligence for quite some time.” True, and also a non-answer. Getting better and achieving AGI are different claims.

The $122 billion fundraise adds complexity. OpenAI needs to justify that valuation, and nothing juices investor confidence like AGI talk. That doesn’t mean Spud isn’t impressive. It means we should evaluate with clear eyes.

Worth noting: OpenAI renamed its product group to “AGI Deployment.” It launched a $1 billion foundation for medical research. It’s preparing policy papers about “rethinking the social contract” for the superintelligence era. These aren’t moves from a company shipping an incremental update. Whether they’re right about the magnitude — that’s the trillion-dollar question.

What to Watch Next

OpenAI plans to release Spud alongside policy papers on economic disruption and industrial policy, led by Altman, chief futurist Joshua Achiam, and VP of global affairs Chris Lehane. The papers will include “conversation starters” about wealth redistribution and building “superintelligence that works for everyone.”

Given Altman's previous UBI study, in which benefits faded by years two and three, it's fair to question whether these proposals are substantive or performative. But coupling a model launch with policy papers suggests OpenAI believes Spud's capabilities will be disruptive enough to need the conversation.

Meanwhile, Anthropic’s Mythos lurks with its alarming capability profile, and Microsoft is quietly building a parallel stack. The next few weeks could reshape the entire industry’s competitive dynamics.

Whatever Spud ends up being called when it ships, the boring potato name will be the least interesting thing about it.