When Mira Murati quietly left OpenAI in late 2024, the AI world held its breath. What she has been building since remains mostly under wraps, but Nvidia just wrote a very large check to prove it matters.

On Tuesday, Nvidia and Thinking Machines Lab announced a multiyear strategic partnership: a “significant investment” from Nvidia plus a commitment to deploy at least one gigawatt of next-gen Vera Rubin systems starting early 2027. Industry estimates peg the infrastructure cost at roughly $50 billion.

This isn’t another AI funding headline. It’s a signal flare about where the industry is heading — and the uncomfortable questions nobody’s answering.

From OpenAI’s CTO to Running the Hottest Lab in AI

Murati’s trajectory reads like a Silicon Valley thriller. As OpenAI’s CTO, she was the public face of ChatGPT’s launch and briefly served as interim CEO during the Sam Altman boardroom chaos of November 2023. She departed in September 2024, launched Thinking Machines Lab as a public benefit corporation by February 2025, and hasn’t slowed down since.

The pitch: build AI that’s “more widely understood, customizable and generally capable.” Deliberately vague. The company has revealed only one product — Tinker, an API for fine-tuning AI models that launched last October.

Despite the secrecy, a $2 billion seed round led by Andreessen Horowitz last July valued the company at $12 billion. One of the largest seed-stage valuations in tech history. Nvidia participated in that round too.

What a Gigawatt of AI Compute Actually Looks Like

One gigawatt of Nvidia Vera Rubin systems isn’t just a lot of GPUs. It’s a computing installation rivaling the power consumption of a small city — enough to run roughly 750,000 American homes.
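That homes figure is easy to sanity-check with back-of-envelope arithmetic. The assumption below (not from the article) is an average US household consumption of roughly 10,500 kWh per year, in line with typical EIA-style estimates:

```python
# Back-of-envelope check on the "1 GW ~ 750,000 homes" comparison.
# Assumed input (not from the article): average US household electricity
# use of about 10,500 kWh per year.

GIGAWATT_W = 1e9                   # 1 GW expressed in watts
KWH_PER_HOME_PER_YEAR = 10_500     # assumed average US household use
HOURS_PER_YEAR = 8_760

# Convert annual kWh into a continuous average draw in watts.
avg_draw_w = KWH_PER_HOME_PER_YEAR * 1_000 / HOURS_PER_YEAR  # ~1,200 W

# How many such homes one gigawatt sustains continuously.
homes = GIGAWATT_W / avg_draw_w

print(f"average draw per home: {avg_draw_w:.0f} W")
print(f"homes powered by 1 GW: {homes:,.0f}")
```

Under that assumption the answer comes out around 830,000 homes; the commonly cited 750,000 figure implies a slightly higher per-home draw of about 1.33 kW, but the order of magnitude holds either way.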

Vera Rubin is Nvidia’s most advanced platform, the successor to the Blackwell architecture. For any lab trying to train frontier models, access to top-tier GPUs is the single biggest bottleneck. OpenAI has its $300 billion Oracle deal. Anthropic locked in $30 billion with Microsoft Azure. Now Murati has her seat at the table.

“NVIDIA’s technology is the foundation on which the entire field is built,” Murati said. Jensen Huang, never one to undersell, called AI “the most powerful knowledge discovery instrument in human history.”

The Talent Drain Nobody Wants to Talk About

Here’s where it gets complicated. While Thinking Machines secures massive compute deals, it’s bleeding talent at an alarming rate.

Since its founding barely a year ago, the startup has lost:

  • Barret Zoph, co-founder and CTO — returned to OpenAI
  • Luke Metz, co-founder — returned to OpenAI
  • Sam Schoenholz, researcher — returned to OpenAI
  • Andrew Tulloch, co-founder — left for Meta
  • Jolene Parish, founding member (security) — departed
  • At least two additional founding members — left for Meta in recent weeks

When your CTO and multiple co-founders leave for your former employer, it raises serious questions about internal dynamics and research direction, or at minimum points to the gravitational pull of organizations with more resources. A company’s ability to retain its founding team is usually a more reliable predictor of success than the size of its compute budget.

Murati appears unfazed — at least publicly. The Nvidia deal is clearly a statement: Thinking Machines is here to stay, regardless of who walks out the door.

Nvidia’s Circular Economy: Genius or House of Cards?

This deal intensifies one of the most debated dynamics in AI: Nvidia as both supplier and investor in the companies buying its chips.

The pattern is hard to ignore:

  • Nvidia invested $30 billion in OpenAI, which spends billions on Nvidia GPUs
  • Nvidia invested $10 billion in Anthropic, which runs on Nvidia hardware
  • Nvidia invested in Thinking Machines’ seed round, now makes a further investment — while Thinking Machines commits to buying gigawatts of Vera Rubin chips

Critics have drawn explicit comparisons to the vendor-financing structures of the late-stage dot-com bubble. One INSEAD analysis called these arrangements “uncomfortably similar” to the 1990s, when telecom companies financed their own customers to create the illusion of demand.

Nvidia has rejected the “circular” label, insisting these are strategic bets on long-term growth. And there’s a key difference: Nvidia is the world’s most valuable company with massive real cash flows. But the question lingers — if AI startups are partially funded by their chip supplier, and they use that funding to buy chips from that same supplier, where does real demand begin and financial engineering end?

What Is Thinking Machines Actually Building?

This is the $12 billion question — literally.

What we know:

  • Tinker API: Fine-tuning of AI models. Useful, but not frontier research.
  • Mission: AI that is “understandable, customizable and collaborative”
  • Structure: Public benefit corporation
  • Partnership language: “Broadening access to frontier AI and open models for enterprises, research institutions and the scientific community”
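Public details about Tinker are thin, so the sketch below is emphatically not Tinker’s actual API. It is a generic toy illustration of LoRA-style adapter fine-tuning, the technique hosted fine-tuning services commonly offer: instead of updating a large frozen weight matrix, you train two small low-rank factors, which is why customizing a big model can be cheap relative to training one.

```python
import numpy as np

# Toy LoRA-style fine-tuning setup (illustrative only, not Tinker's API).
# The pretrained weight W (d x d) stays frozen; only the low-rank factors
# B (d x r) and A (r x d) are trained, and the effective weight is W + B @ A.
rng = np.random.default_rng(0)
d, r = 64, 4

W = rng.normal(size=(d, d))               # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d))   # trainable low-rank factor
B = np.zeros((d, r))                      # common LoRA init: B starts at zero

# Forward pass uses the adapted weight without ever modifying W itself.
x = rng.normal(size=(1, d))
y = x @ (W + B @ A).T

full_params = W.size              # parameters a full fine-tune would touch
lora_params = A.size + B.size     # parameters the adapter actually trains

print(f"full fine-tune params: {full_params}")
print(f"LoRA adapter params:   {lora_params}")
print(f"reduction: {full_params / lora_params:.0f}x")
```

At these toy sizes the adapter trains 512 parameters instead of 4,096, an 8x reduction; at real model scale with rank much smaller than the hidden dimension, the savings are far larger.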

The emphasis on “customizable” and “open models” positions Thinking Machines somewhere between OpenAI’s closed approach and Meta’s open-source play. Murati may be building the infrastructure layer — tools that let others build on top of frontier models rather than competing head-to-head on training.

If that’s the play, a gigawatt of Vera Rubin compute makes strategic sense. You need massive compute not just to train your own models, but to offer that capacity as a service.

AI’s $4 Trillion Infrastructure Bet

Jensen Huang recently predicted companies could spend $3 to $4 trillion on AI infrastructure by the end of the decade. The Thinking Machines deal is one data point in a buildout with no historical parallel.

Just this month: Yann LeCun’s AMI Labs raised $1.03 billion. Rhoda AI emerged from stealth with $450 million for robot intelligence. Nvidia’s GTC conference starts next week. The Anthropic-Pentagon saga continues reshaping AI governance.

The industry is bifurcating in real time. On one side: massive infrastructure plays backed by tens of billions. On the other: growing public backlash, regulatory uncertainty, and legitimate questions about whether the returns will ever justify the investment.

The Bottom Line

The bullish case is straightforward: Murati is one of the most capable AI leaders alive, she now has essentially unlimited compute, and her mission addresses a real gap in the market.

The bearish case is equally compelling: a year-old company with massive co-founder attrition, no publicly demonstrated frontier research, funded in part by its own chip supplier, valued at $12 billion on pedigree and promise.

With a gigawatt of Nvidia’s best hardware on order, Thinking Machines Lab is betting big on whatever it’s building behind closed doors. The rest of us will have to wait and see if the reveal matches the hype.