Nvidia doesn’t just want to sell you the shovels anymore. It wants to dig the gold too.

Buried inside a financial filing and confirmed by executives in interviews with WIRED, Nvidia plans to spend $26 billion over five years building open-weight AI models. To prove this isn’t vaporware, it simultaneously dropped Nemotron 3 Super — a 128-billion-parameter beast with a hybrid Mamba-Transformer architecture that’s already topping agentic AI benchmarks.

This is the most strategically significant move in AI since Meta released the original Llama. Here’s why it matters.

America’s Open-AI Gap

The best proprietary models in the US — GPT-5.4, Claude Opus 4.6, Gemini 3.1 Pro — are locked behind paywalls. Meanwhile, China has been flooding the zone with open-weight alternatives. DeepSeek, Alibaba’s Qwen, Moonshot AI — these models are free, downloadable, and increasingly excellent.

The result: a growing chunk of the global AI ecosystem runs on Chinese foundations. Startups in Europe, Southeast Asia, and Latin America are fine-tuning Chinese models because that’s where the open innovation is happening.

Nvidia wants to fill that gap. As Bryan Catanzaro, Nvidia’s VP of applied deep learning research, told WIRED: “It’s in our interest to help the ecosystem develop.”

Translation: if the open-source future of AI is getting built on someone’s models, Nvidia would prefer those models run best on Nvidia hardware.

What Makes Nemotron 3 Super Different

This isn’t a token gesture. The architecture is genuinely interesting:

  • Hybrid Mamba-Transformer MoE: Combines Transformer attention with Mamba’s state-space efficiency. You get quality on complex reasoning and speed on long-context tasks.
  • Smart parameter use: Not all 128B parameters fire on every query. The mixture-of-experts design keeps it large without being proportionally slow.
  • NVFP4 on Blackwell: Running in Nvidia’s custom 4-bit format on Blackwell hardware delivers up to 4x faster inference versus FP8 on Hopper — with no accuracy loss.
  • Benchmark results: Score of 37 on the Artificial Intelligence Index (vs. 33 for GPT-OSS), plus #1 on PinchBench for agentic reasoning and top marks on DeepResearch Bench.
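
The “smart parameter use” bullet describes standard mixture-of-experts routing: a small gating network scores every expert, but only the top-k actually run for each token, so most of the 128B parameters sit idle on any given query. Here’s a minimal sketch of that routing step — shapes, names, and the toy linear “experts” are illustrative only, not Nemotron’s actual implementation:

```python
import numpy as np

def topk_moe_forward(x, gate_w, experts, k=2):
    """Sparse MoE step: route one token to its top-k experts.

    x       : (d,) token activation
    gate_w  : (d, n_experts) router weights
    experts : list of callables, each mapping (d,) -> (d,)
    Only k of len(experts) functions run per token.
    """
    logits = x @ gate_w                 # one router score per expert
    top = np.argsort(logits)[-k:]       # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over just the selected k
    # Weighted sum of only the chosen experts' outputs.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
gate_w = rng.normal(size=(d, n_experts))
# Toy "experts": independent linear maps standing in for FFN blocks.
expert_mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, m=m: x @ m for m in expert_mats]

y = topk_moe_forward(rng.normal(size=d), gate_w, experts, k=2)
print(y.shape)  # (8,)
```

The compute saving is the point: with k=2 of 4 experts, half the expert parameters never touch this token, which is how a model can be “large without being proportionally slow.”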

Nvidia is honest that some Chinese models still score higher on certain benchmarks. This isn’t about winning every leaderboard. It’s about providing a competitive, American-made open alternative.

The Hardware Lock-In Play (And Why That’s Fine)

Let’s not pretend this is charity. Every open model Nvidia releases is optimized for Nvidia GPUs. Every startup that builds on Nemotron gets deeper into the CUDA ecosystem. Every researcher fine-tuning on Nvidia hardware is less likely to switch to AMD, Intel, or — critically — Huawei.

That Huawei angle matters. Rumors are swirling that DeepSeek’s next model was trained entirely on Huawei’s Ascend chips, bypassing US sanctions. If that model works well and it’s open, it could convince organizations they don’t need Nvidia at all.

Nvidia’s open-model strategy is a preemptive strike. Flood the ecosystem with high-quality models that run best on your silicon, and you create gravitational pull that’s hard to escape. As Kari Briski, Nvidia’s VP of generative AI software, put it: “We build it to stretch our systems and test not just the compute but also the storage and networking.”

Better models make better hardware, which makes better models. It’s a flywheel.

The Geopolitics Nobody’s Talking About

With the US government embroiled in a messy confrontation with Anthropic over Pentagon AI contracts, and export controls on chips to China under constant review, the question of who controls the AI stack is intensely political.

Nathan Lambert of the Allen Institute’s ATOM Project sees Nvidia’s move as filling a vacuum: the US has no major government-funded open AI model program while China’s state-backed labs release competitive models freely.

Andy Konwinski of the Laude Institute called it “an unprecedented signal of their belief in openness,” noting Nvidia sits “at the front of so many open and closed AI efforts.”

The world’s most valuable company is betting $26 billion that the future of AI is open. Not because it’s idealistic — because it’s strategic. And in this case, strategic and good for the ecosystem happen to align.

What This Means for Developers

For anyone actually building things, this is straightforward good news:

More options. Until now, top-tier open models meant Chinese releases (Qwen, DeepSeek) or Meta’s Llama. Nemotron offers a US-based alternative with serious institutional backing.

Better tooling. Nvidia isn’t just dropping weights. They’re sharing training techniques, architecture docs, and deep integration with the CUDA/NeMo ecosystem.

Specialized models coming. Domain-specific models for robotics, climate modeling, and protein folding already exist. Expect more verticals.

550B on deck. Catanzaro confirmed a 550-billion-parameter model has finished pretraining. When it drops, it could rival the largest proprietary models while remaining fully open.

The trade-off? These models will run best on Nvidia hardware. But most of the AI world already runs on Nvidia GPUs, so that’s a trade-off most developers will accept without blinking.

The Proprietary Moat Is Shrinking

If the biggest hardware company in AI is investing $26 billion in open models, what does that say about the long-term viability of closed AI?

Meta put the first cracks in it with Llama. DeepSeek proved you could build frontier models at a fraction of the cost. Qwen proved you could maintain an open ecosystem at scale. Now Nvidia is throwing rocket fuel on the open side.

The proprietary labs still have advantages in RLHF, safety tuning, and the massive data pipelines from consumer products. But the gap is narrowing fast.

What Comes Next

All eyes turn to GTC 2026, which kicks off next week. Jensen Huang has teased “several new chips the world has never seen before,” and Nemotron is clearly the opening act.

Expect details on the 550B model, deeper Nemotron integration with NeMo and NIM, and possibly government partnerships looking for alternatives to Chinese open models.

Nvidia just changed the game. The company that built the infrastructure for the AI revolution is now building the AI itself — and giving it away. The only question is whether $26 billion is enough to tip the balance back toward the US, or whether China’s head start is too large to overcome.