If you needed proof the AI infrastructure arms race isn’t slowing down, Nvidia and Meta just handed it to you on a silver platter worth tens of billions of dollars.
The two companies announced a multiyear, multigenerational partnership that will put millions of Nvidia processors inside Meta’s data centers. Not just GPUs — standalone CPUs, next-gen hardware, networking, and confidential computing for WhatsApp. This isn’t a purchase order. It’s a strategic alignment that reshapes who controls AI’s physical backbone.
The Scope Is Staggering
Meta will receive millions of current-gen Blackwell GPUs alongside upcoming Vera Rubin systems that recently entered production. Securing a big Rubin allocation puts Meta ahead of competitors still fighting for Blackwell capacity.
But the real story is the standalone Grace CPUs. Meta becomes the first major hyperscaler — indeed, the first organization anywhere at this scale — to deploy Nvidia’s Grace processors independently, not bundled with GPUs in integrated server configs.
The deal also includes Spectrum-X Ethernet networking for linking GPU clusters and Nvidia Confidential Computing for privacy-preserving AI on WhatsApp. Next-gen Vera CPUs are slated for 2027, extending this partnership well into the future.
Follow the Money
Financial terms weren’t disclosed, but analyst Ben Bajarin of Creative Strategies told CNBC the deal is “certainly in the tens of billions.” Given the context, that sounds conservative.
Consider: Meta plans to spend up to $135 billion on AI in 2026 and has committed $600 billion to U.S. infrastructure by 2028 — 30 data centers, 26 of them stateside. Two facilities already under construction tell the story:
- Prometheus — a 1-gigawatt site in New Albany, Ohio
- Hyperion — a 5-gigawatt facility in Richland Parish, Louisiana
A 5-gigawatt data center consumes roughly the same electricity as Los Angeles. Meta isn’t building infrastructure. It’s building power grids.
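The scale is easier to feel with quick arithmetic. A minimal sketch of the energy math (the 5 GW capacity is from the deal coverage above; the Los Angeles comparison depends heavily on which city-consumption figure you use, so treat this as an order-of-magnitude check, not a precise claim):

```python
# Back-of-envelope energy math for a 5-gigawatt data center.
# Only the 5 GW capacity comes from the article; the rest is arithmetic.

capacity_gw = 5.0
hours_per_year = 24 * 365  # 8,760 hours

# Upper bound: the facility running flat-out all year.
# 1 GW for 1 hour = 1 GWh; 1,000 GWh = 1 TWh.
annual_twh = capacity_gw * hours_per_year / 1000

print(f"5 GW running continuously ≈ {annual_twh:.1f} TWh/year")
# Real utilization sits below 100%, and published city-consumption
# figures vary by source, so read this as an order-of-magnitude number.
```

Even at partial utilization, that puts a single facility in the same energy league as a major metropolitan area, which is the point the comparison is making.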
Why Standalone CPUs Matter More Than the GPUs
Everyone will focus on the GPU numbers. The CPU move is more important.
Until now, Nvidia’s Grace CPUs shipped inside integrated Grace-Blackwell “superchips” — tightly coupled packages for training massive models. Meta using Grace standalone signals something the industry has been anticipating: the inference era is arriving at scale.
As AI shifts from training models to running them — powering agentic workflows, real-time recommendations, billions of daily interactions across Meta’s apps — the compute needs change. Inference workloads need efficient, high-throughput CPUs without the overhead of full GPU racks.
“They’re really designed to run those inference workloads, run those agentic workloads, as a companion to a Grace Blackwell/Vera Rubin rack,” Bajarin explained.
This puts Nvidia in direct competition with Intel and AMD in the server CPU market they’ve dominated for decades. AMD stock dropped ~4% on the news. Nvidia isn’t just the GPU company anymore — it’s coming for the entire data center stack.
The Zuckerberg Bet
Zuckerberg framed this as supporting Meta’s push “to deliver personal superintelligence to everyone in the world.” Marketing? Maybe. But Meta’s been developing Avocado, a successor to Llama, and this deal gives it compute at a scale few organizations on Earth can match.
The open question: can Meta turn silicon into products people want? Wall Street is conflicted — Meta’s stock saw its worst day in three years in October 2025 after announcing aggressive AI spending, then popped 10% in January on strong revenue guidance. The market wants to believe. It also wants receipts.
What This Breaks
This deal doesn’t exist in a vacuum. Three things to watch:
Software is getting crushed. While hardware players deepen partnerships, enterprise software stocks are in freefall: Salesforce is down 28% year to date and ServiceNow 30%. The software ETF IGV is down 23% in 2026, entering bear market territory. Investors fear AI will eat traditional SaaS.
The hyperscaler duopoly tightens. Meta, Microsoft, Google, and Amazon are absorbing an outsized share of global AI compute. This deal concentrates that power further, squeezing smaller AI labs and startups.
Nvidia’s full-stack play is real. GPUs, CPUs, networking, security — all under one roof. Great for Nvidia’s margins. Concerning for everyone else’s competitive position.
What to Watch Next
Four developments will determine whether this deal ages well:
- Avocado — Can Meta’s next-gen model justify this spending?
- Grace benchmarks — How does standalone Grace perform against AMD EPYC and Intel Xeon in real inference workloads?
- Vera Rubin timeline — 2027 deployment is ambitious. Production ramps rarely go smoothly.
- Competitor response — Do Google and Amazon ink similar mega-deals, or double down on custom silicon?
The Bottom Line
The companies building AI’s physical foundation are playing a different game than the ones building its software layer. Right now, the infrastructure players are winning — and Meta just placed one of the largest bets in tech history that they’ll keep winning.
Whether all that silicon translates to products people actually use? That’s a $135-billion-a-year question Meta still hasn’t answered.