The AI chip wars just escalated from skirmish to full-blown conflict.

Meta and AMD announced a multiyear deal worth over $100 billion — six gigawatts of AMD computing power, plus warrants giving Meta up to 160 million AMD shares at a penny each. That’s roughly 10% of the entire company.

Read that again. The company that owns Facebook, Instagram, and WhatsApp could soon own a tenth of its chip supplier. This isn’t a procurement contract. It’s a wholesale restructuring of the AI hardware economy.

The Deal’s Architecture

Meta will purchase AMD’s Instinct MI540 GPUs and latest-generation EPYC “Venice” CPUs for its next-gen AI data centers. Six gigawatts of compute. For context, one gigawatt powers roughly 750,000 homes, so this buildout draws as much power as about 4.5 million of them.

The equity component is where it gets creative. AMD issued Meta performance-based warrants for 160 million shares at $0.01 each — but the full award only vests if AMD’s stock hits $600 by 2031. AMD closed at $196.60 the day before the announcement. That’s a triple-or-nothing bet.
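To make the warrant economics concrete, here’s a quick back-of-envelope sketch. The deal figures come from the terms above; AMD’s share count (about 1.62 billion outstanding) is an outside approximation used for the roughly-10% stake, and vesting is simplified to all-or-nothing at the $600 target rather than the actual milestone tranches.

```python
# Back-of-envelope warrant math. Deal figures are from the announcement;
# the share count is an assumed approximation, and tranche-by-tranche
# vesting is collapsed into a single all-or-nothing target.

warrant_shares = 160_000_000      # shares Meta can buy via warrants
strike = 0.01                     # a penny per share
close_before_deal = 196.60        # AMD close before the announcement
full_vest_target = 600.00         # price for the full award to vest by 2031
shares_outstanding = 1.62e9       # assumed approximate AMD share count

cost_to_exercise = warrant_shares * strike
value_at_target = warrant_shares * full_vest_target
stake_pct = warrant_shares / shares_outstanding * 100
required_multiple = full_vest_target / close_before_deal

print(f"Cost to exercise:  ${cost_to_exercise:,.0f}")              # $1,600,000
print(f"Value at $600:     ${value_at_target / 1e9:.0f} billion")  # ~$96 billion
print(f"Implied stake:     {stake_pct:.1f}% of AMD")               # ~9.9%
print(f"Required move:     {required_multiple:.2f}x from close")   # ~3.05x
```

Roughly $1.6 million buys a position worth about $96 billion if AMD triples, which is why “triple-or-nothing” is barely hyperbole.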

The first tranche vests after AMD ships one gigawatt of capacity, with shipments starting in the second half of 2026. AMD gets a guaranteed mega-customer. Meta gets chips at an effective discount backed by equity upside. Both sides have skin in the game.

Lisa Su ran the same playbook with OpenAI back in October 2025. She’s not improvising — she’s executing.

Why Meta Is Hedging Against Nvidia

Meta is spending up to $135 billion on AI infrastructure in 2026, nearly double the $72 billion it spent in 2025. Weeks before the AMD deal, it also signed a multiyear agreement with Nvidia for NVL72 rack-scale systems using Blackwell GPUs.

So Meta isn’t dumping Nvidia. It’s building a platform-agnostic compute empire.

Nvidia gets the heavy-duty training workloads and bleeding-edge inference. AMD gets the massive inference deployments that need scale more than raw peak performance. Being entirely dependent on a single supplier — one that charges premium prices and faces its own supply constraints — is a vulnerability no company spending this kind of money can afford.

Analyst Jon Peddie put it plainly: “Meta is taking a hybrid approach, with AMD becoming a major, if not primary, partner for specific AI inference workloads.”

Lisa Su’s Self-Reinforcing Loop

This deal is the culmination of a decade-long transformation. Su turned AMD from a perennial underdog into Nvidia’s most credible rival, and the equity-for-chips model is her masterstroke.

By giving large customers a financial stake in AMD’s success, she’s created a flywheel: the more AMD chips Meta buys, the more valuable AMD becomes, the more Meta’s warrants are worth. Traditional supplier relationships can’t touch that kind of alignment.

“The CPU market is absolutely on fire,” Su told investors. She’s right. As AI shifts from training to inference — where efficiency and cost-per-query dominate — the competitive landscape cracks wide open. AMD’s dual position in both GPUs and CPUs gives it a unique edge.

The ‘Personal Superintelligence’ Play

Here’s the part nobody’s talking about enough.

Mark Zuckerberg has been pushing “personal superintelligence” as Meta’s north star — AI systems designed to deeply understand and empower individuals in their everyday lives. Late last year, Meta’s AI labs reportedly pivoted away from competing on frontier models to focus on this vision. In January, Meta spun up a new “Meta Compute” organization specifically for gigawatt-scale data center operations.

What does personal superintelligence actually require? Inference. Mountains of it. Not the massive training runs that produce GPT-5 or Claude, but sustained, always-on compute for personalized AI agents serving 3+ billion users simultaneously.

That’s a workload that favors cost-efficiency over peak performance. Exactly where AMD’s value proposition lives. Six gigawatts suddenly makes sense.

The Bubble Question

Not everyone’s celebrating. The equity-for-chips model, in which AI companies take stakes in chip companies that in turn invest in AI companies, has raised familiar circular-financing concerns. It’s the same pattern that made observers nervous about Nvidia’s $30 billion OpenAI investment.

The worry: if AI revenue doesn’t materialize fast enough, these enormous commitments become liabilities. Meta’s $135 billion capex target is an extraordinary bet that AI generates real, sustained returns.

The counterargument: Meta already prints money from advertising. AI-powered recommendations are already driving measurable improvements in engagement and ad targeting. Unlike pure-play AI startups, Meta has a cash cow funding these bets.

The real question is whether personal superintelligence delivers enough value to justify this level of infrastructure — or whether it remains a visionary concept perpetually a few years from paying off.

What This Means for the Chip Landscape

AMD is now a serious contender. With OpenAI and Meta locked into multi-gigawatt, equity-backed deals, AMD has guaranteed demand funding its R&D pipeline for years.

Nvidia’s near-monopoly is eroding. Not collapsing: Nvidia still dominates training and premium inference. But the single-vendor era is ending. Meta’s dual-sourcing strategy will become the industry norm.

Custom silicon is the wildcard. Meta is developing its own MTIA chips (though the program has hit delays). Google has TPUs. Amazon has Trainium and Inferentia. The long-term trajectory points toward a diverse ecosystem.

CPUs matter again. AMD’s EPYC server CPUs are an underrated part of this deal. As inference scales, the CPU’s role in orchestrating and managing AI workflows becomes critical.

The AI infrastructure buildout is accelerating. But the winners are no longer predetermined.

Lisa Su just made sure of that.