The AI chip wars just got a lot more interesting. Broadcom CEO Hock Tan dropped a number so staggering it made Wall Street’s collective jaw hit the floor: AI chip revenue exceeding $100 billion in 2027. Not total revenue — just the AI chip slice. Last quarter, that number was $8.4 billion.
This isn’t just a big number. It’s a signal that the way we build AI infrastructure is fundamentally changing — and Nvidia might not be the only kingmaker anymore.
From GPUs to Custom Silicon: The Great Migration
For years, the AI boom was synonymous with Nvidia. Want to train or run a large language model? Buy Nvidia GPUs. Full stop.
But a quiet revolution has been brewing. The world’s biggest tech companies are designing their own chips, and they’re turning to Broadcom to make it happen.
Google started the trend back in 2015 with its Tensor Processing Units. Now the roster includes six confirmed heavy hitters: Google, Meta, Anthropic, OpenAI, ByteDance, and Fujitsu. Each has decided that general-purpose GPUs aren’t efficient enough for their specific AI workloads — and they’re putting billions behind that conviction.
The math is compelling. Custom ASICs deliver significantly better performance-per-watt for specific workloads. And as the industry shifts from training-heavy to inference-heavy — more than 70% of AI data center revenue now comes from running models rather than training them — that efficiency advantage becomes a financial imperative.
The Gigawatt Club
Broadcom’s earnings call was unusually specific about customer commitments, and the numbers are eye-popping.
Anthropic, the maker of Claude, is deploying 1 gigawatt of Broadcom-designed TPU compute in 2026, scaling to 3 gigawatts in 2027. That is a tripling of compute capacity in a single year.
Google continues as the anchor customer with “even stronger demand” as it rolls out its seventh-generation Ironwood TPU. Apple and Anthropic are among the notable cloud TPU users.
Meta is pushing ahead with its MTIA architecture, planning “multiple gigawatts” of Broadcom’s XPU accelerators in 2027. When analysts questioned whether Meta’s custom chip program was losing steam, Tan was blunt: “The MTIA roadmap is alive and well.”
OpenAI — which only partnered with Broadcom in late 2025 — is aiming for over 1 gigawatt of custom XPU compute in 2027. Remarkably fast for a company building its first-ever custom processor.
Bernstein analyst Stacy Rasgon tallied it up: roughly 3 gigawatts for Anthropic, 3 for Google, 2+ for Meta, 1 for OpenAI — with ByteDance and Fujitsu adding more. Tan’s response? The estimates were “not far” off.
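Rasgon’s tally is simple addition. A quick sketch of it, using only the figures quoted above (Meta’s “2+” is treated as a floor, and the ByteDance and Fujitsu contributions are left unquantified, as in the call):

```python
# Analyst-estimated 2027 custom accelerator deployments, in gigawatts,
# per the earnings-call discussion quoted above.
estimates_gw = {
    "Anthropic": 3,
    "Google": 3,
    "Meta": 2,      # quoted as "2+"; treated here as a floor
    "OpenAI": 1,
}

# ByteDance and Fujitsu add an unquantified amount on top.
floor_gw = sum(estimates_gw.values())
print(f"Named-customer floor: {floor_gw} GW (ByteDance and Fujitsu add more)")
# -> Named-customer floor: 9 GW (ByteDance and Fujitsu add more)
```

Nine-plus gigawatts of committed custom compute is the baseline Tan called “not far” off.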
Why Nobody Can Replicate This
Here’s what makes Broadcom’s position so defensible: designing a chip is hard, but manufacturing one at scale is brutally harder.
“Anybody can design a chip in a lab that works well,” Tan said. “Can you produce 100,000 of those chips quickly, at yields that you can afford? And we do not see too many players in the world that can do that.”
Broadcom doesn’t fabricate chips; that’s TSMC’s job. Broadcom’s real value is in the middle: translating a customer’s chip design into physical silicon that can actually be manufactured at scale. That means advanced 3.5D packaging, high-bandwidth memory integration, yield optimization, and the networking infrastructure to connect thousands of chips together.
Tan was almost dismissive of the idea that any hyperscaler could replicate this capability “for many years to come.” It’s a classic competitive moat: the more customers Broadcom wins, the more experience it accumulates, the harder it becomes to compete with.
What This Means for Nvidia
Nobody’s writing Nvidia’s obituary. The company disclosed 5 gigawatts of sales to OpenAI alone last week, and AMD has signed deals of up to 6 gigawatts with both OpenAI and Meta. Nvidia’s CUDA ecosystem remains a fortress.
But custom silicon represents a genuine structural shift. When inference dominates the workload mix — and it already does — the economics favor purpose-built chips over general-purpose GPUs. Every dollar that flows to a Broadcom ASIC is a dollar that doesn’t go to Nvidia.
The emerging picture is a bifurcated market: Nvidia dominates training and general-purpose workloads, while Broadcom powers custom silicon for inference at scale. The Next Platform called Broadcom “the biggest counterbalance to Nvidia” — remarkable for a company many still associate with Wi-Fi chips.
The Supply Chain Advantage Nobody’s Discussing
One detail deserving more attention: Broadcom has already secured the supply chain for its 2027 targets — including high-bandwidth memory, which has been in chronically short supply.
HBM shortages have constrained the entire AI chip industry. If Broadcom has locked in supply through 2028, it has a timing advantage that smaller custom chip efforts can’t match.
These aren’t handshake deals. They’re binding, multi-billion-dollar commitments from some of the world’s most valuable companies.
The Numbers
Broadcom’s actual financials ground the ambition in reality:
- Q1 2026 revenue: $19.3 billion (up 29% YoY)
- AI chip revenue: $8.4 billion (up 106% YoY)
- Q2 2026 guidance: $22.0 billion total, ~$10.5 billion from AI
- AI networking growth: 60% YoY
- 2027 AI chip target: “Significantly in excess of $100 billion”
That last number is roughly 12x the latest quarterly AI figure, or about 3x the current annualized run rate, with under two years to get there. Even accounting for CEO optimism, the trajectory is extraordinary.
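The arithmetic behind that trajectory can be checked from the list above alone (the ×4 annualization is a naive assumption, not company guidance):

```python
q1_ai_revenue_b = 8.4               # Q1 2026 AI chip revenue, in $B (from the list above)
annualized_b = q1_ai_revenue_b * 4  # naive x4 annualization -- an assumption, not guidance
target_b = 100.0                    # 2027 target: "significantly in excess of" this

print(f"Annualized run rate: ${annualized_b:.1f}B")                      # $33.6B
print(f"Target vs annualized run rate: {target_b / annualized_b:.1f}x")  # ~3.0x
print(f"Target vs single quarter: {target_b / q1_ai_revenue_b:.1f}x")    # ~11.9x
```

Tripling an already-hypergrowth business in two years is the claim investors are actually being asked to underwrite.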
The Custom Silicon Era Has Arrived
We’re witnessing a paradigm shift in AI computing. The first era repurposed gaming GPUs. The second — which we’re living through — is dominated by purpose-built Nvidia AI GPUs. The third era is custom silicon designed for specific AI architectures.
In this new era, Broadcom is the essential middleman: the company that turns Big Tech’s chip dreams into physical reality. No splashy product launches. No developer conferences. Just six of the world’s most important AI companies relying on it to build their silicon.
The question isn’t whether custom AI chips will grow. They will. The question is whether the shift from GPUs to ASICs happens fast enough to materially dent Nvidia’s dominance before Nvidia adapts.
Either way, the AI chip market just got a lot more competitive. And Broadcom’s $100 billion projection is a number nobody can afford to ignore.