Jensen Huang dropped another $2 billion like it was couch change. The recipient? Marvell Technology — a company whose core business is helping hyperscalers build custom chips so they don’t have to buy Nvidia GPUs.
Read that again. Nvidia just invested $2 billion in a competitor enabler.
Huang called it “a marvelous investment” on CNBC. Yes, he really said that. Marvell’s stock popped 13%. Nvidia climbed 3-4%. Everyone made money. But the real story isn’t the dad joke — it’s the strategy hiding behind it.
The Keep-Your-Enemies-Closer Play
Marvell designs custom ASICs for AWS, Google, and Microsoft. These chips exist specifically so those companies can reduce their Nvidia dependency. Marvell and Broadcom are the two dominant custom AI chip houses, and they've been growing because customers want an escape from Nvidia's pricing power.
So why would Nvidia pour $2 billion into one of them?
Because of NVLink Fusion.
NVLink has historically been Nvidia’s walled garden — the proprietary interconnect fabric linking GPUs in massive AI training clusters. Nvidia-to-Nvidia, no outsiders allowed. NVLink Fusion cracks that wall open, but on Nvidia’s terms. Third-party chips can now plug into the fabric, but the fabric itself? Still Nvidia’s.
The result: even when AWS runs Trainium 4 custom silicon designed with Marvell’s help, those chips talk to each other through Nvidia’s networking, Nvidia’s switches, Nvidia’s protocols.
Build whatever chip you want. Nvidia is still the plumbing.
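To make that dynamic concrete, here's a toy sketch in Python. Everything in it is invented for illustration; it is not Nvidia's API or anyone's real protocol stack. The point it models: whoever designs the endpoints, every hop crosses the fabric owner's switches.

```python
# Toy model of the NVLink Fusion dynamic described above.
# All names are invented for illustration; this is not a real API.

from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    designer: str  # whoever built the chip: AWS, Marvell, anyone

@dataclass
class Fabric:
    owner: str  # whoever controls the interconnect

    def route(self, src: Accelerator, dst: Accelerator) -> str:
        # Every chip-to-chip hop traverses the fabric owner's
        # switches and protocols, regardless of who made the chips.
        return f"{src.name} -> [{self.owner} fabric] -> {dst.name}"

fabric = Fabric(owner="Nvidia")
a = Accelerator("custom-asic-0", designer="AWS/Marvell")
b = Accelerator("custom-asic-1", designer="AWS/Marvell")
print(fabric.route(a, b))
# custom-asic-0 -> [Nvidia fabric] -> custom-asic-1
```

Swap in any designer you like; the middle of that output never changes. That's the whole strategy in one line.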
$6 Billion in March Alone
The Marvell deal isn’t an isolated move. In March 2026, Nvidia invested:
- $2 billion in Lumentum — co-packaged optics and laser technology
- $2 billion in Coherent — photonic components and optical switching
- $2 billion in Marvell — custom silicon and networking integration
Six billion dollars in one month, each deal targeting a different layer of the AI infrastructure stack. Add earlier investments in Synopsys, CoreWeave, and Nebius, and a pattern emerges: Nvidia is converting its massive cash hoard, fed by projected fiscal 2027 revenue of $150-160 billion, into a lock on the entire AI supply chain.
Lumentum and Coherent secure the optical interconnect layer — lasers and photonics for next-gen data center networking. Marvell secures the custom silicon layer. Together, they create a vertically integrated ecosystem where every component, even the non-Nvidia ones, routes through Nvidia’s platform.
The Intel Inside Playbook, Reimagined
This is the Intel Inside strategy, except the “inside” isn’t the processor — it’s the networking layer.
Consider what this means for AWS. Amazon’s Trainium 4 will support both UALink (an open standard) and NVLink protocols. Marvell helps design those chips. Now Marvell sits inside the NVLink Fusion ecosystem with a $2 billion Nvidia stake tying everything together.
AWS gets its custom silicon. Marvell gets ecosystem access and a massive investment. And Nvidia gets to be the connective tissue in every AI data center — including ones that don’t use a single Nvidia GPU.
That’s not a concession. That’s domination through infrastructure.
Why Now: The Vera Rubin Factor
All of this is happening in the shadow of Nvidia's upcoming Vera Rubin platform, the next-generation successor to Grace Blackwell. Vera Rubin racks will be significantly more expensive than their Grace Blackwell predecessors, and the bandwidth requirements of next-generation AI training clusters are so extreme that traditional copper interconnects can't keep up.
Silicon photonics isn’t a luxury at Vera Rubin scale. It’s a necessity. The $4 billion Nvidia dropped on Lumentum and Coherent ensures those components exist in volume when Vera Rubin ships.
Meanwhile, Marvell's recent $540 million acquisition of XConn Technologies brought PCI-Express switching tech into the mix. The resulting Structera S 60260 switch supports 260 lanes of PCIe and roughly 2.1 TB/sec of aggregate bandwidth. Integrate that with NVLink and you get hybrid switching for heterogeneous AI clusters, exactly what the next generation of data centers will need.
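Those numbers are consistent with PCIe Gen6 lanes running at 64 GT/s, roughly 8 GB/s of raw bandwidth per lane per direction. That's an inference from the math, not a confirmed spec, but the back-of-envelope checks out:

```python
# Back-of-envelope: do 260 PCIe lanes add up to ~2.1 TB/s?
# Assumes PCIe Gen6 signaling at 64 GT/s per lane (an inference,
# not a confirmed spec), i.e. roughly 8 GB/s per lane, per direction.

LANES = 260
GEN6_GT_PER_S = 64                 # transfers per second, in GT/s
GB_PER_LANE = GEN6_GT_PER_S / 8    # ~8 GB/s (about 1 bit per transfer)

total_tb_per_s = LANES * GB_PER_LANE / 1000
print(f"{total_tb_per_s:.2f} TB/s")  # 2.08 TB/s, i.e. the quoted ~2.1
```

That matches the quoted figure as a raw per-direction aggregate, before FLIT and protocol overhead trim a few percent off.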
What This Means for the AI Industry
The GPU wars are maturing. The next phase of AI competition isn’t about who makes the best chip. It’s about who controls the networking, interconnects, and system-level integration. Nvidia is betting the infrastructure layer is where long-term value accrues.
Custom silicon isn’t a threat anymore — it’s a moat. Every custom chip that plugs into NVLink Fusion strengthens Nvidia’s position. The more alternatives customers build, the more they depend on Nvidia’s interconnect to tie it all together.
Vertical integration is back. The tech industry spent decades disaggregating and specializing. Nvidia’s strategy reverses that trend — but through investment and partnerships rather than building everything internally. It’s vertical integration with plausible deniability.
AMD and Intel face a new problem. They’re not just competing against better chips. They’re competing against an entire ecosystem of financial and technical dependencies that Nvidia is weaving across the industry. Good luck replicating that with a faster GPU.
The Bottom Line
Nvidia is evolving from a chip company into an AI infrastructure platform company. Less semiconductor firm, more AT&T-meets-AWS. The $6 billion March spending spree isn’t generosity — it’s the most aggressive platform play in semiconductor history.
The bet is simple: it doesn’t matter whose processor is in the socket if Nvidia owns the fabric connecting them all.
At $150+ billion in projected revenue and growing, Nvidia has the cash to find out if it’s right. Based on what we’re seeing, the rest of the industry is already playing on Jensen’s chessboard.
They just don’t all know it yet.