If you needed proof the AI supercycle isn’t slowing down, Nvidia just handed it to you — on a $68 billion silver platter.
Wednesday evening, the chipmaker reported fiscal Q4 2026 results that beat Wall Street across the board. Revenue hit $68.13 billion, blowing past the $66.21 billion consensus. Earnings per share landed at $1.62 versus the expected $1.53. Net income nearly doubled to $43 billion — in a single quarter.
But the earnings were almost the sideshow. The real headline? Vera Rubin.
The Numbers That Matter
The data center business — now over 91% of Nvidia’s total revenue — generated $62.3 billion, a 75% year-over-year jump. Guidance was even more aggressive: $78 billion projected for Q1 2027, beating analyst expectations by $5.4 billion.
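The reported figures are internally consistent, and a quick back-of-envelope check makes that visible. All numbers below come from the article itself; the prior-year and consensus values are implied by the stated growth rate and guidance beat, not separately reported:

```python
# Sanity-checking the reported figures (all values in $ billions).
total_revenue = 68.13
data_center = 62.3
guidance_q1 = 78.0
guidance_beat = 5.4

# Data center share of total revenue: just over 91%, as reported.
share = data_center / total_revenue
print(f"Data center share: {share:.1%}")

# 75% year-over-year growth implies roughly $35.6B a year earlier.
prior_year = data_center / 1.75
print(f"Implied prior-year data center revenue: ${prior_year:.1f}B")

# A $5.4B guidance beat implies an analyst consensus near $72.6B.
consensus = guidance_q1 - guidance_beat
print(f"Implied Q1 consensus: ${consensus:.1f}B")
```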
One number buried in the report deserves a spotlight: networking revenue hit $10.98 billion, up 263% year over year. Nvidia’s NVLink and Spectrum-X technologies are becoming the plumbing of AI infrastructure. You can’t have 72 GPUs talking to each other in a rack without world-class interconnects — and Nvidia has locked up that market.
This is the moat competitors can’t easily cross. AMD can make competitive GPUs. Broadcom can design custom accelerators. But Nvidia’s full-stack approach — GPU, CPU, networking, and software as one integrated system — is a different animal entirely.
Vera Rubin: 10x More Efficient Than Blackwell
The real story from earnings night was Nvidia’s first full reveal of Vera Rubin, its next-generation AI platform shipping in the second half of 2026.
The headline spec: 10 times the performance per watt compared to Grace Blackwell. In an industry where energy consumption is rapidly becoming the bottleneck for AI scaling, that’s not incremental — it’s a paradigm shift.
Each rack is a beast: 1.3 million components, 72 Rubin GPUs, 36 Vera CPUs (Nvidia’s custom Arm-based processors), roughly 1,300 total microchips. Parts sourced from 80+ suppliers across 20+ countries. Each rack weighs nearly two tons.
But the design philosophy has evolved. Where Blackwell components were soldered to the board, Vera Rubin goes modular — each superchip slides out of the rack’s 18 compute trays in seconds. Faster repairs, easier upgrades, less downtime. Boring-sounding, maybe. Worth a fortune to data center operators spending billions on infrastructure.
Six Chips, One Vision
Vera Rubin isn’t just a GPU. It’s six co-designed components working as a unified platform:
- Rubin GPU — the compute engine
- Vera CPU — custom Arm-based processor
- NVLink 6 Switch — GPU interconnect
- ConnectX-9 SuperNIC — network interface
- BlueField-4 DPU — data processing unit
- Spectrum-6 Ethernet Switch — rack-scale networking
Nvidia is treating the entire rack — not the individual server — as the fundamental unit of computation. Jensen Huang has been saying this for years. Vera Rubin is the clearest expression of that vision yet.
The platform uses HBM4 memory, critical for running trillion-parameter models. Memory supply remains globally tight, but Nvidia claims its supply chain is “in good shape” thanks to detailed forecasting with suppliers.
The Customer List Is Everyone
Meta, OpenAI, Anthropic, Amazon, Google, Microsoft — all reportedly committed to Vera Rubin. Meta plans deployment by 2027. Manufacturing spans the U.S., Taiwan, and a new Foxconn plant in Mexico.
Meanwhile, existing Blackwell demand remains insatiable. The big four hyperscalers — Alphabet, Amazon, Meta, Microsoft — accounted for just over 50% of data center revenue. Their combined 2026 capital expenditure could approach $700 billion.
The appetite for AI compute is effectively unlimited right now.
The Cracks in the Empire
Not everything glows green. Gaming revenue fell 13% sequentially to $3.7 billion. Reports suggest Nvidia may skip a new consumer GPU entirely this year, prioritizing memory allocation for the far more lucrative AI accelerator market. Gamers are not happy about it.
Geopolitics remain tricky. Nvidia excluded all Chinese data center revenue from Q1 guidance, reflecting ongoing U.S. export restrictions. China was once a meaningful market — its absence creates both a revenue gap and an opening for Huawei.
And the broader market context is complicated. This same week, AI stocks got rattled by the viral Citrini Research report painting a doomsday scenario where AI agents displace enough workers to destabilize the economy. IBM cratered after Anthropic showed Claude modernizing legacy COBOL systems. The same AI boom fueling Nvidia’s earnings is simultaneously terrifying investors about what it means for everyone else.
What This Actually Means
Nvidia’s results confirm what insiders have been saying: the AI infrastructure buildout is still early innings. When a company beats estimates by roughly $2 billion on revenue and $5.4 billion on guidance, and the stock barely moves 2% after hours, the market has priced in extraordinary growth — and is still getting surprised.
Vera Rubin’s 10x efficiency improvement could extend the AI scaling curve in ways that matter. Energy costs and availability are increasingly the binding constraint on AI progress. If Vera Rubin delivers, data centers can do dramatically more AI work without proportionally more power — or the same power budget gets you 10x more capability.
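The tradeoff above can be sketched in a few lines. The 10x perf-per-watt figure is Nvidia's claim; the 100 MW facility size and the baseline throughput of one work unit per watt are illustrative assumptions, not reported specs:

```python
# Illustrative only: the 10x efficiency multiple is Nvidia's claim;
# the power budget and baseline throughput are made-up placeholders.
EFFICIENCY_GAIN = 10.0  # Vera Rubin vs. Grace Blackwell, perf per watt


def work_under_budget(budget_mw: float, perf_per_watt: float) -> float:
    """AI work achievable under a fixed power budget (arbitrary units)."""
    return budget_mw * 1e6 * perf_per_watt


# Hypothetical 100 MW data center, baseline of 1 work unit per watt.
blackwell_work = work_under_budget(100, 1.0)
rubin_work = work_under_budget(100, 1.0 * EFFICIENCY_GAIN)
print(rubin_work / blackwell_work)  # same power budget, 10x the work

# Equivalently: the Blackwell-sized workload now needs one-tenth the power.
print(100 / EFFICIENCY_GAIN)  # 10 MW instead of 100 MW
```

Either framing matters when power, not capital, is the scarce resource: operators can scale workloads within existing grid allocations, or shrink the energy bill for a fixed workload.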
For competitors, the message is brutal: the target keeps moving. AMD, Broadcom, and custom chip designers at Google and Amazon have made progress, but Nvidia raises the bar with each generation while maintaining its full-stack advantage.
For the rest of us, Nvidia’s earnings are a reminder that the AI revolution isn’t theoretical anymore. It’s a $68 billion-per-quarter, double-your-profit, two-ton-rack reality.
The question isn’t whether this is the most important technology buildout in history. It’s whether $700 billion a year in spending is the floor — or the ceiling.
Sources: CNBC, CNBC Vera Rubin exclusive, Yahoo Finance, SemiAnalysis