The AI boom isn’t slowing down. It’s accelerating so fast that even Nvidia’s own forecasts can’t keep up.

On Wednesday, Nvidia reported fiscal Q4 2026 results that didn’t just beat Wall Street estimates — they made them look quaint. Revenue hit $68.13 billion against expectations of $66.21 billion. Net income nearly doubled year-over-year to $43 billion. And then Jensen Huang’s team dropped a next-quarter forecast of $78 billion that left analysts scrambling to update their models.

But the earnings weren’t even the headline. That belongs to Vera Rubin — Nvidia’s next-generation AI system that shipped its first samples to customers this week and promises a tenfold leap in efficiency over Blackwell.

Welcome to the part of the AI story where the numbers stop making intuitive sense.

The Numbers That Broke the Spreadsheet

Let’s get the vitals out of the way:

  • Q4 Revenue: $68.13B (vs. $66.21B expected)
  • EPS: $1.62 adjusted (vs. $1.53 expected)
  • Data center revenue: $62.3B — up 75% YoY, now 91% of total sales
  • Net income: $43B, nearly 2x the $22.1B from a year ago
  • Full-year revenue: $193.7B, up 68% for fiscal 2026
  • Next-quarter guidance: $78B (±2%), crushing the $72.6B consensus
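For anyone who wants to sanity-check just how far that guidance sits above consensus, here is a quick sketch using only the figures from the bullets above (treating the ±2% band as symmetric around $78B is an assumption about how Nvidia frames its ranges):

```python
# Sanity-check the quarter's headline numbers (all figures in $B,
# taken from the reported results and guidance above).

guidance_mid = 78.0
band = 0.02  # assumed symmetric +/-2% guidance band
low, high = guidance_mid * (1 - band), guidance_mid * (1 + band)

consensus = 72.6
# Even the bottom of the guided range tops the Street's number.
beat_at_low_end = (low - consensus) / consensus

revenue, expected = 68.13, 66.21
q4_beat = (revenue - expected) / expected

print(f"Guidance range: ${low:.2f}B to ${high:.2f}B")
print(f"Low end tops consensus by {beat_at_low_end:.1%}")
print(f"Q4 revenue beat: {q4_beat:.1%}")
```

The point the sketch makes: the *worst* case Nvidia guided to is still roughly 5% above what analysts expected.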

Nvidia’s quarterly revenue is now larger than the annual revenue of most Fortune 500 companies. That’s not a typo. That’s the state of AI infrastructure spending in 2026.

“Customers are racing to invest in AI,” Huang said on the earnings call. The four major hyperscalers — Alphabet, Amazon, Meta, and Microsoft — could spend a combined $700 billion on capex this year. Nvidia remains their primary supplier.

Vera Rubin: 10x Efficiency in a Two-Ton Rack

While earnings dominated the ticker, the real forward-looking story walked out of Nvidia’s Santa Clara headquarters in the form of a nearly two-ton rack stuffed with 1.3 million components.

Vera Rubin packs 72 Rubin GPUs and 36 Vera CPUs into a single rack-scale system — 1,300 total microchips compared to Grace Blackwell’s 864. The headline spec: 10x more performance per watt than its predecessor.

In a world where energy consumption is the single biggest constraint on AI scaling, that’s not incremental. That’s potentially game-changing.

Nvidia also redesigned the hardware for serviceability. Each superchip slides out of the rack’s 18 compute trays in seconds — a far cry from Blackwell, where components are soldered to the board. Data center operators will appreciate not needing a soldering iron for maintenance.

The customer list is exactly who you’d expect: Meta (deploying Vera Rubin by 2027), OpenAI, Anthropic, Amazon, Google, and Microsoft. CFO Colette Kress confirmed first samples shipped this week.

The Half-Trillion-Dollar Forecast Was Too Low

Here’s the line that should make competitors nervous. Kress told analysts:

“We expect sequential revenue growth throughout calendar 2026, exceeding what was included in the $500 billion Blackwell and Rubin revenue opportunity we shared last year.”

Nvidia is saying its own half-trillion-dollar forecast was too conservative. When your latest guidance embarrasses your previous guidance, you're in rarefied territory.

Networking revenue tells a similar story — up 263% year-over-year to $11 billion. NVLink interconnects and Spectrum-X Ethernet switches are the plumbing connecting hundreds of GPUs in massive AI clusters, and demand is exploding.

Why the Stock Barely Moved

Here’s the paradox of being the most important company in tech: even crushing expectations doesn’t move the needle when your market cap is $4.8 trillion and the stock is priced for perfection.

Nvidia shares popped 4% in after-hours trading, then faded to a gain of less than 1%. The stock is still down about 7% from its October high. “I’m just worried about how much upside from here they can actually get,” said Chris Rolland at Susquehanna — even while calling the guidance “a monster.”

When excellence is the baseline, outperformance gets taken for granted. That’s the tax on dominance.

The Cracks in the Armor

Even amid record-smashing results, constraints are visible:

Memory shortages remain a real headwind. Global high-bandwidth memory (HBM) supply can’t keep pace with AI demand, forcing Nvidia to make hard choices. The gaming division — once the company’s identity — saw revenue drop 13% sequentially to $3.7 billion. Analysts speculate Nvidia may skip launching a new gaming GPU this year entirely, redirecting scarce memory to higher-margin AI accelerators.

Competition is getting serious. AMD’s MI-series accelerators are improving. Broadcom is making gains with custom silicon. Google’s TPUs keep getting better. Nvidia dominates today, but a $500B+ opportunity attracts serious contenders.

The doomsday backdrop. Days before Nvidia’s earnings, a viral Citrini Research report painted an AI doomsday scenario — mass white-collar unemployment by 2028, collapsing software companies, “Occupy Silicon Valley” protests. The S&P 500 dropped over 1%. Software stocks like Datadog, CrowdStrike, and Zscaler each fell more than 9%.

The tension is real and unresolvable: Nvidia’s results prove companies are spending unprecedented sums on AI capability, while the market is simultaneously pricing in anxiety about what happens when that capability actually works. Two sides of the same coin.

What This Means

For businesses: AI infrastructure is getting cheaper and more efficient with every generation. Vera Rubin’s 10x efficiency gain means the cost per inference will keep dropping. If you’ve been waiting for the right economics, they’re arriving.
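To see how performance per watt flows through to inference economics, here is a toy calculation. The power price and baseline throughput below are invented placeholders purely for illustration; the only number taken from the article is the 10x efficiency ratio:

```python
# Toy model: a 10x performance-per-watt gain cuts the energy cost of a
# fixed amount of inference work by 10x. The $/kWh price and the
# tokens-per-joule baseline are hypothetical; only the 10x is Nvidia's claim.

power_price_per_kwh = 0.08        # assumed data-center electricity price, $/kWh
baseline_tokens_per_joule = 50.0  # hypothetical Blackwell-class throughput
rubin_tokens_per_joule = baseline_tokens_per_joule * 10  # the claimed 10x

def energy_cost_per_million_tokens(tokens_per_joule):
    joules = 1_000_000 / tokens_per_joule
    kwh = joules / 3_600_000  # 1 kWh = 3.6 million joules
    return kwh * power_price_per_kwh

blackwell_cost = energy_cost_per_million_tokens(baseline_tokens_per_joule)
rubin_cost = energy_cost_per_million_tokens(rubin_tokens_per_joule)

print(f"Energy cost per 1M tokens: ${blackwell_cost:.6f} -> ${rubin_cost:.6f}")
```

Whatever the absolute numbers turn out to be, the ratio is what matters: same work, one-tenth the energy bill, which is why perf-per-watt is the spec the hyperscalers care about.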

For workers: The companies buying Vera Rubin racks aren’t doing it for decoration. They’re doing it because AI is becoming good enough to automate tasks that previously required human judgment. The scale of this investment is the warning signal.

For investors: Nvidia remains the picks-and-shovels play of the AI buildout. But at current valuations, the risk-reward is more nuanced than two years ago. The Vera Rubin cycle is the next catalyst — GTC 2026 on March 16th will fill in the details.

What’s Next

Jensen Huang takes the stage at Morgan Stanley’s TMT conference on March 4th, followed by the main event at GTC in San Jose on March 16th. Expect the full Vera Rubin roadmap, updates on Nvidia’s $500 billion US manufacturing commitment through 2029, and — knowing Huang — at least one surprise nobody saw coming.

We’re in the middle of the largest infrastructure buildout since the internet. Nvidia is supplying the shovels. Whether this buildout creates the future AI optimists promise or the disruption pessimists fear is the $193.7 billion question — and the answer is probably both.