The $500 billion Stargate project was supposed to be the physical backbone of the AI revolution. Instead, it’s becoming a cautionary tale about what happens when chips evolve faster than concrete can cure.

Two Mega-Deals, Two Months, Two Collapses

OpenAI and Oracle just scrapped plans to expand their flagship data center in Abilene, Texas. Oracle had already spent billions on hardware, secured land, hired staff, and started construction on a 600-megawatt expansion. Then OpenAI walked.

The reason? Nvidia’s chips are evolving faster than data centers can be built.

Here’s the math that killed the deal: standing up a data center takes 12 to 24 months. Nvidia ships new chip generations every year. The Abilene expansion was designed around Blackwell processors, but by the time power comes online, Nvidia’s Vera Rubin architecture will be available — offering five times the inference performance.
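That mismatch can be put in back-of-the-envelope terms. The figures below come straight from the article (12-to-24-month builds, yearly chip generations, a claimed ~5x Blackwell-to-Vera-Rubin inference gain); the calculation itself is just an illustrative sketch, not anyone's actual deal model:

```python
# Illustrative sketch of the build-time vs. chip-cadence mismatch.
# Inputs are the article's figures; the model is a simplification.

build_months = 18            # midpoint of the 12-24 month build range
chip_cadence_months = 12     # Nvidia ships a new chip generation yearly
gen_perf_multiplier = 5.0    # claimed Blackwell -> Vera Rubin inference gain

# Chip generations that ship while the data center is still under construction
gens_missed = build_months // chip_cadence_months

# Relative inference performance of the hardware you committed to at signing,
# versus what is shipping on the day your power finally comes online
relative_perf = 1 / (gen_perf_multiplier ** gens_missed)

print(f"Generations missed during build: {gens_missed}")
print(f"Day-one hardware runs at {relative_perf:.0%} of current-gen performance")
```

On these assumptions, the facility opens with hardware delivering a fifth of the inference performance then available, before a single workload runs.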

Why lock into last year’s hardware when next year’s chips make it obsolete? Logical for OpenAI. Catastrophic for Oracle.

This wasn’t an isolated incident. A separate $100 billion OpenAI-Nvidia infrastructure deal also collapsed in February. Two major AI infrastructure agreements, dead within two months.

Oracle Is the Canary in the Coal Mine

Unlike Google, Amazon, and Microsoft — which fund AI infrastructure from massive cash-generating businesses — Oracle is financing its buildout primarily with debt. Over $100 billion and counting.

The stock tells the story: down 23% year-to-date, more than 50% off its September peak. Oracle’s partner Blue Owl has declined to fund an additional facility. The company is reportedly preparing to cut up to 30,000 jobs. Investors are staring at a $50 billion capex plan paired with negative free cash flow.

But Oracle’s pain points to a systemic risk. GPU depreciation is a ticking time bomb for the entire AI infrastructure market. Every data center deal signed today carries the risk of committing to hardware that’s outdated before the power is connected. Build around Blackwell, and by the time you’re operational, your competitors are running Vera Rubin clusters that make yours look like a calculator.

The UK’s Phantom Data Centers

The cracks aren’t just American. A Guardian investigation exposed that the UK’s flagship AI infrastructure deals — many announced during Trump’s state visit last September — are largely illusory.

The most emblematic case: a site in Loughton, Essex, billed as “the largest UK sovereign AI datacentre” by the end of 2026. A year after the announcement, it was still a scaffolding yard. The company behind it, Nscale, only recently confirmed it had actually purchased the land — eight months after publicly claiming it had. Planning permission? Still pending. Realistic opening? Mid-2027 at the earliest.

Future data center leases by the largest cloud companies are up 340% in two years, now topping $700 billion globally. That’s an extraordinary amount of money riding on the assumption that AI will supercharge economic productivity. Meanwhile, the UK just reported zero GDP growth for January.

The disconnect is staggering.

Silicon Valley’s “Good Bubble” Copium

Here’s where it gets fascinating. Rather than denying the bubble, Silicon Valley is embracing it.

Jeff Bezos calls AI a “good” kind of bubble. Sam Altman predicts AI will be a “huge net win” even if “a phenomenal amount of money” is lost. Hemant Taneja, CEO of General Catalyst, said it plainly: “Bubbles are good.”

Their favorite analogy is railroads. Yes, speculative railroad investment caused devastating depressions. But America got world-class freight infrastructure out of the wreckage. The dot-com crash was painful, but the fiber-optic cables laid during the frenzy became the backbone of the modern internet.

Even Mary Daly, president of the San Francisco Fed, has suggested AI is a “good bubble.” When the Federal Reserve is endorsing your bubble, we’re in uncharted territory.

The Railroad Analogy Has a Fatal Flaw

There’s a critical problem the bubble defenders tend to gloss over: railroad tracks last for decades. Fiber-optic cables laid during the dot-com boom are still carrying traffic 25 years later.

AI chips become obsolete in 12 months.

This means the AI bubble might leave behind less useful infrastructure than previous tech manias — not more. When your hardware has a 12-month shelf life, you’re not building railroads. You’re building sandcastles at high tide.
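The asset-lifespan gap can be made concrete with the article's own numbers. A minimal sketch, assuming a 25-year useful life for fiber versus a roughly 12-month competitive shelf life for AI chips, and an 18-month build time for both (the helper function and exact figures are illustrative, not sourced):

```python
# Illustrative comparison of how much useful life bubble-era infrastructure
# retains once it is actually online. Lifespans are rough: the article cites
# ~25 years for dot-com fiber and a ~12-month shelf life for AI chips.

def useful_years(asset_life_years: float, build_years: float) -> float:
    """Years of competitive service remaining once construction finishes."""
    return max(asset_life_years - build_years, 0.0)

fiber = useful_years(asset_life_years=25.0, build_years=1.5)
gpus = useful_years(asset_life_years=1.0, build_years=1.5)

print(f"Fiber: {fiber} competitive years after build")
print(f"GPUs:  {gpus} competitive years after build")
```

Under these assumptions the fiber keeps over two decades of competitive life, while the GPUs are already behind the curve before the doors open, which is the crux of why the railroad comparison breaks down.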

OpenAI is currently worth more than Toyota, Coca-Cola, and Disney combined. Big Tech plans to spend $650 billion on AI infrastructure this year alone. US mega-cap AI spending is expected to hit $1.1 trillion between 2026 and 2029.

And yet, no AI company has presented a convincing business model that justifies these numbers.

What Actually Matters

The question isn’t whether there’s a bubble — that’s basically settled. The question is what happens when it pops.

If you’re in AI infrastructure: The chip upgrade cycle is simply too fast for traditional data center timelines. Companies building on last-generation hardware will get burned.

If you’re investing: The divergence between AI hype and AI fundamentals should concern you. The smartest money isn’t denying the bubble — it’s trying to be on the right side when the music stops.

If you’re building with AI: You’re probably fine. Even if the investment side crashes, the technology itself isn’t going anywhere. The internet survived the dot-com bust. AI models will keep improving. The companies building them might change dramatically, but the capability curve isn’t reversing.

The real risk isn’t that AI is overhyped. It’s that the financial infrastructure around it is fragile in ways we haven’t reckoned with. When your flagship $500 billion project starts shedding deals because chips go obsolete before the power grid is connected, that’s not a supply chain hiccup.

That’s a structural problem. And no amount of “good bubble” rhetoric changes the math.