There’s a new unicorn in AI — and it’s literally in orbit.

Starcloud, a Redmond-based startup building solar-powered data centers in space, just announced a $170 million raise at a $1.1 billion valuation. Led by Benchmark and EQT Ventures, the round brings total funding to $200 million and launches what might be the most audacious infrastructure play in AI history.

But Starcloud isn’t alone up there. SpaceX has filed FCC plans for up to one million orbital data center satellites. Blue Origin is circling the same idea. Nvidia is providing the chips. And here’s the kicker — Starcloud already has an H100 GPU running Google’s Gemma model in orbit right now.

Why Earth Isn’t Enough

The math driving this is brutally simple: AI is eating the power grid alive.

Global data center electricity consumption is projected to more than double by 2030. In the U.S., AI-related energy demand has triggered political backlash — from Bernie Sanders calling for data center moratoriums to local communities blocking new facilities over water usage and grid strain.

Meanwhile, permitting for new terrestrial data centers takes years. Environmental reviews, water rights, grid interconnection queues — bottlenecks are multiplying faster than capacity can be built.

Space solves several of these problems at once. In low Earth orbit, solar irradiance is roughly 40% higher than on Earth’s surface, since no atmosphere filters the sunlight. Cooling needs no water: waste heat is radiated directly into the vacuum of space. And there’s no NIMBY opposition 500 kilometers up.
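As a rough sanity check on that 40% irradiance figure, using textbook reference values rather than anything from Starcloud: the solar constant above the atmosphere is about 1,361 W/m², versus roughly 1,000 W/m² at the surface under a clear midday sky.

```python
# Back-of-envelope check on the ~40% orbital irradiance advantage.
# Reference values are standard physics figures, not Starcloud's numbers.
SOLAR_CONSTANT_W_M2 = 1361.0   # irradiance in LEO, above the atmosphere
SURFACE_PEAK_W_M2 = 1000.0     # typical clear-sky peak at ground level

gain = SOLAR_CONSTANT_W_M2 / SURFACE_PEAK_W_M2 - 1.0
print(f"Orbital irradiance advantage: ~{gain:.0%}")   # ~36%, i.e. roughly 40%
```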

Starcloud CEO Philip Johnston told Reuters that orbital data centers could achieve 10x lower energy costs than terrestrial counterparts. That’s not a typo.

88,000 Satellites and Counting

Starcloud’s vision is staggering in scale. The company plans an 88,000-satellite constellation forming a distributed data center network in orbit. Their long-term target: a 5-gigawatt facility with solar and cooling panels measuring roughly 4 kilometers on each side.
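Do those numbers hang together? A back-of-envelope check, assuming the entire 4 km square collects sunlight at orbital irradiance with a 25% end-to-end panel efficiency (both assumptions are mine, and the same area also has to fit radiators), lands in the right ballpark:

```python
# Rough sanity check on "5 GW from a ~4 km x 4 km array".
# Assumptions (mine, not Starcloud's): the full square collects sunlight,
# ~1361 W/m^2 irradiance in orbit, ~25% end-to-end panel efficiency.
side_m = 4_000.0
area_m2 = side_m ** 2              # 16 km^2 of collection area
irradiance_w_m2 = 1361.0
panel_efficiency = 0.25

electrical_w = area_m2 * irradiance_w_m2 * panel_efficiency
print(f"Implied output: {electrical_w / 1e9:.1f} GW")   # ~5.4 GW
```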

They’re not just dreaming — they’re executing. In November 2025, Starcloud launched its first satellite carrying an Nvidia H100 chip, 100x more powerful than any GPU previously sent to space. The satellite successfully trained NanoGPT on Shakespeare’s complete works, then ran Google’s Gemma LLM — the first time a large language model operated on a high-powered GPU in orbit.

“Greetings, Earthlings!” the orbital Gemma instance wrote back. “I’m here to observe, analyze, and perhaps, occasionally offer a slightly unsettlingly insightful commentary.”

Next launch: October 2026, featuring AWS Outposts infrastructure — literally putting Amazon Web Services in space. They’re also working with Nvidia and Google Cloud on future deployments.

Their investor list reads like a who’s who: Andreessen Horowitz, In-Q-Tel (the CIA’s venture capital arm), Benchmark, and EQT Ventures. When the intelligence community’s VC fund backs your orbital compute play, the signal is clear — this is a national security bet as much as a tech one.

The SpaceX Problem

If Starcloud is the scrappy startup, SpaceX is the 800-pound gorilla that just entered the ring.

In January 2026, SpaceX filed with the FCC for a constellation of up to one million satellites designed as orbital data centers. The filing came after SpaceX’s acquisition of Elon Musk’s xAI, merging rocket expertise with AI ambition into a vertically integrated entity.

SpaceX described the system as “a high-bandwidth, optically linked constellation of solar-powered satellites with unprecedented computing capacity to power advanced AI workloads.” With Starship’s dramatically lower launch costs and SpaceX’s proven satellite manufacturing at Starlink scale, the structural advantages are hard to match.

Blue Origin has expressed similar ambitions with fewer public details. When the world’s two richest men converge on the same infrastructure thesis, it’s not a niche bet.

The Reality Check

Let’s be honest about the challenges.

Latency is fundamental. Light takes 3-4 milliseconds from LEO to ground — one way. For real-time applications, that’s an unavoidable penalty. Orbital compute is best suited for batch processing, AI training, and asynchronous inference rather than consumer-facing chatbots.
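The penalty falls straight out of the speed of light. A minimal sketch, with slant ranges picked for illustration rather than taken from any real constellation geometry:

```python
# One-way light travel time from a LEO satellite at a few slant ranges.
# Ranges are illustrative: ~550 km is straight overhead, longer values
# correspond to lower elevation angles during a typical pass.
C_KM_PER_S = 299_792.458   # speed of light in vacuum

for slant_km in (550, 1000, 2000):
    delay_ms = slant_km / C_KM_PER_S * 1000
    print(f"{slant_km:>5} km -> {delay_ms:.1f} ms one way")
# 550 km -> 1.8 ms, 1000 km -> 3.3 ms, 2000 km -> 6.7 ms
```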

Cooling isn’t as easy as it sounds. In vacuum there’s no air or water to dump heat into; the only way to shed it is to radiate it away. At temperatures electronics can tolerate, a radiator rejects only a few hundred watts per square meter, so power-dense GPU clusters need enormous thermal panels.
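A rough sketch of that constraint using the Stefan-Boltzmann law, assuming a 300 K radiator with emissivity 0.9, one-sided emission, and no absorbed sunlight or Earth infrared (all assumptions mine, not Starcloud figures):

```python
# Why radiators get big: radiated power P = eps * sigma * A * T^4.
# Assumptions (illustrative only): emissivity 0.9, radiator at 300 K,
# one-sided emission, no absorbed sunlight or Earth infrared.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
eps, temp_k = 0.9, 300.0

flux_w_m2 = eps * SIGMA * temp_k ** 4    # ~413 W/m^2 of rejected heat
for waste_heat_w in (1e6, 1e9):          # 1 MW (rack scale), 1 GW (station scale)
    area_m2 = waste_heat_w / flux_w_m2
    print(f"{waste_heat_w / 1e6:>6.0f} MW -> {area_m2:,.0f} m^2 of radiator")
# ~2,400 m^2 per megawatt, ~2.4 km^2 per gigawatt under these assumptions
```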

Maintenance is essentially impossible. Terrestrial data centers need constant hands-on attention. In orbit, a hardware failure means a dead satellite.

Space debris is a growing concern. Adding tens of thousands of satellites to an already crowded environment raises collision risks. Even researchers are scrambling to model the impact — “in our literal spare time,” as one scientist told Ars Technica, because the companies aren’t funding the analysis.

Launch costs remain high. Even at SpaceX’s optimistic Starship projections of $10-20 million per flight, deploying a constellation at this scale would take hundreds to thousands of launches, putting transportation costs in the billions to tens of billions of dollars.
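How sensitive that bill is to packing density and flight price, using the 88,000-satellite figure from earlier and my own guesses for satellites per Starship flight:

```python
# Transportation cost sensitivity under stated assumptions.
# The 88,000-satellite count is Starcloud's stated constellation size;
# satellites-per-launch and per-flight prices below are assumptions, not quotes.
satellites = 88_000

for sats_per_launch in (50, 100):
    for cost_per_launch in (10e6, 20e6):
        launches = satellites / sats_per_launch
        total = launches * cost_per_launch
        print(f"{sats_per_launch} sats/flight at ${cost_per_launch / 1e6:.0f}M: "
              f"{launches:,.0f} launches, ${total / 1e9:.1f}B")
# Spans roughly $8.8B to $35.2B: billions to tens of billions, depending on packing.
```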

Starcloud believes falling launch costs will make orbital compute cost-competitive by 2028 or 2029. That’s an aggressive timeline, but launch costs have repeatedly fallen faster than forecasts.

What This Actually Means for AI

Strip away the sci-fi aesthetics and orbital data centers address a genuine bottleneck: AI’s insatiable hunger for compute is running into physical limits on Earth.

The most likely near-term use case is massive-scale AI training runs. Training frontier models requires weeks of continuous computation across thousands of GPUs. Latency matters less when you’re grinding through trillions of tokens. If orbital facilities deliver significantly cheaper energy for these marathon runs, the economics could work even with space operations overhead.

There’s also a sovereignty angle. Countries without massive domestic energy grids could lease orbital compute capacity without building terrestrial infrastructure. In-Q-Tel’s investment hints at the national security implications — orbital compute is inherently harder to attack, sanction, or regulate than ground-based facilities.

And the environmental argument is real. If AI’s energy consumption doubles regardless, pushing some demand to solar-powered orbital facilities reduces pressure on terrestrial grids and avoids the water consumption that makes data centers so controversial in drought-prone regions.

The Bottom Line

Let’s keep perspective. Starcloud has one satellite with one GPU. SpaceX has a filing. We’re at the Wright Brothers stage, not the 747 stage.

But when Benchmark, Andreessen Horowitz, and the CIA’s venture fund are writing checks, when SpaceX is filing million-satellite constellation plans and Blue Origin is circling the same idea, and when Nvidia silicon is already running workloads in orbit, the orbital compute thesis has graduated from “interesting thought experiment” to “active infrastructure buildout.”

The $1.1 billion question isn’t whether AI compute will move to space. It’s how much, how fast, and who captures the value when it does.