The AI industry has a flair for the dramatic, but even by 2026 standards, Anthropic’s “Code with Claude” developer day in San Francisco was something else. In a single afternoon, the company announced it’s renting the entirety of Elon Musk’s Colossus 1 data center and unveiled a feature called “dreaming” that lets its AI agents review their own work and self-improve between sessions.
Both announcements signal very different but equally important shifts in the AI race.
220,000 GPUs From Its Biggest Critic’s Company
The headline number is staggering: Anthropic will use the full computing power of SpaceX’s Colossus 1 facility in Memphis, Tennessee. That’s more than 220,000 Nvidia GPUs and 300 megawatts of new capacity — roughly the power consumption of a small city.
This stacks on top of existing deals with Amazon (up to 5 gigawatts), Google and Broadcom (5 GW, coming 2027), Microsoft and Nvidia ($30 billion in Azure capacity), and a $50 billion infrastructure investment with Fluidstack.
The immediate payoff for users? Anthropic is doubling Claude Code’s rate limits for paid plans, removing peak-hour usage caps for Pro and Max accounts, and significantly increasing API rate limits for Claude Opus models.
From “Misanthropic” to “Good for Humanity”
Three months ago, Elon Musk was publicly calling Anthropic’s AI biased and predicting the company would inevitably become “misanthropic.” Now? After spending time with Anthropic’s leadership, Musk posted that their work to ensure Claude is “good for humanity” impressed him, adding, “No one set off my evil detector.”
Read between the lines. SpaceX is gearing up for an IPO, and a marquee AI customer like Anthropic on the books is compelling for investors. Meanwhile, xAI has shifted training to the newer Colossus 2, leaving Colossus 1 as spare capacity SpaceX can monetize.
Musk compared it to how SpaceX launches satellites for competitors — “fair terms and pricing.” It’s a business move dressed up as an ideological détente, and it’s smart for both sides.
What Is “Dreaming” and Why Should You Care?
The second announcement is arguably more interesting. Anthropic introduced “dreaming” — a feature for Claude Managed Agents that lets AI systems review their work between sessions, spot patterns, identify mistakes, and update memory files that store user preferences and context.
Think of it like your brain during sleep: consolidating memories, processing the day’s experiences, filing away what matters. Between active sessions, a dreaming agent can:
- Review recent workflows holistically — seeing patterns individual sessions miss
- Prune stale information from memory
- Merge duplicate notes and resolve contradictions
- Convert relative dates to absolute ones — so “yesterday’s decision” becomes “the May 5th decision”
This matters because one of the biggest limitations of current AI agents is their lack of persistent learning. Every session starts from scratch. Context windows are finite. Important details get lost. Dreaming is Anthropic’s answer — not by making the model itself learn, but by having the agent curate its own external memory.
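The curation steps above amount to plain data hygiene on a memory file: prune, deduplicate, normalize. Here is a minimal, hypothetical Python sketch of that loop. The memory format, the `curate` function, and the 90-day staleness cutoff are all illustrative assumptions, not Anthropic's actual implementation:

```python
# Hypothetical sketch of the "dreaming" curation pass described above.
# Assumes memory is a list of note dicts: {"text": ..., "last_used": "YYYY-MM-DD"}.
import re
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)  # assumed staleness cutoff

def resolve_relative_dates(text: str, today: date) -> str:
    """Turn 'yesterday' into an absolute date so the note stays unambiguous."""
    d = today - timedelta(days=1)
    return re.sub(r"\byesterday\b", f"{d.strftime('%B')} {d.day}, {d.year}", text)

def curate(notes: list[dict], today: date) -> list[dict]:
    # 1. Prune notes that haven't been used within the cutoff.
    fresh = [n for n in notes
             if today - date.fromisoformat(n["last_used"]) <= STALE_AFTER]
    # 2. Merge duplicates, keeping the most recently used copy.
    merged: dict[str, dict] = {}
    for n in fresh:
        key = n["text"].strip().lower()
        if key not in merged or n["last_used"] > merged[key]["last_used"]:
            merged[key] = n
    # 3. Convert relative dates to absolute ones.
    return [{**n, "text": resolve_relative_dates(n["text"], today)}
            for n in merged.values()]
```

The point of the sketch is that none of this requires retraining the model: the agent edits its own notes, and the model simply reads a cleaner file next session.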
Claude Prompting Claude: The Meta Shift
Boris Cherny, Anthropic’s head of Claude Code, made the vision explicit: “The default isn’t, ‘I’m going to prompt Claude Code.’ The default is now, ‘I will have Claude prompt Claude Code.’”
That’s a significant philosophical shift. Anthropic isn’t building a coding assistant — they’re building a system where AI agents manage other AI agents, learn from collective experience, and improve without human intervention. Add the new “routines” feature for scheduling automated Claude Code actions, and you’ve got the skeleton of a self-improving AI workforce.
This dovetails with Anthropic’s enterprise push. Yesterday brought ten new AI agents for financial services and insurance, plus partnerships with Blackstone and Goldman Sachs for a new AI services venture. Enterprise customers don’t just want smart AI today — they want AI that gets smarter the more it works with their specific codebase and processes.
Orbital Data Centers: Science Fiction Gets Serious
Buried in the announcements: Anthropic and SpaceX have “expressed interest in partnering to develop multiple gigawatts of orbital AI compute capacity.”
Space-based data centers. Multiple gigawatts. In orbit.
The logic is straightforward even if the engineering is daunting. Space offers access to continuous solar power and freedom from terrestrial grid constraints, though cooling is a genuine hurdle rather than a free lunch: in vacuum, heat can only be rejected by radiation, which means enormous radiator arrays. As Flexential CEO Ryan Mallory noted, “The fact that serious companies are even discussing compute capacity in space tells you how aggressively the market is searching for power and scale.”
What This Means for the AI Race
Today’s announcements crystallize several 2026 trends:
The compute arms race enters a new phase. Anthropic’s commitments now span multiple cloud providers, custom chip architectures (AWS Trainium, Google TPUs, Nvidia GPUs), and possibly orbital infrastructure. Training a frontier model on a single cluster is ancient history.
Strange bedfellows are the new normal. Musk leasing compute to Anthropic while suing OpenAI. Google investing $40 billion in Anthropic while competing against them. The industry’s alliance structure looks less like traditional tech and more like Renaissance Italian city-states.
AI agents that learn are coming fast. Dreaming may be in research preview, but the trajectory is clear. Next-gen AI tools won’t just execute tasks — they’ll remember what worked, what didn’t, and adapt.
Coding is the killer app. Anthropic’s willingness to sign massive compute deals and double rate limits is driven primarily by Claude Code demand. OpenAI scaled back Sora to focus on coding. The market has spoken.
The Bottom Line
The SpaceX deal solves Anthropic’s immediate capacity problems and creates an unlikely but pragmatic alliance. Dreaming represents a genuine step toward AI agents that improve with use rather than starting fresh every time. And the orbital data center interest signals just how far companies will go for compute.
The lingering question: do self-improving AI agents represent progress or a new category of risk? If dreaming works as intended, we’ll have AI systems that learn and evolve based on their own assessment of what matters. Anthropic built in human review controls, but the direction is toward more autonomy, not less.
We’re either witnessing the birth of truly persistent AI intelligence — or sleepwalking into giving machines too much self-direction.