On Sunday, OpenAI and Microsoft announced the end of their exclusive partnership. By Monday, OpenAI’s models were live on Amazon Web Services. The speed tells you everything about how long this was in the making.

This isn’t a cloud provider swap. It’s OpenAI declaring independence while juggling missed revenue targets, an $852 billion valuation, a looming IPO, and a concurrent trial against Elon Musk.

What Actually Changed

On April 27, OpenAI and Microsoft jointly announced a restructured deal:

  • Microsoft’s license to OpenAI IP is now non-exclusive — it was exclusive before
  • Microsoft keeps a license to OpenAI models and products through 2032
  • OpenAI pays Microsoft a 20% revenue share until 2030, now subject to a cap
  • Microsoft no longer pays a revenue share to OpenAI — the flow is one-directional now
  • OpenAI is free to sell on any cloud provider

This was a negotiated separation, not a messy divorce. Microsoft keeps OpenAI tech in Copilot and Azure. OpenAI unlocks the massive enterprise market running on AWS.

As OpenAI’s revenue chief Denise Dresser wrote internally: the Microsoft relationship “has also limited our ability to meet enterprises where they are — for many that’s Bedrock.”

AWS Didn’t Waste a Single Day

AWS CEO Matt Garman hosted a launch event in San Francisco on Tuesday with three new offerings on Amazon Bedrock:

OpenAI Models on Bedrock — Including GPT-5.5, released just last week. AWS customers access these alongside Anthropic’s Claude, Meta’s Llama, and others through the same Bedrock interface.
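In practice, "the same Bedrock interface" means the Converse API that Bedrock already uses for Claude and Llama. A minimal sketch of what a request might look like, assuming OpenAI models get a standard Bedrock model identifier (the `openai.gpt-5.5-v1` ID below is hypothetical; real IDs will appear in the Bedrock model catalog once the preview opens):

```python
# Sketch: calling an OpenAI model via Amazon Bedrock's Converse API.
# The model ID is a placeholder assumption, not a published identifier.

def build_converse_request(model_id: str, prompt: str) -> dict:
    """Assemble the kwargs for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

request = build_converse_request(
    "openai.gpt-5.5-v1",  # hypothetical model ID
    "Summarize our Q1 infrastructure spend in three bullets.",
)

# With AWS credentials configured, the actual call would be:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**request)
#   print(response["output"]["message"]["content"][0]["text"])
print(request["modelId"])
```

The point of the uniform interface is that swapping `modelId` to an Anthropic or Meta model leaves the rest of the request untouched, which is exactly the lock-in-avoidance pitch AWS is making.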

Codex on Bedrock — OpenAI’s coding agent, now at 4 million weekly users, runs natively on AWS. Organizations can configure it to use Bedrock as its provider, with usage counting toward existing AWS commitments.

Amazon Bedrock Managed Agents — The most interesting piece. A new service built around OpenAI’s reasoning models, enabling enterprises to build agents with persistent memory, multi-step workflows, and tool use — with AWS handling orchestration, governance, and security.

All three launch in limited preview, with general availability expected in the coming weeks.

Sam Altman couldn’t attend — he was across the Bay Bridge in Oakland, in court for the Musk trial. He sent a recorded message instead.

The Revenue Problem Nobody Wants to Talk About

The same day as the AWS launch, the Wall Street Journal reported that OpenAI has missed multiple internal targets for users and revenue.

The company targeted one billion weekly active ChatGPT users by end of 2025. It hasn’t hit that. Monthly revenue targets earlier this year were missed too, reportedly due to competitive pressure from Anthropic in coding and enterprise, and Google’s Gemini surge.

This matters because OpenAI raised $122 billion in March 2026 — the largest private tech fundraise in history — at an $852 billion valuation. With an IPO coming, growth needs to accelerate, not stall.

The AWS deal starts looking less like strategic expansion and more like a necessary survival move. OpenAI can’t afford to leave the majority of enterprise cloud spending on the table.

Altman and CFO Sarah Friar called the WSJ report “ridiculous.” The numbers will speak for themselves soon enough.

The $600 Billion Backdrop

This reshuffling happens against a staggering number: the four major hyperscalers are on track to spend a combined $600 billion on AI infrastructure in 2026.

Wall Street is getting nervous. As earnings season kicks off — Amazon reports Q1 results this week — investors want receipts. When does all this spending generate proportional returns?

The OpenAI-AWS deal partly answers that pressure. For Amazon, offering OpenAI models gives Bedrock customers fewer reasons to leave. For OpenAI, AWS’s enterprise customer base is a growth lever it desperately needs.

Everyone Is Hedging Now

Zoom out and the realignment is striking:

  • OpenAI → Now on AWS and Oracle, plus Azure
  • Microsoft → Building agent offerings powered by Anthropic’s Claude
  • Amazon → Hosting OpenAI, Anthropic, Meta, and its own models on Bedrock
  • Anthropic → Partnered with AWS, Google Cloud, and now Microsoft

The era of exclusive AI partnerships is dead. Every major player is diversifying to avoid dependence on a single partner. When today’s dominant model could be tomorrow’s also-ran, exclusivity is a liability.

For enterprises, this is straightforwardly good news. More choice, more competition, and the ability to pick the best model for each use case without being locked into one cloud ecosystem.

The Practical Takeaway

Model access just leveled up. You no longer have to choose between AWS infrastructure and OpenAI’s models. The walls between AI providers and cloud platforms are coming down.

Agent infrastructure is the new battleground. AWS didn’t just add OpenAI models; it launched a managed agents service. The industry consensus is forming: the next wave of AI value isn’t chatbots, it’s autonomous agents doing real work.

Watch the spending. $600 billion in AI capex is unprecedented. If returns don’t materialize, expect a correction that ripples through the entire tech sector.

What Comes Next

The OpenAI-AWS partnership represents a maturing industry where pragmatism replaces hype-fueled exclusivity. OpenAI needs revenue growth. AWS needs the best models. Microsoft needs freedom to work with Anthropic. Everyone benefits from optionality.

But the missed revenue targets cast a long shadow. OpenAI’s $852 billion valuation assumes hockey-stick growth that isn’t materializing on schedule. The AWS expansion opens doors, but the fundamental question remains: can any AI company justify these astronomical valuations?

Amazon earnings drop this week. Microsoft follows soon after. And in an Oakland courtroom, Sam Altman is fighting a different battle entirely. The next few months will reveal whether this industry is building on bedrock — or sand.