On February 27, 2026, the Trump administration blacklisted Anthropic — banned it from every federal contract — for refusing to let the Pentagon use its AI without restrictions on autonomous weapons and mass surveillance. That same evening, Sam Altman posted on X that OpenAI had just signed a deal to deploy its models on the Pentagon’s classified network. Hours earlier, OpenAI had closed the largest private funding round in history: $110 billion.

One company said no and got destroyed. The other said yes and got everything.

The Deal the Pentagon Wanted All Along

Anthropic had been the first AI lab on the Pentagon’s classified network. When the contract came up for renewal, the Department of War wanted unrestricted access across “all lawful use cases.” Anthropic drew two lines: no mass surveillance of Americans, no fully autonomous weapons. Talks collapsed. Defense Secretary Pete Hegseth designated Anthropic a “Supply-Chain Risk to National Security” — a label normally reserved for foreign adversaries like Huawei. Trump ordered every federal agency to purge Anthropic’s technology immediately.

Then OpenAI walked in.

Altman claims his company secured the exact same restrictions Anthropic was fighting for. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” he wrote. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

Read that again. The Pentagon allegedly accepted from OpenAI the same terms it refused from Anthropic.

Same Red Lines, Different Treatment

This is the part nobody can explain cleanly.

If the Pentagon was willing to accept restrictions on autonomous weapons and surveillance — the exact restrictions Anthropic demanded — then why was Anthropic blacklisted? Why the nuclear option of a supply-chain risk designation?

A few possible reads:

It was never about the terms. Government officials had been publicly attacking Anthropic for months as “overly concerned with AI safety.” The contract dispute may have been the excuse, not the cause. Anthropic’s CEO Dario Amodei hadn’t played the Washington game. Altman had — attending events, building relationships, positioning OpenAI as a willing partner.

It was about leverage. Blacklisting Anthropic first made OpenAI the only game in town. When you eliminate the competition before negotiations start, you don’t need to haggle.

It was a message. To every AI company watching: cooperate first, negotiate second. The safety terms might be identical on paper, but the lesson is about posture, not policy.

Altman, to his credit, seemed to feel the awkwardness. He publicly called for the Pentagon to offer the same terms to all AI companies and said he wanted things to “de-escalate away from legal and governmental actions.” Whether that’s genuine concern or smart PR after winning is up to you.

$110 Billion Buys a Lot of Silence

The funding round makes the Pentagon deal look almost like a footnote. Almost.

Amazon led with $50 billion. Nvidia put in $30 billion. SoftBank added another $30 billion. The round values OpenAI at $730 billion — more than double its valuation from eleven months ago. To put $110 billion in perspective: it’s roughly the GDP of Morocco. It’s nearly three times OpenAI’s previous record-setting raise.

But here’s the detail that matters: up to $35 billion of Amazon’s investment may be contingent on OpenAI either achieving AGI or completing an IPO by year-end. That’s not just a bet on a company. That’s a bet on a civilizational milestone, with a financial exit as the backup plan.

The infrastructure commitments are measured in gigawatts now. Three gigawatts of Nvidia inference capacity. Two gigawatts of AWS Trainium compute. We’ve gone from “how many GPUs” to “how many power plants.”

The Day AI Became a Political Weapon

Zoom out and the picture is ugly.

An American AI company got banned from all federal business — not for a security breach, not for fraud, not for selling secrets to China — but for insisting on restrictions around autonomous weapons. The company that stepped over its body got the contract and the biggest check in private-company history on the same day.

This isn’t an AI safety story anymore. It’s a political power story. The precedent is set: AI policy in America will be shaped by who plays ball with the administration, not by who builds the safest technology.

Google DeepMind employees are already writing internal letters opposing military AI work. After watching what happened to Anthropic, how many of those letters do you think will get sent?

What OpenAI Actually Won

Let’s be specific about what February 27 gave OpenAI:

  • $110 billion in capital and infrastructure — enough to outspend any competitor for years
  • Pentagon classified network access — the ultimate government credibility stamp
  • Elimination of its closest rival from the entire federal market
  • An IPO narrative that now includes defense contracts, massive infrastructure partnerships, and a $730 billion valuation

Anthropic got a court fight and a designation that puts it in the same category as foreign threats to national security. It says it’s “deeply saddened” and will challenge the designation in court.

The Question That Won’t Go Away

Altman says OpenAI’s safety principles are in the contract. Maybe they are. Maybe the Pentagon will honor them. Maybe “human responsibility for the use of force” means something enforceable and not just a line in a PDF nobody reads.

But here’s what we know for certain: the company that refused to proceed without guaranteed safety restrictions got blacklisted. The company that showed up ready to deal got the contract on terms that may or may not be identical. We’re trusting the word of the winner.

The question isn’t whether AI will be used for defense. That’s been settled for years. The question is whether safety guarantees negotiated in the shadow of a competitor’s destruction are worth anything at all.


Sources: TechCrunch, CNBC, Reuters, NPR, Bloomberg