The U.S. military just took its biggest step toward becoming an “AI-first fighting force.” On May 1st, the Department of Defense announced agreements with eight AI companies — SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, AWS, and Oracle — to deploy frontier AI models on the Pentagon’s most sensitive classified networks.

But the real headline? The company that’s not on the list: Anthropic.

What Actually Happened

The agreements authorize all eight companies to deploy AI in the Pentagon’s Impact Level 6 (IL6) and IL7 network environments — the most sensitive classified systems, where war plans are drafted and intelligence is analyzed.

In just five months, more than 1.3 million DoD personnel have used the Pentagon’s AI platform, GenAI.mil, generating tens of millions of prompts and deploying hundreds of thousands of agents. Those are staggering adoption numbers for any enterprise, let alone the world’s largest bureaucracy.

The official goal: “streamline data synthesis, elevate situational understanding, and augment warfighter decision-making in complex operational environments.”

Translation: AI helps commanders process intelligence faster, identify targets, and make split-second decisions in the fog of war.

Why Anthropic Got Frozen Out

Anthropic had a unique position — Claude was the first and only frontier AI model on classified networks. Then everything imploded over three words: “any lawful use.”

The Pentagon wanted Anthropic to agree its technology could be used for any lawful purpose. Anthropic wanted explicit guardrails — no fully autonomous weapons, no mass surveillance of Americans. Defense Secretary Pete Hegseth’s position was simple: the Pentagon determines what’s lawful, contractors comply.

The escalation was unprecedented:

  • Anthropic refused the clause
  • The Trump administration severed ties
  • The Pentagon designated Anthropic a “supply chain risk” — a label previously reserved for companies linked to foreign adversaries like China
  • Anthropic sued the federal government
  • A federal judge blocked the supply chain risk designation last month

Think about that supply chain label for a second. The Pentagon tried to treat an American AI company the way it treats Huawei. That’s not a contractual dispute — it’s a warning shot to every tech company considering pushback.

The Companies That Said Yes

The list tells a story.

OpenAI has been aggressively chasing government contracts — a dramatic pivot for a company that once worried enough about AI safety to initially withhold GPT-2 from public release.

Google deepened a Pentagon relationship that would’ve been unthinkable in 2018, when employees revolted over Project Maven. That feels like ancient history now.

SpaceX adds another dimension to Elon Musk’s growing government entanglement. Its inclusion makes sense for satellite intelligence, but the optics of a close Trump ally landing classified AI contracts deserve scrutiny.

Reflection AI is the wildcard — a two-year-old startup seeking a $25 billion valuation without a public model. It’s backed by Nvidia and 1789 Capital, the venture fund where Donald Trump Jr. is a partner. Make of that what you will.

Microsoft, AWS, Nvidia, and Oracle provide the computing backbone. No surprises there.

The $54 Billion Question

The Pentagon requested $54 billion for autonomous weapons development alone. The One Big Beautiful Bill Act earmarked substantial funding for AI and offensive cyber ops. But the announcement was vague about specifics.

Likely use cases include target identification from surveillance feeds, intelligence synthesis across massive datasets, logistics optimization, offensive and defensive cyber operations, and predictive threat assessment.

Helen Toner at Georgetown’s CSET raised the critical question: “How do you roll out these tools rapidly for strategic advantage while recognizing you need to train operators and make sure they don’t over-trust them?”

Nobody answered.

The Gaza Shadow

This doesn’t happen in a vacuum. During Israel’s war in Gaza, U.S. tech companies quietly provided AI tools for target tracking. Civilian casualties soared, fueling fears that AI-powered targeting contributed to those deaths.

Every signing company accepted the “any lawful use” clause, meaning the Pentagon — not the AI companies — decides where the ethical lines are. Given this administration’s record on oversight, that should concern everyone.

Is Anthropic’s Gamble Paying Off?

Despite the freeze-out, Anthropic may not be finished. CNN reported the White House reopened discussions recently, partly driven by Anthropic’s Mythos — a cybersecurity tool so powerful it can both find vulnerabilities and provide attack roadmaps. CEO Dario Amodei visited the White House last month.

Pentagon officials believe signing with rivals will pressure Anthropic back to the table. Classic leverage: show a holdout that the world moves on without them.

But here’s the thing — Anthropic proved it’s possible for a major AI company to say no to the Pentagon and survive. The federal judge’s injunction demonstrated legal checks still exist. Whether the stand is principled or strategic positioning, Anthropic is the only major AI company that drew a line.

The Uncomfortable Reality

Strip away the corporate drama: the United States military is explicitly transforming into an AI-first fighting force. Eight of the world’s most powerful AI companies will deploy their most advanced models on the most sensitive classified networks in existence. Over a million personnel already use AI daily.

This isn’t a pilot program. This is the new reality of American military power — and it happened with remarkably little public debate.

The questions we should be asking aren’t about which companies signed. They’re about what happens when AI models that hallucinate, exhibit biases, and can be adversarially manipulated are embedded in systems making life-and-death decisions at the speed of war.

Anthropic’s refusal might look like obstinacy today. In five years, when we’re evaluating how AI transformed warfare, it might look like the last company that asked the right questions before it was too late.