The deadline is today. By 5:01 PM Friday, Anthropic must either grant the U.S. military unrestricted access to Claude or be labeled a national security risk and blacklisted from all government contracts.
Anthropic’s answer? No.
CEO Dario Amodei published a blog post late Thursday declaring that Anthropic “cannot in good conscience accede” to the Pentagon’s demands. The company is walking away from a $200 million defense contract rather than remove two guardrails: a ban on using Claude for mass domestic surveillance and a prohibition on fully autonomous weapons systems.
This isn’t a contract dispute. It’s the moment the AI industry has to answer a question it’s been dodging: who decides how the most powerful AI systems get used?
From Venezuela to the Ultimatum
The roots trace back to January, when the military used Claude during the operation to capture former Venezuelan President Nicolás Maduro. The AI was deployed through Anthropic’s partnership with Palantir, which runs Claude on the Pentagon’s classified networks.
When Anthropic found out, it did something that apparently infuriated Pentagon officials: it asked questions. Defense Secretary Pete Hegseth and others bristled at a tech company second-guessing military operations.
The tension escalated fast. On Tuesday, Hegseth summoned Amodei to the Pentagon with a blunt ultimatum: sign a document granting unrestricted access to Claude by Friday evening, or face consequences. Those consequences aren’t trivial — the Pentagon threatened to invoke the Defense Production Act, a wartime law that would give the president control over Anthropic’s resources, and to designate the company a supply chain risk.
Two Red Lines
Anthropic’s position is surprisingly narrow. The company isn’t refusing to work with the military — Claude is already on classified Pentagon networks. The two restrictions:
No mass domestic surveillance. In an era where AI can process vast amounts of communications, social media, and behavioral data at superhuman speed, Anthropic won’t let Claude be the engine powering broad surveillance of American citizens.
No fully autonomous weapons. Claude can’t be the sole decision-maker in lethal operations. AI models still hallucinate. They are not reliable enough for life-or-death targeting without a human in the loop.
Here’s the kicker: the Pentagon itself posted on X that it has “no interest” in using AI for mass surveillance or autonomous weapons. But when Anthropic asked for that commitment in writing, the contract language it received 36 hours before the deadline allowed “any lawful use” of Claude, effectively letting the military override both guardrails whenever it wanted.
If you don’t want to do the thing, why won’t you say so in writing?
The Contradiction Nobody Can Explain
Amodei spotted something sharp: the Pentagon’s two threats are “inherently contradictory.” One labels Anthropic a security risk. The other, the Defense Production Act, treats Claude as essential to national security.
Pick one. You can’t simultaneously argue a company is a threat to national security and that its product is so critical the government needs wartime emergency powers to commandeer it.
Dean Ball, former senior policy advisor for the White House Office of Science and Technology Policy, told Business Insider he’s “not aware of this ever having been used as a weapon in a negotiating posture.” Multiple AI policy experts have called the Pentagon’s approach “incoherent.”
The Industry Is Watching (and Mostly Quiet)
Anthropic is currently the only major AI company on the Pentagon’s classified networks. But OpenAI, Google, and xAI all received identical $200 million defense contracts last year.
A senior Pentagon official told CBS News that xAI’s Grok is “on board with being used in a classified setting.” The implication is obvious: if Anthropic won’t play ball, someone else will.
This creates an agonizing dynamic. If Anthropic walks and competitors rush in without guardrails, the military gets unrestricted AI anyway — and the one company that tried to impose limits loses its seat at the table.
It’s the classic prisoner’s dilemma, except the prisoners have nuclear-capable AI.
Why You Should Care
If you’re an American citizen: The mass surveillance restriction matters directly to you. AI-powered surveillance at scale isn’t theoretical — the capability exists today. The only thing preventing it is policy, and policy can change with a contract revision.
If you work in AI: Every company is now watching what happens to Anthropic. If they get crushed for standing their ground, the message is clear: compliance over conscience. Good luck recruiting safety researchers after that.
If you care about global stability: If the U.S. deploys fully autonomous AI weapons without meaningful human oversight, every other military power follows. The arms race dynamics of autonomous weapons keep defense experts up at night for good reason.
What Happens at 5:01 PM
A few scenarios:
The Pentagon blinks and agrees to the two restrictions in writing. Unlikely given the public posture, but stranger things have happened.
Anthropic gets blacklisted. The company said it would “work to enable a smooth transition to another provider.” Remarkably gracious for a company getting shown the door.
The government invokes the Defense Production Act — unprecedented and almost certainly triggering legal challenges. Using wartime powers against a domestic AI company over contract terms would be a dramatic escalation.
Or the most likely outcome: a face-saving compromise where the Pentagon gets expanded access with vaguely worded limitations and Anthropic claims its red lines held.
Whatever happens today, one thing is clear. The era of AI companies quietly doing whatever the government asks is over. The stakes are too high, the capabilities too powerful, and the potential for misuse too real.
Dario Amodei bet his company on the idea that building AI safely isn’t just good ethics — it’s good strategy. By end of business today, we’ll know if anyone agrees.
Sources: Washington Post, Politico, Business Insider, CBS News, Axios, The Guardian