The United States Department of Defense just did something it has never done before: it officially designated an American company a “supply chain risk to national security.” The company? Anthropic — maker of Claude, one of the most capable AI systems on the planet.

This label was designed for foreign adversaries. Companies with backdoors in their hardware. Firms controlled by hostile intelligence services. It’s been used publicly exactly once before, against a Swiss cybersecurity firm with reported Russian ties.

Now it’s being slapped on a San Francisco AI lab because it refused to let the military use its technology for fully autonomous weapons and mass domestic surveillance.

Two Red Lines, $200 Million on the Table

The backstory is straightforward. Anthropic signed a $200 million DOD contract in July 2025 and deployed Claude on classified military networks. It worked. By all accounts, the Pentagon loved it.

Then Defense Secretary Pete Hegseth issued a memorandum requiring all DOD AI contracts to adopt “any lawful use” language — no restrictions, no carve-outs, no exceptions. Anthropic pushed back with two conditions: no fully autonomous weapons, no mass surveillance of Americans.

Two restrictions. Neither had affected a single government mission. But the principle was the problem. The Pentagon’s position: no vendor gets to “insert itself into the chain of command.”

Hegseth met personally with Anthropic CEO Dario Amodei. He reportedly threatened to invoke the Defense Production Act — emergency powers dating to the Korean War. When the deadline passed, the administration chose a different weapon.

The Nuclear Option

On February 27, Hegseth posted on X directing the DOD to designate Anthropic a supply chain risk. Hours later, Trump posted on Truth Social ordering “EVERY Federal Agency” to “IMMEDIATELY CEASE” using Anthropic’s tech. In an interview, he said he “fired Anthropic like dogs.”

The formal notification arrived March 5.

Here’s where it gets absurd.

The Contradiction Nobody Can Explain

Hours after the Pentagon officially blacklisted Anthropic, the U.S. and Israel launched military strikes in Iran. The AI supporting those operations? Claude. The same system just declared a national security threat.

“OK, wait a minute, they’re a really dangerous player for U.S. national security, so you’re going to use them for another six months? Huh?” asked Herbert Lin, a senior research scholar at Stanford’s Center for International Security and Cooperation.

The designation includes a six-month transition period. Not exactly the timeline you’d expect for a genuine security threat. More like the timeline you’d expect for a political punishment.

Michael Horowitz of the Council on Foreign Relations called it “especially notable,” saying “there’s no clearer signal” of how much the Pentagon actually values Claude.

Politics in a National Security Costume

Multiple experts aren’t buying the security framing. “This feels to me like a dispute that is about politics and personalities,” Horowitz told CNBC.

The receipts are damning. David Sacks — White House AI czar and venture capitalist with investments in competing AI firms — previously accused Anthropic of supporting “woke AI” and running a “sophisticated regulatory capture strategy based on fear-mongering.”

Amodei didn’t attend Trump’s inauguration. He hasn’t donated to the administration. He reportedly declined to offer what he described internally as “dictator-style praise.”

And OpenAI's timing? Hours after Anthropic was blacklisted, Sam Altman announced his own Pentagon deal, praising the DOD's "deep respect for safety." Opportunism doesn't get more textbook.

Legal analysts are giving Anthropic strong odds in court. Lawfare — one of the most respected national security law publications — concluded that “the Pentagon’s Anthropic designation won’t survive first contact with the legal system.”

The problems stack up fast:

The logic is circular. The government argues Claude is so vital it can’t tolerate any restrictions — while simultaneously claiming Claude poses such a grave risk the entire federal government must stop using it. Pick one.

The procedure was rushed. The statute requires consultation with procurement officials, written determinations with three mandatory findings, and congressional notification. Going from a meeting with Amodei to a formal designation in three days doesn’t leave room for any of that.

The scope is wrong. Hegseth declared no DOD contractor may conduct “any commercial activity with Anthropic.” But 10 USC § 3252 is a Defense Department procurement statute. It doesn’t reach other agencies. It can’t bar defense contractors from using Claude for non-military work.

The Chilling Effect Is the Point

This isn’t really about Anthropic. It’s a message to every AI company: if you want government contracts, you accept unrestricted use. No ethical guardrails. No terms of service. No saying no.

Anthropic was founded specifically to build AI safely. It pioneered Constitutional AI. The two restrictions it fought for are exactly what safety researchers have advocated for years. If the company that literally exists to build safe AI gets punished for maintaining safety boundaries, what does that tell every other lab?

A group of retired defense officials and policy leaders wrote to Congress defending Anthropic and calling the designation a “dangerous precedent.” They’re right. Today it’s AI restrictions. Tomorrow it’s any company that declines a government demand.

What Happens Next

Talks between Anthropic and the DOD have reportedly resumed. Claude is still running on classified networks. The legal challenge is coming.

The most likely outcome is a compromise — narrower language around autonomous weapons, maybe a review board for edge cases. The designation itself is probably too legally flawed to survive judicial review.

But the damage is already done. Other AI companies got the message. The chilling effect on safety advocacy in the defense space will outlast this specific fight.

The question worth sitting with: should AI companies have any ability to set limits on military use of their technology? Or does the government get unrestricted access, no questions asked?

This week proved it’s no longer a thought experiment.