An American AI company just got the treatment usually reserved for Chinese tech firms tied to foreign adversaries. The Pentagon officially designated Anthropic — maker of Claude, darling of the AI safety movement — a “supply chain risk to America’s national security.”
The crime? Refusing to let the military use its AI without restrictions on mass surveillance and autonomous weapons.
Welcome to the new era of AI politics, where building safety guardrails gets you blacklisted by your own government.
A $200 Million Deal Implodes
The backstory reads like a corporate thriller. Anthropic had a $200 million Pentagon contract. Claude was already integrated into the military’s classified Maven intelligence system through Palantir. According to the Washington Post, Maven — powered partly by Claude — was used in recent military strikes on Iran.
Then the Department of Defense demanded a contract modification allowing “all lawful uses” of Claude. Anthropic CEO Dario Amodei drew two lines: no domestic mass surveillance of American citizens, no fully autonomous weapons systems. He wanted those prohibitions in writing.
The Pentagon refused. Defense Secretary Pete Hegseth slapped Anthropic with the supply chain risk label on social media. Trump piled on, telling Politico: “Well, I fired Anthropic. Anthropic is in trouble because I fired them like dogs.”
Subtle.
OpenAI Grabs the Contract — With a Catch
OpenAI CEO Sam Altman swooped in within days, announcing his own military deal. The twist: Altman claimed the agreement contained the exact same two restrictions Anthropic had fought for — no mass surveillance, no autonomous weapons — plus a third ban on “social credit”-style automated decisions.
So how did OpenAI get the deal Anthropic couldn’t?
The difference was framing, not substance. Anthropic wanted explicit contractual language. OpenAI agreed the Pentagon could use its tech for “any lawful purpose” while separately building technical safeguards into its models through what it called a “multi-layered approach.”
Amodei wasn’t impressed. In a leaked internal message, he called OpenAI’s deal “safety theater” and Altman “mendacious,” adding: “The main reason they accepted and we did not is that they cared about placating employees, and we actually cared about preventing abuses.”
From the other side, Altman admitted internally that OpenAI would have “no control over how the military used” its technology. That admission alone validates everything Amodei said.
What ‘Supply Chain Risk’ Actually Means
The initial announcement sounded apocalyptic: no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic. Corporate death sentence.
Reality is narrower. Anthropic’s lawyers — and critically, Microsoft’s lawyers — concluded the designation only bars Claude from being used directly in military contracts; it does not prohibit all commercial business with companies that also happen to hold defense work.
Microsoft confirmed publicly: Claude stays available through M365, GitHub, and AI Foundry for non-defense work.
But the chilling effect is real. Anthropic’s pending $60 billion funding round is reportedly in jeopardy. Defense contractors like Lockheed Martin are expected to rip out Anthropic’s AI. And the precedent — weaponizing a national security designation against a domestic company over a contract dispute — has the entire tech industry on edge.
Silicon Valley Circles the Wagons
In a rare moment of industry unity, the Information Technology Industry Council — Amazon, Nvidia, Apple, and yes, even OpenAI among its members — sent a letter to Hegseth expressing concern over using a supply chain risk designation “in response to a procurement dispute.”
The logic is obvious: if the government can blacklist any AI company that insists on usage restrictions, no company’s terms of service mean anything. Today it’s Anthropic. Tomorrow it’s whoever says “no” next.
Amazon CEO Andy Jassy personally called Amodei. Venture capital firms Lightspeed and Iconiq are working back channels in the Trump administration. Even investors who wish Anthropic had just signed the deal understand the precedent is poison for everyone.
Quiet Negotiations Resume
Despite the public fireworks, there are signs of a thaw. Amodei has reportedly resumed talks with Pentagon official Emil Michael. The two “strongly dislike one another,” which bodes well for honest diplomacy if nothing else.
Amodei publicly walked back his leaked memo, calling it an “out-of-date assessment” written on a hard day. He’s clearly leaving room for a deal.
The Pentagon has reason to negotiate too. Claude is already embedded deep in military intelligence systems. Ripping it out in favor of OpenAI would be disruptive and expensive, and could create exactly the kind of capability gap that national security designations are supposed to prevent.
The Question That Won’t Go Away
Strip away the personalities and politics, and this fight comes down to one question: who decides how AI is used?
Anthropic says companies building these systems have the right — even the responsibility — to set limits. The Pentagon says the military needs unfettered access to critical technology, and private companies shouldn’t “insert themselves into the chain of command.”
Both have a point. Neither is going away.
What makes this moment unprecedented is the government’s willingness to reach for nuclear-option regulatory tools against an American company. A supply chain risk designation for a contract dispute isn’t proportional. It’s a warning shot aimed at every tech company that might consider saying “no” to Washington.
The irony writes itself: the AI company most associated with safety research is being punished for actually practicing it.
Sources: The Guardian, TechCrunch, Reuters, Fortune, NYT, BBC