What happens when an AI company tells the most powerful military on Earth “no”?
We’re about to find out — and the answer lands Friday at 5:01 PM.
The Ultimatum
Defense Secretary Pete Hegseth delivered Anthropic a blunt message on Tuesday: abandon your self-imposed ethical red lines, or face the consequences. Those consequences aren’t subtle. We’re talking about the Defense Production Act — a Cold War-era law designed to compel companies to produce goods for national security — and a “supply chain risk” designation that would effectively blacklist Anthropic from all future government work.
Anthropic isn’t some peripheral player here. It’s the only AI company currently operating on classified military networks, thanks to its partnership with Palantir. Claude is already inside the Pentagon’s most sensitive systems.
And the Pentagon wants the guardrails off.
The Red Lines That Started Everything
Anthropic CEO Dario Amodei has been unusually direct about where he thinks AI shouldn’t go. In a January essay, he laid out two hard limits:
- No AI for domestic mass surveillance
- No fully autonomous weapons systems
“My main fear is having too small a number of ‘fingers on the button,’” Amodei wrote, “such that one or a handful of people could essentially operate a drone army without needing any other humans to cooperate.”
Hegseth’s position is simpler: if it’s legal, the military should be able to use it. Period. His January AI strategy document ordered that all defense contracts drop company-specific guardrails within 180 days and adopt “any lawful use” language.
That’s the chasm. Anthropic says some legal uses are still unethical. The Pentagon says ethics are for Anthropic to have opinions about, not to enforce through contract restrictions.
The Venezuela Trigger
The tensions were already simmering when the Maduro operation blew the lid off. Reports from the Wall Street Journal and Axios revealed that Claude was used — through Palantir — during the U.S. operation to capture Venezuelan President Nicolás Maduro.
Details are classified, but the damage was done when an Anthropic employee raised questions about the use during a routine Palantir meeting. The Pentagon interpreted it as Anthropic trying to second-guess military operations.
That inquiry, framed as concern rather than idle curiosity, created what sources described as “a rupture.” For defense officials, a contractor questioning an active operation is a non-starter.
Why the Defense Production Act Is a Nuclear Option
The DPA was passed in 1950 during the Korean War. It’s the law presidents invoke when they need factories building ventilators during pandemics or semiconductor fabs prioritizing chips for national security.
Using it to force an AI company to remove its own safety guardrails would be unprecedented.
Legal experts are already questioning whether it even applies. Anthropic isn’t refusing to produce anything — it’s refusing to allow certain uses of something it already produces. That’s a meaningful legal distinction.
But the threat itself may be the point. A senior Pentagon official told NBC News that Claude would be used by the military “if they want to or not.” This isn’t a negotiation. It’s an assertion of authority.
The “supply chain risk” label might be the real weapon. Being blacklisted would ripple through Anthropic’s commercial relationships — devastating timing for a company gearing up for its long-anticipated IPO. Investors get spooked when the federal government publicly calls you a security risk.
Anthropic Is Alone on This
Here’s the uncomfortable reality: every other major AI company has already caved.
OpenAI, Google, and Musk’s xAI have all agreed to the Pentagon’s “any lawful use” language. xAI just got approved for classified use this week, positioning it as a direct Claude replacement on sensitive military systems.
Anthropic’s $200 million Pentagon contract is significant but not existential for a company valued at over $60 billion. Amodei has pointed out that revenue and valuation have actually grown since the company took this stand. But that was before the DPA threat materialized.
The Irony: Anthropic Keeps Shipping
While the government sharpens its knives, Anthropic spent Monday casually terrorizing entire industries.
Claude Code’s COBOL modernization announcement sent IBM shares tumbling 13% in a single day. An estimated 95% of U.S. ATM transactions still run on COBOL, and Anthropic claims AI can now handle the exploration work that made modernization prohibitively expensive. IBM’s massive legacy-maintenance business suddenly looks like a liability.
Then came Claude Cowork enterprise plug-ins for legal, finance, HR, and engineering. Partner stocks jumped — Salesforce up 4%, FactSet up 5%, DocuSign up 6% — while cybersecurity stocks continued their freefall.
The irony is sharp: the same week the government threatens to punish Anthropic, the company is demonstrating exactly why its technology is too important to blacklist.
What This Means for Everyone
This showdown sets the precedent for how governments interact with AI companies on safety and ethics — not just in the U.S., but globally.
If the DPA works: Companies don’t get to decide how their AI is used once the government is a customer. That chills safety-focused development across the entire industry. Why invest in responsible AI practices if the government can simply order you to remove them?
If Anthropic holds: It proves AI companies can maintain ethical boundaries against the most powerful customer imaginable. European regulators are watching. Other AI labs debating their own red lines are watching. Everyone is watching.
The most likely outcome is a face-saving compromise: expanded military use with the autonomous-weapons and surveillance restrictions intact. But this administration hasn’t shown much appetite for compromise with companies it views as insufficiently cooperative.
The Bottom Line
We’re watching a tech company bet its future on the belief that there are things AI shouldn’t do, even when the law allows it. Whether you see Anthropic as principled or naive, this Friday’s deadline will define the relationship between AI companies and government power for a generation.
The question isn’t just whether Anthropic will cave. It’s whether the only guardrails on military AI should be legal ones — or whether the companies building these systems deserve a say in how they’re used.
Friday at 5:01 PM. We’ll be watching.