The future of AI warfare isn’t a hypothetical anymore. It’s running live ops in Iran.
Admiral Brad Cooper, head of U.S. Central Command, confirmed this week that the military is actively using “a variety of advanced AI tools” in Operation Epic Fury, the massive air campaign against Iran that has struck more than 5,500 targets since February 28. AI helped hit 1,000 of them in the first 24 hours alone.
At the center of it all: Palantir’s Maven Smart System, with Anthropic’s Claude baked in. The same AI that summarizes your emails is now helping analysts prioritize strike targets in an active war zone.
And here’s the kicker — Anthropic doesn’t want it there.
The Anthropic Paradox
It would be hard to script a more absurd governance failure in the short history of AI.
Anthropic explicitly told the Pentagon its models shouldn’t power autonomous weapons or mass surveillance. The Pentagon responded by labeling Anthropic a “supply chain risk” — effectively blacklisting the company. Anthropic sued.
Meanwhile, Claude keeps running inside Palantir’s Maven system. Anthropic can’t turn it off. The model was already integrated through a third-party contractor, and once that handoff happened, the developer lost control.
Pentagon spokeswoman Kingsley Wilson made the power dynamic clear: “America’s warfighters will never be held hostage by unelected tech executives and Silicon Valley ideology.”
So the company that pushed hardest for ethical guardrails in military AI now has its technology deployed in exactly the scenario it tried to prevent. If you wanted a single story that captures why AI governance is broken, this is it.
“Humans Make the Final Call” — But Do They?
Cooper stressed that humans always make the final decision on what to strike. That sounds reassuring until you do the math.
One thousand targets in 24 hours. Divide 86,400 seconds by 1,000 strikes and you get roughly one target every 86 seconds, around the clock, with no breaks. How thorough can human review be at that pace?
Israel’s experience in Gaza is instructive. The IDF’s Lavender and Gospel AI systems were also classified as “decision support.” Investigations later revealed that operators spent an average of 20 seconds reviewing each AI-generated target before approving it. That’s not oversight. That’s rubber-stamping an algorithm.
The fundamental tension is baked into the value proposition. The whole point of AI targeting is speed — moving faster than humans can think. But if the system’s advantage is outpacing human cognition, you’ve already undermined the premise that humans are meaningfully in control.
1,300 Dead. A Girls’ School in Rubble. Silence From the Pentagon.
The human cost is mounting. At least 1,300 people have been killed since the campaign began. Iranian officials report nearly 20,000 civilian buildings destroyed, along with 77 healthcare facilities as well as schools, markets, and a water desalination plant.
The most haunting incident: a bombing of a girls’ school in southern Iran that killed more than 170 people, mostly children.
When Futurism asked the Pentagon whether AI was used to select that school as a target, the question was bounced to CENTCOM, which declined to comment on specific targeting decisions.
That silence tells you everything you need to know about the current state of accountability in AI-enabled warfare: there isn’t any.
Congress Wants Answers. Will It Get Them?
Lawmakers on the House Armed Services Committee are pushing back.
Rep. Jill Tokuda (D-Hawaii) called for “a full, impartial review to determine if AI has already harmed or jeopardized lives in the war with Iran.”
Rep. Sara Jacobs (D-Calif.) cut deeper: “AI tools aren’t 100% reliable — they can fail in subtle ways and yet operators continue to over-trust them.”
Both are right, but congressional oversight moves at congressional speed. The bombs are falling now.
The Global Arms Race Just Shifted Into High Gear
Every military on Earth is watching Operation Epic Fury and taking notes.
China issued a warning against “excessive military use of AI,” a statement dripping with strategic irony given Beijing’s own massive investments in military AI. But the substance matters: there are no binding international agreements governing AI in warfare. The UN has debated autonomous weapons for years without producing anything binding.
The message from Epic Fury is unambiguous: AI-enabled targeting lets you strike at a pace and scale that was previously impossible. Any military without similar capabilities is already behind.
NVIDIA’s GTC conference next week adds another layer. The same chips that power chatbots and image generators also run the models that companies like Palantir ship into military targeting systems. The AI supply chain is dual-use by default, and nobody has figured out how to separate the commercial pipeline from the military one.
The Precedent Is Set
Trump reportedly told Axios there’s “practically nothing left to target” in Iran. The campaign may wind down soon. But the precedent will outlast it by decades.
For the first time, the U.S. military has publicly confirmed and even championed the use of AI in large-scale combat targeting. The questions that remain:
- When an AI-recommended strike kills civilians, who’s responsible? The developer? The operator? The commander?
- Can AI developers maintain any ethical guardrails once models ship through third-party contractors?
- Will Congress legislate before the next conflict, or after?
- Will the military ever disclose which strikes were AI-selected versus human-planned?
Anthropic’s lawsuit could set a landmark precedent on some of these questions. But litigation is slow, and wars don’t wait for court rulings.
We’re watching the first major AI-enabled military campaign in history unfold in real time. The technology works exactly as advertised — which is precisely what makes it terrifying. “Working” and “working ethically” are diverging at the speed of an algorithm, and nobody has a plan to bring them back together.