The Pentagon banned Anthropic’s products from the federal government barely six weeks ago. Now the White House is handing agencies the keys to Anthropic’s most dangerous model.

Welcome to AI policy in 2026, where a six-week grudge can’t survive contact with a model that finds thousands of zero-day vulnerabilities before breakfast.

From Persona Non Grata to Essential Asset

Here’s the whiplash timeline. In early March, the Department of Defense slapped Anthropic with a supply chain risk designation — a label historically reserved for Chinese companies like Huawei. The move followed a bitter fight over military AI use that reportedly came to a head hours before U.S. strikes on Iran. Anthropic refused unrestricted military deployment. The Pentagon responded with the bureaucratic nuclear option.

The D.C. Circuit upheld the ban in early April. Anthropic was effectively dead in Washington.

Then Mythos happened.

The Model That Changed the Calculus

Claude Mythos isn’t just another frontier model. It’s a cybersecurity earthquake. During testing, Mythos identified thousands of previously unknown zero-day vulnerabilities across every major operating system, browser, and critical software platform. The kind of bugs that nation-state hackers build entire operations around.

Anthropic’s own response was telling: they refused to release it publicly. Not because it wasn’t ready. Because it was too capable.

Instead, they created Project Glasswing — a controlled disclosure program giving limited access to Nvidia, Microsoft, Google, Apple, and financial giants like JPMorgan Chase. The Bank of England held urgent cybersecurity discussions after previewing the model. The crypto industry is scrambling for access, terrified about what Mythos means for cryptographic security.

This isn’t marketing hype dressed up as caution. This is institutional fear from people who don’t scare easily.

The White House Memo

On Tuesday, Gregory Barbaccia — the federal chief information officer at the White House Office of Management and Budget — emailed Cabinet department officials across Defense, Treasury, Commerce, Homeland Security, Justice, and State. The subject line was blunt: “Mythos Model Access.”

The message: OMB is working with Anthropic, other industry partners, and the intelligence community to establish guardrails before rolling out a modified version to agencies.

The Commerce Department’s Center for AI Standards and Innovation had already been testing Mythos capabilities before Anthropic even acknowledged the model existed. Staff from at least two other federal agencies reached out to Anthropic directly, Pentagon ban be damned. Three congressional committees held or requested briefings.

The gravitational pull of a model this powerful overrides institutional politics. When your government’s software is riddled with vulnerabilities that one AI can find and patch — or exploit — you don’t get to play grudge games.

Opus 4.7: The Civilian Version

While the government sorts out Mythos access, Anthropic released Claude Opus 4.7 on April 16 — their most capable generally available model. Significant upgrades in software engineering, vision, and agentic tasks. Pricing is unchanged: $5 per million input tokens, $25 per million output tokens.
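At those rates, per-request cost is simple arithmetic. A minimal sketch — only the per-million rates come from the announcement; the function name and the example token counts are illustrative:

```python
# Opus 4.7 list pricing: $5 per 1M input tokens, $25 per 1M output tokens.
INPUT_RATE = 5.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 25.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one API call at list pricing."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 2,000-token prompt that produces a 1,000-token reply.
print(f"${request_cost(2_000, 1_000):.4f}")  # → $0.0350
```

Note the 5× asymmetry: output tokens dominate the bill for any workload that generates more than it reads.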

But here’s the tell: Anthropic explicitly stated they “experimented with efforts to differentially reduce” Opus 4.7’s cyber capabilities compared to Mythos. The model ships with automatic detection and blocking of high-risk cybersecurity requests.

Opus 4.7 is deliberately nerfed where Mythos excels. It’s the civilian model. Mythos is the one keeping CISOs up at night.

Security professionals doing legitimate pen testing can apply to Anthropic’s new Cyber Verification Program for less restricted access.

The Tension That Defines the Next Decade

This story captures something bigger than one model or one policy reversal. We’ve crossed a line where frontier AI models are strategic national security assets, not productivity tools with better autocomplete.

The 2026 Stanford AI Index drives this home. AI data centers worldwide now draw 29.6 gigawatts — enough to power New York state at peak demand. The U.S. and China are nearly tied on model performance. Leading AI companies have stopped disclosing training details. As USC’s Yolanda Gil put it in the Stanford report: “We don’t know a lot of things about predicting model behaviors.”

Mythos crystallizes every tension at once. It’s simultaneously a massive cybersecurity risk and a massive cybersecurity asset. Built by a company the Pentagon considers a threat but the White House now considers essential. Advancing faster than any policy framework designed to manage it.

What Comes Next

The federal rollout isn’t finalized. Anthropic declined comment. The White House would only confirm it “continues to work and engage with AI companies to ensure their models help secure critical software vulnerabilities.”

But the trajectory is obvious. A modified, guardrailed version of Mythos will reach federal agencies within weeks. The political obstacles that looked immovable a month ago are crumbling under the weight of what this model can do.

The harder question: if Mythos can find thousands of zero-days today, what does the next generation find? And who controls that?

We’re watching a new kind of arms race take shape — one where the weapon and the shield are the same technology, built by the same company. The biggest fight isn’t between nations. It’s between departments of the same government.