There’s a moment in every technology shift when the old rules stop applying and nobody’s figured out the new ones yet. We just hit that moment — and it’s happening in cybersecurity.

Over the past 48 hours, news has broken that Anthropic’s Claude Mythos model cracked Apple’s five-year, billion-dollar M5 security architecture in five days, that U.S. banks are scrambling to patch exposed systems it flagged, and that Microsoft’s own multi-agent cybersecurity platform has mounted a direct challenge.

This isn’t theoretical anymore. AI just got really good at breaking things.

Apple’s Billion-Dollar Fortress, Five Days Flat

A cybersecurity startup called Calif used a preview version of Mythos to build a working kernel-level exploit against Apple’s Memory Integrity Enforcement (MIE) system on M5 silicon. The whole thing took five days.

MIE is Apple’s crown jewel. Built on Arm’s Memory Tagging Extension specification, it uses hardware-level detection to block memory corruption attacks before they execute. Apple spent half a decade developing it. Their own research claims it disrupts “every public exploit chain against modern iOS.”
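To make the tagging idea concrete, here is a deliberately simplified software model of how MTE-style checking works. Real MTE stores a 4-bit tag in the top byte of each pointer and checks it in hardware on every load and store; the `TaggedHeap` class below is a hypothetical toy that models the same invariant in Python — a stale or forged pointer whose tag no longer matches the memory’s tag faults before the access happens.

```python
import secrets

TAG_BITS = 4  # Arm MTE uses 4-bit tags carried in the pointer's top byte


class TaggedHeap:
    """Toy model of tag-checked memory (not real MTE, just the concept).

    Each allocation gets a tag, recorded both in the returned "pointer"
    and alongside the memory region. Any access whose pointer tag does
    not match the region's current tag is blocked before it executes.
    """

    def __init__(self, size):
        self.mem = bytearray(size)
        self.tags = {}        # region base address -> current tag
        self.next_free = 0    # bump allocator, enough for a demo

    def alloc(self, n):
        base = self.next_free
        self.next_free += n
        self.tags[base] = secrets.randbelow(1 << TAG_BITS)
        return (self.tags[base], base)  # "pointer" = (tag, address)

    def free(self, ptr):
        _, base = ptr
        # Retag on free so stale pointers stop matching. (Real MTE
        # retags randomly; the toy increments to keep the demo
        # deterministic.)
        self.tags[base] = (self.tags[base] + 1) % (1 << TAG_BITS)

    def store(self, ptr, offset, value):
        tag, base = ptr
        if self.tags[base] != tag:  # the check hardware would perform
            raise MemoryError("tag mismatch: access blocked")
        self.mem[base + offset] = value


heap = TaggedHeap(64)
p = heap.alloc(16)
heap.store(p, 0, 0x41)   # tags match: allowed
heap.free(p)
# heap.store(p, 0, 0x42) would now raise MemoryError: a use-after-free
# is caught at the access, which is the class of memory-corruption bug
# MIE is designed to stop before an exploit chain can use it.
```

Chaining from an unprivileged user to root despite this kind of check is what makes the Calif result notable: the exploit has to work around, not through, the tag comparison.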

The Calif team — researchers Bruce Dang, Dion Blazakis, and Josh Maine — found two vulnerabilities, chained them together, and went from an unprivileged local user to a root shell using only normal system calls. They’ve got a 55-page technical report they’re sitting on until Apple ships a fix.

The critical detail: Mythos didn’t just find the bugs. It assisted throughout the entire exploit development process. Once it “learned how to attack” the system, it became what the team called a remarkably effective collaborator. Not replacing human hackers — supercharging them. Compressing months of work into less than a week.

Banks Are in Full Panic Mode

While researchers were dismantling Apple silicon, the financial sector was dealing with its own Mythos crisis. According to Reuters, U.S. banks have been rushing to fix scores of IT system weaknesses flagged by Anthropic’s tool, prompting urgent software upgrades and raising the possibility of service disruptions for customers.

These weren’t theoretical vulnerabilities. They were real, exploitable weaknesses in production banking infrastructure — the kind that could lead to data breaches and financial theft if a malicious actor found them first.

Bruce Schneier put it bluntly: “Modern generative AI systems — not just Anthropic’s, but OpenAI’s and other open-source models — are getting really good at finding and exploiting vulnerabilities in software. And that has important ramifications for cybersecurity: on both the offense and the defense.”

Microsoft Fires Back with MDASH

Just when Mythos seemed like the only player, Microsoft dropped its own bombshell. Its new multi-agent system, MDASH, surpassed Mythos on a leading cybersecurity benchmark using more than 100 specialized AI agents working across multiple models.

Three things matter here. First, AI-powered vulnerability discovery isn’t a one-company trick — it’s proliferating fast. Second, Microsoft’s swarm approach could represent a more scalable architecture than single-model scanning. Third, the arms race is officially on.

The UK’s AI Security Institute has found OpenAI’s GPT-5.5 comparable to Mythos in capability. A company called Aisle reportedly reproduced Anthropic’s results with smaller, cheaper models. The genie isn’t just out of the bottle — it’s running laps.

The “Too Dangerous” Gambit

Anthropic’s decision to withhold Mythos from public release remains controversial. It’s only available through the Glasswing program to select companies scanning their own systems.

Schneier offered a sharp counterpoint: Mythos is expensive to run, and Anthropic may lack the resources for general release. “What better way to juice the company’s valuation,” he wrote, “than to hint at capabilities but not prove them, and then have others parrot their claims?”

Both sides have evidence. Mozilla used Mythos to find 271 vulnerabilities in Firefox — all patched, all permanently removed from the threat landscape. That’s defense working. But the exclusivity model has already sprung leaks: Bloomberg reported unauthorized users accessed Mythos through one of Anthropic’s vendors.

Tristan Harris framed the deeper question: “How do we live in a world where a private company suddenly has a skeleton key that can unlock the entire digital world with no government oversight or accountability?”

What Actually Changes

Here’s the uncomfortable math: AI vulnerability discovery is about to get cheap and accessible. Harvard researcher Fred Heiding summed it up: “The day of human pen testers and security experts are gone and that’s massive.”

For everyday users: Expect more frequent software updates. The flood of discovered vulnerabilities means patches will ship faster than ever. But unpatchable systems — older IoT devices, legacy industrial controls, that smart thermostat from 2019 — are now in significantly more danger.

For enterprises: AI security scanning is no longer optional. It’s table stakes. If you’re not using these tools to find your vulnerabilities, someone else will — and they won’t send you a courtesy report.

For the cybersecurity industry: The old model of expensive human penetration testers manually probing systems is being disrupted in real time. The future is human-AI collaboration, with AI handling discovery and humans providing judgment.

The Optimistic Case (Yes, There Is One)

A world where AI systematically finds and patches vulnerabilities could actually be more secure than what we have now. Current security largely relies on attackers not having enough resources to find every bug. That’s a form of security through obscurity, and it has always been fragile.

Mozilla’s 271 Firefox patches are proof of concept. Giving defenders the same automated discovery capabilities as attackers, so hundreds of vulnerabilities get fixed proactively before exploitation, is a fundamentally better security model.

The problem is the transition period. Right now, offensive capabilities are moving faster than defensive adoption. Banks are scrambling. Apple’s newest security fell in days. The technology is leaking beyond its intended boundaries.

The next 12 to 18 months will determine whether AI cybersecurity tools become primarily weapons or shields. The technology is neutral. The outcome depends on how quickly defenders adopt it — and whether governments step in with deployment frameworks before it’s too late.