Every few months someone claims AI is going to “accelerate science.” It usually means a marginally better protein fold. This week is different. This week, an AI system took a quantum algorithm that its own human authors had nearly thrown in the trash, rewrote it, and handed back a result that made the people who secure most of the internet publicly move up their doomsday clock by six years.
If you read one AI story this week, make it this one.
What actually happened
On April 7, two resource-estimate papers landed almost simultaneously — one from Google’s quantum team, one from Oratomic, a neutral-atom quantum startup co-founded by Harvard’s Dolev Bluvstein. Resource-estimate papers don’t build a code-breaking quantum computer. They ask: how big would one have to be?
Last year’s answer: millions of physical qubits. Decades away. Comfortable.
New answer: Oratomic says Shor’s algorithm could break 256-bit elliptic curve cryptography with roughly 10,000 reconfigurable neutral-atom qubits. The headline trick: a logical qubit encoded in just three atoms, versus the 100–1,000 atoms per logical qubit the field has been assuming. That’s a ~100x reduction in the physical footprint of a cryptographically dangerous machine.
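The back-of-envelope arithmetic is worth seeing. The logical-qubit count below is an illustrative round number I'm assuming for a 256-bit ECC attack, not a figure from either paper; the per-logical-qubit encodings are the ones quoted above.

```python
# Rough physical-qubit footprint under two encodings.
# logical_qubits is an assumed round number, NOT from the papers.
logical_qubits = 3000

conventional = logical_qubits * 1000   # upper end of 100-1000 atoms/logical
oratomic_style = logical_qubits * 3    # 3 atoms per logical qubit

print(conventional)                     # 3000000
print(oratomic_style)                   # 9000
print(conventional // oratomic_style)   # 333
```

Even with generous assumptions, three atoms per logical qubit collapses a millions-scale machine into a ten-thousands-scale one, which is the whole headline.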
Elliptic curve cryptography is not niche. It’s what signs your WhatsApp messages, your TLS handshakes, your SSH logins, your Bitcoin transactions. If you can break it in days instead of “longer than the age of the universe,” every “harvest now, decrypt later” archive that some intelligence agency has been quietly filling up suddenly has a usable shelf life.
The AI twist: the algorithm the humans quit on
Here’s the part that makes this an AI story and not just a quantum one.
Oratomic co-author Robert Huang told TIME the team’s key algorithms were initially “about 1,000 times worse” than what they needed. His words: “This whole thing would not work.” They were ready to walk away.
Instead, Huang fed the problem to OpenEvolve, an open-source tool that wraps frontier LLMs — Google’s Gemini and Anthropic’s Claude — in an evolutionary loop. You give it a scoring function and a seed program. It mutates, evaluates, selects, iterates. A very expensive genetic algorithm where the mutation operator has read every physics paper on arXiv.
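The loop itself is simple to sketch. In the toy version below, a random character flip stands in for the LLM step (in OpenEvolve, the "mutation" is a frontier model rewriting the candidate program), and the scoring function is deliberately trivial; this is the shape of the pattern, not OpenEvolve's actual API.

```python
import random

TARGET = "shor"

def score(candidate: str) -> int:
    """Toy fitness function: count characters matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    """Stand-in for the LLM mutation operator: flip one character.
    In OpenEvolve, this step is Gemini or Claude rewriting code."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + candidate[i + 1:]

def evolve(seed: str, generations: int = 2000) -> str:
    """Mutate, evaluate, select, iterate."""
    best = seed
    for _ in range(generations):
        child = mutate(best)
        if score(child) >= score(best):  # select: never keep a regression
            best = child
    return best

random.seed(0)
result = evolve("aaaa")
```

The selection step only ever accepts non-regressions, so fitness is monotone. Swap the random flip for a model that has read the quantum error correction literature and the scoring function for a resource-estimate evaluator, and you have the Oratomic workflow in miniature.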
Huang’s expectation going in: “I didn’t expect you would find anything useful.”
What he got back: the AI combined results from niche sub-disciplines of quantum error correction “in a novel way,” tried thousands of variations, and handed him an algorithm that worked. Not 1,000x worse. Good enough to publish. Good enough that the team briefed the U.S. government before publication.
Dolev Bluvstein’s line should be pinned on every AI-safety slide deck for the next year:
“There is no question that we used AI to accelerate this development. No question at all.”
And from John Preskill — the guy who coined “quantum supremacy”:
“I’m surprised by how much we were able to reduce the qubit count.”
Preskill was careful to say humans were still primary drivers, asking the right questions. Fair. But “asking the right questions” is doing an enormous amount of work in a paper whose central result is a 100x reduction that the humans, on their own, had decided was impossible.
This is the thing Altman and Amodei have been promising for years: AI-compressed scientific timelines. Easy to wave away when the example is “Claude suggested a new prompt for a materials database.” Much harder when the example is “Claude + Gemini found the algorithmic improvement that broke a seven-year assumption about when modern public-key cryptography dies.”
Cloudflare’s 2029 deadline
On April 7, Cloudflare — which sits in front of 20–30% of global web traffic — published a revised post-quantum roadmap. New target: the entire Cloudflare product suite, including post-quantum authentication, fully migrated by 2029.
The previous anchor for the industry was NIST’s 2035 deadline. Google moved to 2029 on March 25. Cloudflare has now followed. Two of the most consequential infrastructure companies on the public internet are quietly agreeing that NIST’s timeline is no longer safe.
Bas Westerbaan, Cloudflare’s lead researcher, to TIME: “It’s a real shock. We’ll need to speed up our efforts considerably.”
Cloudflare is already ahead: roughly 65% of its traffic flows over post-quantum key exchange, thanks to ML-KEM support in Chrome. The missing piece is authentication: the signatures that prove a server is who it says it is. Key exchange protects future traffic; signatures protect the present moment of trust. Without post-quantum signatures, a quantum-equipped attacker can impersonate any website in real time. That’s what Cloudflare now wants done by 2029.
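The deployed key-exchange fix is a hybrid: run classical ECDH and ML-KEM side by side and combine both shared secrets, so an attacker has to break both to read the session. A minimal sketch of the combining idea, with placeholder byte strings standing in for the real X25519 and ML-KEM outputs (real TLS derives keys via HKDF over the concatenated secrets; plain SHA-256 here is a simplification):

```python
import hashlib

def hybrid_secret(classical_ss: bytes, pq_ss: bytes) -> bytes:
    """Combine a classical and a post-quantum shared secret.
    Recorded traffic stays safe as long as EITHER component survives,
    which is exactly the hedge against 'harvest now, decrypt later'."""
    return hashlib.sha256(classical_ss + pq_ss).digest()

# Placeholder secrets; the real ones come from X25519 and ML-KEM-768.
session_key = hybrid_secret(b"x25519-shared-secret", b"mlkem768-shared-secret")
```

Note what this does not fix: the server still proves its identity with a classical signature. That is the authentication gap the 2029 deadline is about.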
Six years. For a planet-scale migration of every TLS cert, every root CA, every HSM, every embedded device sold with a 20-year firmware lifespan. Good luck.
The skeptic’s corner
First, the necessary caveat: the Oratomic paper has not yet been peer-reviewed.
Jeff Thompson, Princeton physicist and Logiqal CEO, told TIME that many of Oratomic’s assumptions are “untested.” His critique is sharp: “It’s very easy to reduce the size of the computer if you just assume better qubits.” Translation: the 100x improvement may partly be a tradeoff — fewer atoms, but each atom has to behave better than anything demonstrated in a lab.
Legitimate objection. Resource estimates are famously elastic; change one noise parameter and your 10,000-qubit machine becomes 5 million again.
But Cloudflare isn’t moving on one paper. It’s moving because Google independently arrived at a similar place, because the trendline across multiple estimates is steeply downward, and because the cost of being wrong is every secret ever sent over the internet. When your threat model is “all of modern cryptography,” you don’t wait for peer review.
And the meta-point survives the skepticism intact: the AI found an algorithmic improvement that the humans had given up on. That’s independent of whether the assumed error rates are optimistic. The “AI as scientific accelerator” story doesn’t need Shor-at-10k-qubits to land in 2029 to be real. It just needs a moment where a physicist says, “I was going to quit, and the model found it.” We are now firmly in that moment.
What this means for the rest of us
Four things, in order of how much you should care:
- “Harvest now, decrypt later” stops being theoretical. If you transmit anything with a 5+ year shelf life — medical records, trade secrets, source code, diplomatic cables — your threat model changed this week.
- Bitcoin is the canary. Addresses that have ever exposed their public key are protected only by the 256-bit elliptic curve discrete log problem Oratomic just targeted by name. There is no clean migration path. Expect a loud debate in the next 18 months.
- Vendor questions just got sharper. “What’s your post-quantum migration plan?” is now a reasonable RFP question. If the answer is “NIST says 2035,” that answer is out of date.
- For the AI industry, this is the rare good-news story with teeth. Not a benchmark. Not a demo. A publishable scientific result, credited on the record, from a co-author who didn’t expect it to work.
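On the Bitcoin point above: a legacy address is a hash of the public key, so the key stays hidden until the first spend publishes it on-chain inside the signature. A simplified sketch of why "ever exposed their public key" is the dividing line (double SHA-256 stands in for Bitcoin's actual SHA-256 + RIPEMD-160 construction, and the key bytes are placeholders):

```python
import hashlib

def toy_address(pubkey: bytes) -> str:
    """Simplified address derivation. Bitcoin really uses
    RIPEMD-160(SHA-256(pubkey)); double SHA-256 here sidesteps
    RIPEMD-160 availability issues in modern OpenSSL builds."""
    return hashlib.sha256(hashlib.sha256(pubkey).digest()).hexdigest()[:40]

pubkey = b"\x02" + b"\x11" * 32   # placeholder compressed public key
addr = toy_address(pubkey)

# Before any spend: the chain shows only addr, a hash.
# Shor's algorithm needs the public key itself, so there is no target.
# After a spend: pubkey appears on-chain in the signature script,
# handing a quantum attacker a concrete 256-bit ECDLP instance.
exposed_after_spend = pubkey
```

Hash functions are not broken by Shor's algorithm, which is why unspent, never-reused addresses have a layer of protection that already-spent ones do not.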
The “AI does science” era quietly arrived
We’ve been waiting for a clean example of “AI made a discovery the humans wouldn’t have made alone.” The Oratomic paper is about as clean as these things get. The tool is open source. The models are named. The humans gave up; the AI didn’t. The result shipped.
The reflex move is to swerve into either “AGI is here” or “nothing ever happens.” Neither is right. The right read is narrower: LLM-driven program synthesis, wrapped in an evolutionary search loop, is now good enough to contribute original work inside a field as technically unforgiving as quantum error correction. Not “write the paper.” Not “understand the paper.” Contribute a load-bearing technical improvement inside the paper.
That’s new. That’s the thing we were told was 5–10 years away.
The first externally visible consequence of AI doing science is that some other timeline just got compressed. This week it’s quantum cryptography. Next time it might be drug discovery, or materials, or fusion. The pattern to watch isn’t “AI invents X.” It’s “infrastructure people suddenly start moving deadlines forward.”
That’s happening this week. In public. From companies that don’t move deadlines for fun.
Bluvstein’s own verdict, laughing: “The world is currently, in my view, not prepared.”
He’s right.