The people actually building the most powerful AI systems on Earth stood on the same stage in New Delhi this week. What they said should keep you up at night — not because they agree, but because they don’t.

At the AI Impact Summit 2026, Sam Altman, Demis Hassabis, Sundar Pichai, and Dario Amodei each laid out their vision for what’s coming. The timelines range from two years to a decade. The disagreement itself is the story.

Altman: More Intelligence in Data Centers Than Human Brains by 2028

OpenAI’s CEO dropped the summit’s biggest bomb: “By the end of 2028, more of the world’s intellectual capacity could reside inside data centers than outside them.”

Read that again. He’s not talking about AI that passes exams or writes code. He’s claiming that within two years, the aggregate intelligence running on silicon could exceed what 8 billion human brains collectively produce.

Altman pointed to the acceleration curve as evidence. AI has gone from struggling with high school math to producing novel results in theoretical physics. India is now OpenAI’s fastest-growing market for Codex. Over 100 million Indians use ChatGPT weekly, a third of them students.

Then came the kicker: a call for an “IAEA for AI” — international oversight modeled on nuclear energy governance. When the person building the thing asks for nuclear-level regulation, the appropriate response is not dismissal.

Hassabis: We’re Brilliant and Broken at the Same Time

Google DeepMind’s CEO offered the intellectual counterweight. He puts AGI — a system with all human cognitive capabilities, including creativity and long-term planning — at five to ten years out.

More importantly, he named the specific problems still unsolved. Current AI systems are “jagged intelligences”: they win gold medals at the International Math Olympiad but fumble elementary arithmetic when questions are phrased differently. Hassabis identified three critical gaps:

  • No continuous learning after deployment
  • No coherent long-term planning
  • Fundamental inconsistency across similar tasks

His proposed AGI test is elegant: train a model with a 1911 knowledge cutoff and see if it independently derives general relativity by 1915, as Einstein did. “It’s much harder to come up with the right question than to solve the conjecture,” Hassabis said. “Today’s systems clearly would not be capable of doing that.”

Despite the caution, his stakes estimate is staggering: 10x the impact of the Industrial Revolution, happening at 10x the speed. A century of transformation compressed into a decade.

Pichai: Who Actually Benefits?

Google’s CEO pivoted from “when” to “who.” His central concern: the digital divide becoming an AI divide.

“We cannot allow the digital divide to become an AI divide,” Pichai said, backing it with concrete commitments — four new subsea fiber optic cables between India and the US, a full-stack AI hub in Andhra Pradesh with gigawatt-scale compute, and part of Google’s $15 billion India infrastructure pledge.

It’s a question the AI industry has largely dodged. It took a summit in the Global South to make it central. When you’re debating whether AGI arrives in 2028 or 2033, it’s easy to forget that most of the planet still can’t reliably deploy current-generation AI.

Amodei: The Gap Nobody Talks About

Anthropic’s CEO delivered the summit’s most underrated insight. While everyone else debated timelines, he pointed at the elephant in the room: capability doesn’t equal impact.

“There is this duality between the fundamental capabilities of the technology and the time that it takes for those capabilities to diffuse into the world,” Amodei said. “There are just frictions to adopt things through enterprises, and even more so in the developing world.”

We spend enormous energy debating whether the next model will be “AGI-level.” We spend almost none on why most businesses still can’t deploy even current AI effectively. The bottleneck isn’t intelligence. It’s integration, infrastructure, and institutional inertia.

The Governance Void

Every speaker acknowledged that AI governance needs to go global. Nobody offered a concrete mechanism beyond Altman’s IAEA analogy and India’s New Delhi Frontier AI Commitments framework.

Here’s the tension nobody resolved: the companies building frontier AI want just enough governance to create legitimacy without enough to slow them down. Altman simultaneously called for democratization — “the only fair and safe path forward” — while warning about AI-enabled bioweapons. He rejected “effective totalitarianism in exchange for a cure for cancer” while building systems that, by his own admission, could outperform any CEO within years.

These aren’t contradictions born of hypocrisy. They’re the genuine complexity of the moment. But governance frameworks built on summit speeches and voluntary commitments have a track record. Ask anyone who followed climate COPs for two decades.

Three Things This Summit Actually Revealed

The timeline debate is narrowing. Even the most cautious major lab CEO puts AGI within a decade. The most aggressive puts silicon intelligence surpassing humanity’s within two years. The window of “this is speculative” is closing.

The geography of AI is shifting. Hosting this in India — with 100+ countries participating — signals that AI governance can’t be a US-EU duopoly. India’s 1.4 billion people and massive developer population make it a key player whether Silicon Valley acknowledges it or not.

The capability-impact gap is the real story. We’re building systems that reason about physics but can’t be reliably deployed in a mid-size company’s workflow. Closing that gap — not topping the next benchmark — is where real value gets created.

The AI Impact Summit 2026 was the moment the industry’s leaders stood before the world and said, with varying degrees of confidence: this changes everything, and we don’t fully know how to manage it.

Whether anyone was really listening is another question entirely.


Sources: Business Insider, Economic Times, Hindustan Times, The Tribune, Sunday Guardian