The enterprise world’s favorite new toy and its worst new nightmare are the same thing: agentic AI.
OutSystems’ 2026 State of AI Development report just dropped the numbers, and they’re staggering. 96% of organizations are already using AI agents — not experimenting, not piloting, using. And 97% are exploring system-wide agentic strategies that would embed autonomous decision-making into core operations.
Here’s the catch: 94% of those same organizations say AI sprawl is increasing complexity, technical debt, and security risk across their enterprises.
We went from “should we adopt AI agents?” to “we can’t stop adopting them and we’re losing control” in about eighteen months.
From Copilots to Autonomous Agents
The leap from generative AI to agentic AI is bigger than most people realize. ChatGPT-era tools were smart assistants — you ask, they answer, you decide. Agentic AI is fundamentally different. These systems autonomously execute workflows, make decisions, and adapt in real time. An AI agent can monitor your supply chain, reroute orders when a supplier goes dark, update your ERP, and notify stakeholders — all without a human touching anything.
Gartner predicted last August that 40% of enterprise apps would feature task-specific AI agents by the end of 2026, up from under 5% in 2025. That estimate was probably conservative. OutSystems found that 49% of the 1,900 global IT leaders surveyed already describe their agentic capabilities as “advanced” or “expert.”
The Sprawl Nobody Planned For
The first AI agents were manageable. One team, one agent, one purpose. Easy to monitor, easy to control.
Then they multiplied.
A SANS Institute report released the same week found that 74% of organizations are already running AI agents or automations that need their own credentials. Non-human identities — service accounts, API keys, automation bots — have grown by 76% across surveyed organizations. In many enterprises, they’ve quietly doubled or tripled.
Each agent needs access permissions. Each one makes autonomous decisions. And unlike traditional automation that follows fixed logic, agentic AI interprets instructions and can take unpredictable actions. SANS Institute described them as behaving like “an over-privileged insider, but operating at machine speed.”
This isn’t just an operational headache. Forrester has warned that an agentic AI deployment will cause a publicly disclosed data breach before the end of 2026. That feels less like a prediction and more like a countdown.
The Governance Gap
The most alarming finding isn’t adoption speed — it’s the near-total absence of governance frameworks to match it.
SANS found that 92% of organizations fail to rotate machine credentials on a 90-day cycle, fearing it might break service accounts. Fifty-nine percent rotate fewer than half their non-human credentials quarterly. And a genuinely terrifying 5% don’t even know if they’re running agentic AI in their organization at all.
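The 90-day rotation check itself is trivial to automate — the hard part is having an inventory to run it against. Here’s a minimal sketch in Python, using hypothetical field names (`name`, `last_rotated`) and example data, of the kind of staleness audit a credential inventory would enable:

```python
from datetime import datetime, timedelta, timezone

ROTATION_WINDOW = timedelta(days=90)

def find_stale_credentials(credentials, now=None):
    """Return entries whose last rotation is older than the 90-day window."""
    now = now or datetime.now(timezone.utc)
    return [c for c in credentials if now - c["last_rotated"] > ROTATION_WINDOW]

# Hypothetical inventory of non-human identities
inventory = [
    {"name": "ci-bot", "last_rotated": datetime(2026, 1, 5, tzinfo=timezone.utc)},
    {"name": "supply-agent", "last_rotated": datetime(2025, 6, 1, tzinfo=timezone.utc)},
]

stale = find_stale_credentials(inventory, now=datetime(2026, 3, 1, tzinfo=timezone.utc))
print([c["name"] for c in stale])  # → ['supply-agent']
```

The check is the easy part; the 92% figure suggests most organizations lack either the inventory or the confidence to act on its output.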
Five percent of surveyed organizations cannot tell you whether autonomous AI systems are operating inside their infrastructure.
Most enterprises are deploying agents across fragmented environments — different teams spinning up different agents on different platforms with different access levels and zero unified oversight. As SANS instructor Richard Greene put it: “We’ve already seen what happens when non-human identities scale without guardrails, and agentic AI is moving even faster.”
Human-on-the-Loop: The Middle Ground
There’s one encouraging signal. About 52% of organizations now use a “human-on-the-loop” model — AI agents operate autonomously within defined guardrails, while humans maintain supervisory control and can intervene at critical decision points. It’s the enterprise version of “trust but verify.”
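In code, the human-on-the-loop pattern reduces to a policy gate in front of every agent action: actions inside the guardrails execute autonomously, anything outside lands in a human review queue. A minimal sketch, with hypothetical guardrail parameters (`max_order_value`, `allowed_actions`) invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Guardrails:
    # Hypothetical policy: which actions an agent may take, and up to what value
    max_order_value: float = 10_000.0
    allowed_actions: frozenset = frozenset({"reroute_order", "update_erp", "notify"})

@dataclass
class Supervisor:
    guardrails: Guardrails
    review_queue: list = field(default_factory=list)

    def submit(self, action: str, value: float) -> str:
        # Inside the guardrails: the agent acts autonomously.
        if (action in self.guardrails.allowed_actions
                and value <= self.guardrails.max_order_value):
            return "executed"
        # Outside the guardrails: escalate to a human at the decision point.
        self.review_queue.append((action, value))
        return "escalated"

supervisor = Supervisor(guardrails=Guardrails())
print(supervisor.submit("reroute_order", 2_500.0))    # → executed
print(supervisor.submit("cancel_contract", 50_000.0)) # → escalated
```

The scale problem described above shows up exactly here: the `review_queue` grows linearly with escalations, and a human team can only drain it so fast.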
The problem is scale. When hundreds or thousands of agents are each making decisions every second across cloud, DevOps, and SaaS systems, manual governance processes collapse. What worked for ten agents fails completely at a thousand.
The Regulatory Split
Europe is ahead. The EU AI Act requires that high-risk AI risk management be an ongoing, evidence-based process built into every deployment stage. In practice, this means enterprises operating in Europe need a comprehensive registry of every AI agent in operation — each uniquely identified, with documented capabilities and permissions.
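What such a registry looks like in practice is straightforward: a unique identifier per agent plus documented capabilities and permissions. A minimal sketch, with hypothetical class and field names invented for illustration:

```python
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentRecord:
    name: str
    owner_team: str
    capabilities: tuple   # documented capabilities, e.g. ("reroute_order",)
    permissions: tuple    # scoped access grants, e.g. ("erp:write",)
    # Unique identifier, as the EU AI Act's registry requirement implies
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))

class AgentRegistry:
    """Central inventory of every agent in operation."""

    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord) -> str:
        self._agents[record.agent_id] = record
        return record.agent_id

    def lookup(self, agent_id: str) -> AgentRecord:
        return self._agents[agent_id]

    def __len__(self):
        return len(self._agents)
```

The engineering is trivial; the organizational discipline of keeping it complete, across every team spinning up agents, is not.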
Technical solutions are emerging. The Python SDK Asqav can cryptographically sign each agent’s action and link records to an immutable hash chain — blockchain-style verification for AI governance. It sounds like overkill today and will seem obvious in six months.
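Asqav’s actual API isn’t shown here, but the underlying idea — each record commits to the hash of the one before it, and a per-agent key signs each record — can be sketched with Python’s standard library alone. Class and field names below are hypothetical:

```python
import hashlib
import hmac
import json

class AuditChain:
    """Hash-chained audit log: each record commits to its predecessor,
    and an HMAC signature authenticates the agent that wrote it."""

    GENESIS = "0" * 64

    def __init__(self, agent_key: bytes):
        self.agent_key = agent_key
        self.records = []
        self.prev_hash = self.GENESIS

    def append(self, action: dict) -> dict:
        payload = json.dumps({"action": action, "prev": self.prev_hash}, sort_keys=True)
        signature = hmac.new(self.agent_key, payload.encode(), hashlib.sha256).hexdigest()
        record = {"payload": payload, "sig": signature}
        self.prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.records.append(record)
        return record

    def verify(self) -> bool:
        prev = self.GENESIS
        for rec in self.records:
            data = json.loads(rec["payload"])
            if data["prev"] != prev:
                return False  # chain broken: a record was altered or removed
            expected = hmac.new(self.agent_key, rec["payload"].encode(),
                                hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, rec["sig"]):
                return False  # signature mismatch: record not from this agent
            prev = hashlib.sha256(rec["payload"].encode()).hexdigest()
        return True
```

Tampering with any record breaks both its signature and every later link in the chain, which is what makes this kind of log usable as audit evidence.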
The United States, meanwhile, continues its patchwork approach. The White House AI framework from March 2026 tried to create “one rulebook” to preempt state regulation, but it’s lighter on the granular agent-governance requirements that the EU mandates.
What Actually Works
The organizations getting this right — mostly financial services and tech companies — share a common approach: build governance alongside capability, not as an afterthought.
OutSystems CEO Woodson Martin framed it well: “The challenge is no longer just about adoption, but about creating a stable architectural foundation that can coordinate these complex intelligent systems to drive real-world productivity.”
Start small. Prove value. Build the controls as you build the agents. It’s not exciting advice, but it’s the only advice that doesn’t end with a breach disclosure.
The Bottom Line
The speed at which agentic AI has gone from concept to near-universal adoption has outpaced every framework we have for managing it. The next twelve months will determine whether enterprises can build governance structures fast enough to match the agents they’ve already deployed.
The companies that figure this out — centralized agent registries, automated credential rotation, human-on-the-loop oversight at scale, cryptographic audit trails — won’t just avoid breaches. They’ll capture the productivity gains that agentic AI actually promises.
The rest will become case studies in what happens when you deploy autonomous systems faster than you can control them.
Here’s the question worth asking at your next board meeting: How many AI agents are running inside your organization right now — and can anyone actually tell you what they’re all doing?