AI Agents Can Work Faster Than Humans and Fail Harder Too

AI agents promise efficiency but risk chaos if over-permissioned. Smarter boundaries—not more autonomy—will determine how safely enterprises scale AI.

AI agents are no longer confined to research labs or developer sandboxes—they’re moving into production. Across industries, they’re writing code, reconciling invoices, managing infrastructure, and even approving transactions. The promise is efficiency and speed. The peril is that most of these systems still rely on human-oriented permission models that can’t safely govern autonomous behavior.

I’ve continued to beat the drum all year about how AI is transforming cybersecurity, identity, and the modern enterprise. The simple reality is that whenever a new technology accelerates capability, it also amplifies risk. Agentic AI is the latest—and perhaps sharpest—example. Machines can act faster than people, but they can also fail harder.

When human trust models meet machine speed

Traditional access control frameworks were built around human rhythms. Users log in, complete tasks, and log off. They make mistakes, but they do so slowly enough for controls to catch up. AI doesn’t operate on that timescale. Agents act continuously, across multiple systems, and without fatigue.

That’s why Graham Neray, co-founder and CEO of Oso Security, calls authorization “the most important unsolved problem in software.” As he put it when we spoke, “Every company that builds software ends up reinventing authorization from scratch—and most do it badly. Now we’re layering AI on top of that foundation.”

The problem isn’t intent—it’s infrastructure. Most companies are trying to teach new systems to act autonomously while still managing permissions through static roles, hard-coded logic, and spreadsheets. It’s a model that barely worked for humans. For machines, it’s a liability.

An AI agent can execute thousands of actions per second. If one of those actions is misconfigured—or maliciously prompted—it can cascade through a production environment long before anyone intervenes. A single over-permissioned key can become a self-inflicted breach.

The ROI trap

There’s a second, more subtle pressure at play: the race to prove return on investment.

As Todd Thiemann, principal analyst at Omdia, explained, “Enterprise IT teams are under pressure to demonstrate a tangible ROI of their generative AI investments, and AI agents are a prime method to generate efficiencies to generate ROI. Security generally, and identity security in particular, can fall by the wayside in the rush to get AI agents into production to show results.”

It’s a familiar pattern: innovation first, security later. But the stakes are higher when the technology can act independently. “You don’t want all of the permissions the human user might have being given to the agent acting on behalf of the human,” Thiemann said. “AI agents lack human judgment and contextual awareness, and that can lead to misuse or unintended escalation if the agent is given broad, human-equivalent permission.”

It’s easy to assume that an AI working on your behalf should inherit your permissions, but that’s precisely what creates exposure. If the model goes off-script—or if its prompt chain is manipulated—it can perform high-risk actions with human-level authority and zero human restraint.

Thiemann gave a simple, real-world example: an agent that automates payroll validation should never have the ability to initiate or approve money transfers, even if its human counterpart can. “Such high-risk actions should require human approval and strong multi-factor authentication,” he added.
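
To make that concrete, here is a minimal sketch in Python of how an agent's entitlements can be separated from its human counterpart's. The role names, scope strings, and functions are illustrative assumptions rather than any specific product's API: the payroll-validation agent holds only read and validate scopes, and anything on the high-risk list is escalated to a person instead of executed.

```python
# Illustrative sketch: task-scoped agent permissions with a human-approval gate.
# All names (roles, scopes, exceptions) are hypothetical.

AGENT_PERMISSIONS = {
    # The agent validates payroll but holds none of the human user's payment rights.
    "payroll-validation-agent": {"payroll:read", "payroll:validate"},
}

HIGH_RISK_ACTIONS = {"payments:initiate", "payments:approve"}


class HumanApprovalRequired(Exception):
    """Signals that an action must be escalated to a human approver."""


def authorize(agent_id: str, action: str) -> None:
    if action in HIGH_RISK_ACTIONS:
        # Never auto-executed by an agent, even if its human counterpart could do it.
        raise HumanApprovalRequired(f"{action} requires human sign-off")
    if action not in AGENT_PERMISSIONS.get(agent_id, set()):
        raise PermissionError(f"{agent_id} is not granted {action}")


authorize("payroll-validation-agent", "payroll:validate")  # allowed

try:
    authorize("payroll-validation-agent", "payments:initiate")
except HumanApprovalRequired as escalation:
    print(f"Escalated: {escalation}")  # the transfer is queued for a person, not run
```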

This isn’t just a best practice—it’s an existential control.

Building smarter boundaries

Neray frames the problem differently but arrives at the same conclusion. Authorization is the deterministic layer that must contain probabilistic systems. “You can’t reason with an LLM about whether it should delete a file,” he told me. “You have to design hard rules that prevent it from doing so.”
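
A rough sketch of what a "hard rule" can look like in practice is a deterministic gate between the model's requested tool calls and anything that actually runs. The tool names and dispatcher below are hypothetical, but the pattern is framework-agnostic: a destructive capability that is never registered simply cannot be invoked, no matter what the model outputs.

```python
# Illustrative sketch: a deterministic allowlist around an LLM's tool calls.
# Tool names and the dispatcher are hypothetical.

from typing import Any, Callable

ALLOWED_TOOLS: dict[str, Callable[..., Any]] = {
    "list_invoices": lambda: ["INV-001", "INV-002"],
    "read_report": lambda name: f"contents of {name}",
}
# Note: no "delete_file" entry. The model can ask for it; the runtime will refuse.


def execute_tool_call(tool_name: str, **kwargs: Any) -> Any:
    tool = ALLOWED_TOOLS.get(tool_name)
    if tool is None:
        # The rule is enforced in code, not negotiated with the model.
        raise PermissionError(f"Tool '{tool_name}' is not permitted for this agent")
    return tool(**kwargs)


print(execute_tool_call("list_invoices"))

try:
    execute_tool_call("delete_file", path="/prod/data.db")
except PermissionError as err:
    print(err)
```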

That’s where the idea of automated least privilege comes in—granting only the permissions necessary for a specific task, for a defined period of time, and automatically revoking them afterward. It’s access as a transaction, not a permanent entitlement.
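
As a rough illustration of access as a transaction, the sketch below (hypothetical names, no particular vendor) issues a grant tied to one task and a short time window; when the window closes, the permission is gone without anyone remembering to revoke it.

```python
# Illustrative sketch: a short-lived, task-scoped grant instead of a standing role.
# Grant and issue_grant are hypothetical, not a specific product's API.

import time
from dataclasses import dataclass


@dataclass(frozen=True)
class Grant:
    agent_id: str
    scopes: frozenset[str]
    expires_at: float

    def allows(self, scope: str) -> bool:
        # Valid only for the listed scopes and only until expiry.
        return scope in self.scopes and time.time() < self.expires_at


def issue_grant(agent_id: str, scopes: set[str], ttl_seconds: int = 300) -> Grant:
    """Issue a time-boxed grant for a single task."""
    return Grant(agent_id, frozenset(scopes), time.time() + ttl_seconds)


grant = issue_grant("invoice-agent", {"invoices:read"}, ttl_seconds=300)
assert grant.allows("invoices:read")          # scoped to the task at hand
assert not grant.allows("payments:approve")   # never part of the grant
# Once expires_at passes, allows() returns False; no manual revocation step needed.
```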

I’ve seen this shift before. In cloud security, continuous monitoring replaced static configurations. In data governance, policy automation replaced manual approvals. Now, authorization must make the same leap—from passive to adaptive, from compliance to real-time control.

Oso Security is one company trying to operationalize that transition, turning authorization into a modular, API-driven layer rather than bespoke code scattered across microservices. It’s a pragmatic fix for a systemic problem. As Neray put it, “We spent a decade making authentication easier with Okta and Auth0. Authorization is the next frontier.”
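
The architectural shift is easier to see in code. Below is a hedged sketch of the pattern itself (a generic policy decision point, not Oso's actual API): every service asks one shared authorization layer for a yes-or-no decision instead of embedding its own role checks.

```python
# Illustrative sketch: one central policy decision point instead of per-service checks.
# AuthzClient and its check() method are generic placeholders, not Oso's API.

class AuthzClient:
    def __init__(self) -> None:
        # In a real deployment this would call an authorization service over the network;
        # here a tiny in-memory policy stands in for it.
        self._policy = {
            ("invoice-agent", "read", "invoice"): True,
            ("invoice-agent", "approve", "payment"): False,
        }

    def check(self, actor: str, action: str, resource: str) -> bool:
        # Default-deny: anything not explicitly allowed is refused.
        return self._policy.get((actor, action, resource), False)


authz = AuthzClient()

# Any microservice can ask the same question the same way:
if authz.check("invoice-agent", "read", "invoice"):
    print("read allowed")
if not authz.check("invoice-agent", "approve", "payment"):
    print("approval denied by central policy")
```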

Governance, not prohibition

CISOs are starting to understand this. Many are getting involved earlier in AI deployment cycles, not to block innovation but to make it sustainable. Bans don’t work. Guardrails do.

The challenge is balancing speed with safety—allowing agents to act autonomously within clearly defined boundaries. In practice, that means limiting privileges, enforcing human-in-the-loop checks for sensitive actions, and logging every access decision for visibility and audit.
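
The logging piece is the easiest place to start. A minimal sketch follows (illustrative field names, printed to stdout rather than shipped to a real SIEM) that records every allow and every deny so an agent's behavior can be reconstructed after the fact.

```python
# Illustrative sketch: record every authorization decision, allow or deny.
# Field names are assumptions; production systems would forward these to a SIEM.

import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent.authz")


def record_decision(agent_id: str, action: str, allowed: bool, reason: str) -> None:
    # Denials are logged too: what an agent *tried* to do is often the signal that matters.
    audit_log.info(json.dumps({
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "allowed": allowed,
        "reason": reason,
    }))


record_decision("invoice-agent", "invoices:read", True, "scope granted for current task")
record_decision("invoice-agent", "payments:approve", False, "high-risk action: human approval required")
```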

As Thiemann noted, “Minimizing those privileges can minimize the potential blast radius of any mistake or incident. And excessive privileges will lead to auditing and compliance issues when accountability is required.”

Trust, but verify—continuously

Autonomy isn’t about removing humans from the loop; it’s about redefining where the loop sits. Machines can handle repetitive, low-risk actions at speed. Humans should remain the final checkpoint for high-impact ones.

The organizations that get this balance right will move faster with fewer mistakes—and they’ll have the telemetry to prove it. Those that don’t will end up throttling innovation or explaining preventable failures to regulators and investors.

AI doesn’t just change what’s possible—it changes what’s tolerable. The future of safe autonomy depends less on how smart the models become and more on how intelligently we design their boundaries.

Machines don’t need more power. They need better permissions.
