Ping Identity has raised serious concerns about the growing authorization risks associated with AI agents in enterprise environments. In its latest commissioned research, conducted by KuppingerCole Analysts, the company highlights how traditional identity models are failing to keep pace with the rapid adoption of autonomous AI systems.

According to the report, organizations are deploying AI agents into live production environments faster than they are implementing the governance controls needed to manage them. The result is a critical gap in overseeing how these agents behave once access is granted: although each permission may look valid in isolation, AI agents can chain permissions together in ways that unintentionally bypass established security controls.

Moreover, this shift significantly changes how organizations must think about identity and access management. Instead of focusing solely on who or what has access, businesses must now evaluate how that access is actively used in real time. Traditional IAM systems were designed for human users making discrete decisions, not for AI agents continuously interacting across systems, workflows, and datasets.

Andre Durand, Chief Executive Officer & Founder of Ping Identity, emphasized the urgency of the situation.

“Enterprises are deploying autonomous AI faster than they can govern it,” Durand said.

“Identity remains foundational, but in an agentic environment it must operate continuously. Control must be enforced at the moment an action occurs.”

The research also identifies several vulnerabilities in current identity frameworks. One major issue is delegation opacity: AI agents trigger sub-agents or pass tasks through complex chains that become increasingly difficult to audit. Meanwhile, widely used identity standards such as OAuth and OpenID Connect rest on assumptions about human behavior that no longer hold when machines interact with systems independently.
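One common mitigation for delegation opacity is to make every task carry its full delegation chain, so any sub-agent action can be traced back to the human principal that initiated it. The sketch below is illustrative only; the field names and agent identifiers are assumptions, not part of any Ping Identity or KuppingerCole specification.

```python
# Minimal sketch: propagate an audit chain through agent-to-agent delegation.
# Each hand-off appends the receiving agent, so the chain always starts at
# the originating human principal and ends at the agent currently acting.

def delegate(task: dict, to_agent: str) -> dict:
    """Hand a task to a sub-agent, appending it to the audit chain."""
    return {**task, "chain": task["chain"] + [to_agent]}

# Hypothetical example: a human principal's request passes through two agents.
task = {"action": "export_report", "chain": ["alice"]}
task = delegate(task, "planner-agent")
task = delegate(task, "data-agent")
print(task["chain"])  # ['alice', 'planner-agent', 'data-agent']
```

Because the chain is immutable from the caller's perspective (each hop returns a new task), an auditor can reconstruct who delegated what to whom without trusting any single agent's local logs.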

In addition, the report highlights the growing risk of context leakage across platforms. If organizations fail to continuously re-evaluate authorization decisions, sensitive data and permissions can unintentionally propagate across systems. This raises new concerns around accountability, liability, and permission inheritance when AI agents act on behalf of users or other agents.

These challenges are not theoretical. The study references findings from IBM’s 2025 Cost of a Data Breach report, which revealed that 13% of organizations had already experienced breaches involving their AI models or applications, and that 97% of those breached lacked proper AI access controls. Incidents such as prompt injection attacks and enterprise data leaks further demonstrate how attackers can exploit gaps in AI governance.

To address these risks, the analysts propose a modern governance framework centered on identity, policy-based authorization, oversight, and accountability. Notably, this approach extends Zero Trust principles into continuous authorization, ensuring that systems validate not just identity at login, but also intent, context, and policy at every action point.
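The continuous-authorization model the analysts describe amounts to running a policy check on every agent action, not just at login. The sketch below illustrates the idea only; the policy rules, field names, and sensitivity labels are assumptions for the example, not Ping Identity's implementation.

```python
# Minimal sketch of per-action authorization: identity was verified at login,
# but each action is still checked against policy, context, and resource
# sensitivity at the moment it occurs.
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionContext:
    principal: str    # the human the agent acts on behalf of ("" if none)
    agent: str        # the agent's own identity
    action: str       # e.g. "read" or "write"
    resource: str
    sensitivity: str  # hypothetical labels: "public" or "confidential"

def authorize(ctx: ActionContext) -> bool:
    """Policy check invoked on EVERY action, not only at session start."""
    # Illustrative policy: anything public may be read; confidential
    # resources may only be read, and only for a named human principal.
    if ctx.sensitivity == "public" and ctx.action == "read":
        return True
    if ctx.sensitivity == "confidential":
        return ctx.action == "read" and ctx.principal != ""
    return False

# An agent acting for "alice" may read a confidential record...
print(authorize(ActionContext("alice", "data-agent", "read", "hr-db", "confidential")))   # True
# ...but the same agent, with no accountable principal, may not.
print(authorize(ActionContext("", "data-agent", "read", "hr-db", "confidential")))        # False
```

The design point is that the decision function takes the full action context as input, so revoking a principal, reclassifying a resource, or tightening policy takes effect on the very next action rather than at the next login.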

Martin Kuppinger, Founder of KuppingerCole Analysts, explained the broader implications of this shift.

“These trends reflect a broader shift in identity requirements,” Kuppinger said.

“As autonomous agents become more prevalent, organisations will need to extend identity and authorisation models to maintain control, accountability, and trust across increasingly dynamic environments.”

In response, Ping Identity stated that its Identity for AI solutions align with these evolving requirements. The company focuses on enabling runtime identity, dynamic policy enforcement, and governance controls specifically designed for AI agents. Additionally, KuppingerCole recognized Ping Identity’s approach to assigning unique identities to AI agents while maintaining human accountability.

Ultimately, the research underscores a critical industry transition. As enterprises move AI systems from experimentation into full-scale deployment, they must rethink identity as a continuous, real-time control mechanism. Simply granting access is no longer sufficient—organizations must ensure that control persists throughout every action an AI agent performs.
