The Question That Security Teams Cannot Currently Answer
Ask the average enterprise security team how many AI agents are running across their environment right now. Ask them what identities those agents are using, what tools they are calling, what MCP servers they are connecting to, and what data they are accessing in real time.
Most cannot answer any of those questions with confidence. Some cannot answer them at all.
That is not a governance documentation failure or a policy gap. It is a fundamental visibility failure at a layer of enterprise infrastructure that is growing faster than any security tooling category has historically managed to govern before incidents begin to accumulate. Permiso Security’s AI agent runtime security announcement, validated by a Fortune 500 deployment at Autodesk, addresses that visibility failure with a set of capabilities that the market has been building toward without yet producing in integrated form.
The distinction Permiso is drawing, between posture management and runtime security, is the analytical center of this announcement and the most important concept for enterprise security leaders to internalize before their next AI governance investment decision.
As enterprises confront AI agent visibility gaps, another hidden risk often remains buried in contracts. Vendor AI accountability, data usage rights, compliance obligations, and governance commitments frequently sit in static agreements with limited visibility. Agiloft CLM + AI transforms that contract data into actionable intelligence for stronger enterprise AI governance.
Why Posture Is Not Enough and What Runtime Actually Means
The AI agent security market has coalesced significantly around posture management in the past 12 months. Vendors across the identity governance, cloud security posture management, and non-human identity categories have built capabilities that answer posture questions: where agents are deployed, how they authenticate, what permissions they hold, and whether those permissions align with organizational policy.
Posture management answers those questions at a point in time. It produces a snapshot of the agent environment as it exists when the scan or assessment runs. That snapshot has genuine value for governance documentation, compliance reporting, and access review processes.
It does not answer the question that Permiso co-CEO Jason Martin identifies as the one that actually keeps security professionals awake: not what an agent is allowed to do, but what it is doing right now, and whether you can stop it.
The distinction matters because AI agents are non-deterministic systems. Their behavior at runtime is context-dependent, shaped by the inputs they receive, the tool calls they make, the data they access, and the sub-agents they spawn in response to evolving task requirements. An agent that holds appropriate permissions at the posture assessment level can still behave in ways that create security incidents at the runtime level, accessing data outside its intended scope, calling tools in unexpected sequences, or interacting with downstream systems in ways that no static permission review would flag.
Traditional identity providers lose visibility the moment an agent authenticates. NHI security tools that treat agents as static machine identities miss the human-like behavioral dimension that makes agent security fundamentally different from service account governance. Permiso’s runtime approach maintains continuous visibility through the full agent lifecycle, from code repository through deployment, active operation, and containment, capturing the behavioral evidence that posture snapshots cannot produce.
The Six Capabilities and Why Each Addresses a Distinct Gap
The Permiso platform’s six core AI agent runtime security capabilities are not feature additions to an existing product. They address six distinct failure modes in how current security tooling handles AI agent environments, and understanding each failure mode clarifies why the integrated capability set matters more than any individual component.
Discovery That Reaches Beyond Traditional Identity Tooling
The agent and sub-agent discovery capability inventories AI agents running in Lambdas, containers, and virtual machines that traditional identity tools cannot see. That coverage scope is significant because AI agent deployments frequently leverage serverless and containerized execution environments precisely because they offer the scaling characteristics that agentic workloads require. A discovery capability that covers SaaS and IdP-registered agents but misses the Lambda functions and containers running agentic workloads is producing an incomplete inventory that creates a false confidence risk.
The explicit coverage of shadow agents, those running outside formal registration and governance processes, addresses the deployment pattern that most commonly produces the worst security outcomes. Ungoverned agent deployments are the agentic equivalent of shadow IT, carrying all of the access and data exposure risks of formally deployed agents without any of the governance controls.
Identity Attribution That Maintains the Accountability Thread
The identity attribution capability ties every agent run, event, tool call, and MCP invocation to a specific human, non-human, or AI identity, visualized through Permiso’s agent graph and preserved as a complete audit trail.
That accountability thread is what the Autodesk Chief Trust Officer Sebastian Goodwin identifies as non-negotiable for enterprise AI security at scale. When an AI agent performs an action that requires investigation, the ability to trace that action back to the initiating identity, through whatever chain of sub-agent delegation and tool calling preceded it, is the forensic foundation that makes incident response possible. Without it, investigation of AI agent incidents requires reconstructing accountability chains from fragmented logs across multiple systems, a process that is time-consuming at best and impossible at worst.
The MCP server invocation tracking dimension is particularly forward-looking. Model Context Protocol has emerged as a significant integration standard for connecting AI agents to external tools and data sources. As MCP adoption grows across enterprise AI deployments, the ability to track which MCP servers an agent is calling, with what frequency and what data, becomes a critical visibility requirement that most current security tooling does not address.
Tool and Infrastructure Observability as the Behavioral Evidence Layer
Capturing what tools an agent called, what data it accessed, and what downstream systems it reached provides the behavioral evidence layer that distinguishes meaningful anomaly detection from alert volume generation.
An anomaly detection system that knows an agent called a specific tool in an unexpected sequence, accessed a data store outside its typical pattern, and subsequently reached a downstream system it had not previously interacted with has the behavioral context to generate a high-confidence, high-priority alert. An anomaly detection system working from permission data alone can only flag when an agent attempts something outside its authorized scope, missing the class of incidents where agents use authorized capabilities in unauthorized patterns.
Runtime Detection Surfaced in Existing Alert Infrastructure
The integration of AI agent runtime alerts into the same module security teams already use for human and non-human identity threats is an adoption design decision that deserves specific recognition.
Introducing a new alert console for AI agent security creates alert fatigue fragmentation, splitting security team attention across multiple interfaces and reducing the likelihood that agent-related alerts receive the response urgency they require. Surfacing agent behavioral anomalies, over-privileged access, unusual tool usage, policy violations, and high blast radius behavior alongside existing identity threat alerts maintains workflow continuity for security operations teams and ensures that agent security incidents compete for response priority on the same basis as other identity threats.
Behavioral Skill Sandboxing for Pre-Deployment Risk Assessment
The behavioral skill sandboxing capability addresses a pre-deployment risk assessment gap that most AI agent security programs have not yet built. New agent skills, the tools and capabilities that agents can invoke, carry behavioral risk profiles that are difficult to assess through code review alone given the non-deterministic nature of how agents use those skills in practice.
Sandboxing that simulates agent skill behavior in an isolated environment before deployment provides an empirical basis for risk assessment that complements static code analysis and permission review. For organizations deploying agents with access to sensitive data stores or critical infrastructure systems, pre-deployment behavioral validation is the kind of control that can prevent the class of incidents that occur when an agent skill behaves differently in production than its specification suggested.
Kill Switches and Approval Gates at Machine Speed
The identity-first controls section, including least privilege recommendations based on actual observed agent behavior, approval gates for high-risk actions, and kill switches that operate at machine speed, addresses the response capability gap that discovery and detection alone cannot close.
Detection without response capability produces alerts that document incidents rather than preventing or containing them. Kill switch capability that operates at the speed agents act, rather than at the speed human analysts can respond, is what converts runtime visibility into runtime protection. The distinction between a security tool that tells you an agent did something problematic after the fact and one that can stop that behavior as it occurs is the difference between forensic capability and active defense.
The Autodesk Validation and What It Signals to Enterprise Buyers
Autodesk’s deployment of Permiso’s AI agent runtime security capabilities is the most commercially significant element of this announcement for enterprise security teams evaluating the market.
Autodesk is a Fortune 500 organization with a complex, global technology environment spanning workforce AI deployment, cloud infrastructure, and AI-powered product features. The Chief Trust Officer’s description of the deployment, covering agent discovery, full registry maintenance, identity attribution, and continuous event monitoring across runs, tool calls, and data access, describes a production-scale implementation rather than a proof-of-concept evaluation.
The framing that Permiso was already Autodesk’s identity security platform before extending to agentic AI identities is the most important commercial signal in the announcement. It confirms the platform extension model that Permiso co-CEO Paul Nguyen articulates explicitly: not asking customers to buy a new product, but extending the platform they already trust to cover the fastest-growing and least-governed identity class in the enterprise.
For security teams already managing identity security programs, that extension model reduces the evaluation and procurement friction that new product categories typically require. An existing identity security platform that can extend to AI agent runtime coverage without a separate deployment, integration, and training investment represents a materially lower adoption barrier than a purpose-built AI agent security tool requiring parallel deployment alongside existing infrastructure.
LLMjacking and the Threat Research Foundation
The reference to Permiso’s P0 Labs research, including the discovery of LLMjacking attack techniques, cross-prompt injection vulnerabilities in enterprise AI copilots, and analysis of malicious AI agent skills across public marketplaces, provides a threat intelligence foundation that distinguishes the platform’s detection capabilities from those built on theoretical attack modeling.
LLMjacking, the unauthorized use of compromised cloud credentials to access large language model APIs at the victim organization’s expense, is a documented and growing threat pattern that represents a specific economic and security risk for organizations running AI workloads in cloud environments. Detection capability built from first-party research into actual LLMjacking campaigns is more likely to surface true-positive alerts than detection logic built from theoretical threat models.
The cross-prompt injection research is particularly relevant given the growing deployment of AI copilots with access to enterprise data sources. Cross-prompt injection attacks manipulate AI systems into performing unintended actions by embedding malicious instructions in data that the AI processes, a threat category that requires specific detection logic calibrated to AI agent behavior patterns rather than conventional network or endpoint attack signatures.
What This Means for the AI Identity Security Market Trajectory
Permiso‘s announcement, alongside the parallel announcements from SailPoint, Akeyless, Semperis, and other identity vendors addressed earlier in this editorial series, confirms that AI agent identity security is completing its transition from an emerging concern to an active enterprise procurement category.
The vendors establishing meaningful positions in this category share a common architectural characteristic: they are extending proven identity security capabilities into the AI agent domain rather than building standalone AI security tools that require parallel deployment and management. That extension model is winning in enterprise evaluation contexts because it reduces adoption friction, leverages existing deployment infrastructure, and maintains the governance continuity that compliance-focused buyers require.
What Permiso contributes that differentiates it within that competitive set is the runtime behavioral layer. Posture management is becoming table stakes across the identity security vendor landscape. Runtime visibility into what agents are actually doing, who they are doing it as, what tools they are calling, and whether their behavior deviates from established patterns is the capability dimension where enterprise security programs have the most acute gap and the least mature tooling.
The organizations that move quickly to establish runtime visibility into their AI agent environments will be better positioned to detect, investigate, and contain the agent security incidents that Permiso co-CEO Jason Martin correctly identifies as inevitable. Agents will do things they were not supposed to do. The security question is whether the visibility infrastructure exists to know when that happens and the response capability to act before the consequences compound.
Research and Intelligence Sources: Permiso Security
To participate in our interviews, please write to our CyberTech Media Room at info@intentamplify.com
🔒 Login or Register to continue reading




