For years, AI lived in the innovation bucket. Experimentation. Productivity. Talent leverage. Governance, where it existed, sat downstream. Legal would handle policy. Security would handle controls. Engineering would move fast and fix later.

Zscaler’s ThreatLabz 2026 AI Security Report makes that model indefensible. Nearly one trillion AI and machine learning transactions, observed across real enterprise environments in 2025, point to a simple conclusion: AI adoption has outpaced the oversight meant to govern it.

Zscaler ThreatLabz 2026 AI Security Report

This article is based on findings from the Zscaler ThreatLabz 2026 AI Security Report, which analyzed 989.3 billion AI and machine learning transactions observed across the Zscaler Zero Trust Exchange™ between January and December 2025. 

The report details enterprise AI adoption trends, data exposure risks, agentic AI threats, compromise timelines, and the growing need for Zero Trust architecture and continuous AI governance in modern enterprises.

Key Findings of the Report

AI adoption is accelerating faster than enterprise oversight. Despite 200% AI usage growth in key sectors, many organizations still lack a basic inventory of AI models and embedded AI features, elevating AI governance to a board-level priority.

Enterprise AI systems are vulnerable at machine speed. Systems could be compromised in as little as 16 minutes, with critical flaws uncovered in 100% of those analyzed.

AI capabilities are proliferating rapidly across the enterprise. The number of applications driving AI/ML transactions quadrupled year-over-year to more than 3,400, increasing complexity and reducing centralized visibility.

AI is becoming a high-volume conduit for sensitive enterprise data. Data transfers to AI/ML applications surged 93%, totaling more than 18,000 terabytes and making AI platforms an increasingly attractive target for cybercriminals worldwide.

AI Outpaced Oversight Before Governance Reacted

Zscaler tracked AI activity across more than 3,400 applications, a fourfold increase year over year. Yet many enterprises still lack a basic inventory of AI systems and embedded AI features operating inside their environments.

Diana Kelley, Chief Information Security Officer at Noma Security, works with enterprises contending with this problem. Her framing is blunt:

“AI risks have rapidly moved from a watch list item to a front-line security concern, especially when it comes to data security and misuse. To manage this emerging threat landscape, security teams need a mature, continuous security approach, which includes blue team programs, starting with a full inventory of all AI systems, including agentic components as a baseline for governance and risk management.”

Governance frameworks assume the asset being governed is known. AI breaks that assumption. When engineering alone accounts for nearly half of enterprise AI usage, governance cannot exist as a static policy artifact. It has to be operational, continuous, and technical enough to keep up.
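To make that concrete, the sketch below shows, in Python, one way an AI asset inventory can be treated as a living control rather than a static document. The record fields, asset types, and the 30-day re-verification window are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical record for a single AI asset: a standalone tool, an embedded
# feature, or an agentic component discovered in the environment.
@dataclass
class AIAsset:
    name: str                          # e.g. "embedded-crm-assistant"
    asset_type: str                    # "standalone", "embedded", or "agent"
    owner: str                         # accountable team or business unit
    data_classes: list[str] = field(default_factory=list)   # data it touches
    last_verified: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def stale_assets(inventory: list[AIAsset], max_age_days: int = 30) -> list[AIAsset]:
    """Return assets whose entry has not been re-verified recently.

    Embedded AI features update themselves, so a point-in-time inventory
    decays quickly; treating verification age as a governance signal is one
    way to keep the inventory continuous rather than static.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [asset for asset in inventory if asset.last_verified < cutoff]
```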

Visibility Is the First Failure Point

Most boards believe their organizations have “reasonable visibility” into AI usage. The data suggests otherwise. AI functionality is no longer confined to standalone tools. It is embedded inside platforms that enterprises already trust. 

Features update automatically. Capabilities expand silently. Data flows change without explicit approval cycles.
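As one illustration of what continuous visibility could mean in practice, the sketch below flags egress traffic to AI services that never went through an approval cycle. The domain names and log fields are hypothetical placeholders, not real services or a real log format.

```python
# Hypothetical examples: destinations known to be AI services, and the subset
# that has actually been through an approval process.
KNOWN_AI_DOMAINS = {"api.example-llm.com", "assistant.example-suite.com"}
APPROVED_AI_DOMAINS = {"assistant.example-suite.com"}

def unapproved_ai_flows(egress_log: list[dict]) -> list[dict]:
    """Flag traffic to AI services that no approval cycle has covered."""
    return [
        entry for entry in egress_log
        if entry["destination"] in KNOWN_AI_DOMAINS
        and entry["destination"] not in APPROVED_AI_DOMAINS
    ]

# Example proxy-log entries (illustrative field names).
log = [
    {"user": "dev-42", "destination": "api.example-llm.com", "bytes_out": 8_400_000},
    {"user": "sales-7", "destination": "assistant.example-suite.com", "bytes_out": 120_000},
]
for flow in unapproved_ai_flows(log):
    print("Unapproved AI data flow:", flow)
```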

Kelley points to a deeper problem. Stack sprawl.

“Securing AI in 2026 and beyond is not just about protecting models. It requires addressing stack sprawl and moving toward a platform-driven approach that delivers defense in depth through unified, AI-aware identity, configuration, and data visibility. Organizations that simplify their cloud and AI security stack and enable effective automation will be far better positioned to safely scale AI as threats continue to evolve.

“I think the next wave of risk will stem from the broad adoption of agentic AI, systems that leverage the “reasoning” capabilities of LLMs to drive autonomous workflows. To prepare, organizations should implement agentic risk management, starting with established policies and standard operating procedures and supported by technical controls like cryptographic identity attestation and continuous policy enforcement for AI agents. This will allow enterprises to monitor and constrain agent autonomy to gain the benefits of agentic AI without putting the organization at unnecessary risk.”

More tools do not automatically mean more control. In many AI-heavy environments, they mean more blind spots. Boards approving incremental security spend without addressing fragmentation are often funding complexity, not resilience.

Machine-Speed Attacks Break Human-Speed Controls

One of the most destabilizing findings in Zscaler’s report concerns time.

Enterprise AI systems were found to be compromise-ready in as little as 16 minutes, with critical weaknesses identified across all analyzed environments. That window matters because it erases the buffer most governance models depend on.

Detection, review, escalation, remediation. These processes assume time. AI-driven attackers do not grant it.

They probe continuously. They adapt instantly. They exploit misconfigured identities, over-permissioned agents, and unsecured integrations faster than human-dependent defenses can respond.

Agentic AI Turns Policy Gaps Into Operational Risk

Static AI introduces exposure. Agentic AI introduces autonomy. As systems begin to reason, act, and chain workflows across domains, weak governance stops being a passive risk and becomes an active liability.

Agentic systems do not wait for quarterly reviews. Governance that is not enforced technically and continuously does not apply to them at all.

Boards approving agentic AI strategies without demanding equivalent advances in identity, enforcement, and monitoring are delegating authority without control.
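Kelley’s call for continuous policy enforcement for AI agents can be pictured as a deny-by-default gate that every proposed agent action must clear before it runs. The sketch below is a simplified illustration; the agent names, actions, and policy shape are assumptions rather than a reference design.

```python
# Deny-by-default policy gate: an agent proposes an action, and the gate only
# allows it if an explicit rule permits that agent, action, and resource.
ALLOWED_ACTIONS = {
    # (agent_id, action, resource_prefix) tuples the organization has approved
    ("report-agent", "read", "crm/accounts/"),
    ("report-agent", "write", "reports/drafts/"),
}

def is_permitted(agent_id: str, action: str, resource: str) -> bool:
    """Permit only actions covered by an explicit rule; deny everything else."""
    return any(
        agent_id == a and action == act and resource.startswith(prefix)
        for a, act, prefix in ALLOWED_ACTIONS
    )

def execute_agent_action(agent_id: str, action: str, resource: str) -> str:
    if not is_permitted(agent_id, action, resource):
        # Continuous enforcement means every call is checked and logged,
        # not reviewed after the fact in a quarterly audit.
        return f"DENIED: {agent_id} may not {action} {resource}"
    return f"ALLOWED: {agent_id} {action} {resource}"

print(execute_agent_action("report-agent", "read", "crm/accounts/1234"))
print(execute_agent_action("report-agent", "delete", "crm/accounts/1234"))
```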

Human-Centric Defense Models No Longer Scale

The threat environment has shifted faster than most operating models.

Attackers are automating at scale. Defenders are still adding people and point solutions.

Ram Varadarajan, CEO of Acalvio, has spent years designing deception-based defenses for automated attackers. His assessment is unsentimental:

“Security teams can no longer depend on humans doing everything by hand.  The model has to change to allow humans to direct AI-driven workflows, just as hackers do. It’s destined to be a bot-on-bot duel forevermore. Teams should start small.  Pick a few high-impact workflows where AI provides scale and speed, and humans supply judgment and oversight.  Assume a machine-speed AI-augmented attacker or autonomous AI attack, and defend with machine-speed AI that leverages the adversarial AI’s own vulnerabilities.” 

He goes further, describing the asymmetry that boards often underestimate:

“AI-driven expansion is now outpacing the ability of traditional, human-dependent defenses to respond in real-time.  Defenders are expending finite resources against adversaries whose AI automation is driving attack costs toward zero, a gap that’s not going to be closed by adding more disconnected defensive security tools. Let’s face it, clouds are going to continue to sprawl – that’s a reality.  To be able to scale with the attackers, AI-first cloud security has to shift from reactive blocking to AI-driven preemptive defense. The key to scaling defense on the cloud will be to use an AI-driven, real-time deception fabric to target the known cognitive and computational limits of attacker AI, imposing asymmetric conditions of compounding uncertainty and computational exhaustion.”

This reframes governance economics. The question is no longer whether defenses exist. It is whether they can operate at the same speed and scale as AI-driven threats.

SOC Automation Changes Where Accountability Lives

AI is not only reshaping external threats. It is rewriting internal security operations.

Kamal Shah, CEO of Prophet Security, focuses on agentic AI inside Security Operations Centers. He points to how quickly this shift is arriving:

“According to the State of AI in SOC Report, security leaders anticipate AI will handle approximately 60% of SOC workloads within the next three years. AI enables them to move faster through noise, automate repetitive and tedious work, and spend more time on the parts that require human judgment.”

This level of automation is unavoidable. Alert volumes demand it. But automation compresses decision cycles and shifts accountability upstream.

Shah emphasizes what governance must adapt to:

“AI speeds up the work, teams chain skills, and incentives push toward scale. Security teams should shorten time to answer with outcomes that clearly state scope, impact, affected assets, and next actions, backed by evidence the business can trust. Treat coordinated disclosure as core infrastructure with a clear VDP or bug bounty program, simple reporting, defined SLAs, safe harbor language, and consistent communication, then keep tight feedback loops with researchers because responsiveness improves report quality and reduces time to fix.”

AI also becomes a learning surface.

“By observing how AI is being used to automate repetitive tasks, SOC teams can study these automated methodologies to understand the tempo and velocity of modern attacks. AI SOC tools are giving security analysts similar capabilities in handling repetitive tasks such as alert triage and investigation, freeing their time to focus on higher priority security tasks. By integrating the reports from ethical hackers with new AI defenses, SOC teams can create a practical training ground for junior analysts, helping them transition into high-level operators who proactively hunt for threats, rather than performing manual triage.”

Governance here is not about slowing response. It is about ensuring decisions made faster are still defensible.
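One way to keep automated triage defensible is to require that every machine-generated conclusion arrive as a structured outcome covering scope, impact, affected assets, next actions, and evidence, and to block automated response when any of those are missing. The field names below are assumptions for the sketch, not a standard schema.

```python
from dataclasses import dataclass

# Hypothetical structured outcome an automated triage step must produce
# before any response action runs, so the decision trail stays defensible.
@dataclass
class TriageOutcome:
    alert_id: str
    scope: str                     # what is believed affected, in plain terms
    impact: str                    # business impact assessment
    affected_assets: list[str]
    next_actions: list[str]
    evidence: list[str]            # links or excerpts backing the conclusion

def is_defensible(outcome: TriageOutcome) -> bool:
    """Block automated response unless the outcome is fully substantiated."""
    return all([
        outcome.scope.strip(),
        outcome.impact.strip(),
        outcome.affected_assets,
        outcome.next_actions,
        outcome.evidence,          # no evidence, no automated action
    ])
```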

Speed to Market Is Eroding Security Foundations

Some governance failures are not accidental. They are incentivized.

Randolph Barr, Chief Information Security Officer at Cequence Security, sees the same pattern across fast-moving product teams:

“AI is rapidly evolving from simple automation to deeply personalized, context-aware assistance—and it’s heading toward an Agentic AI future where tasks are orchestrated across domains with minimal human input.”

The pressure to ship creates shortcuts.

“In the haste to bring AI to market quickly, engineering and product teams often cut corners to meet aggressive launch timelines. When that happens, basic security controls get skipped, and those shortcuts make their way into production. So, while organizations are absolutely starting to think about model protections, prompt injection, data leakage, and anomaly detection, those efforts mean little if you haven’t locked down identity, access, and configuration at a foundational level. Security needs to be part of the development lifecycle from day one, not just an add-on at the time of launch.”

Security added late rarely compensates for governance skipped early. Boards should recognize this sequencing risk for what it is.
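A small release gate is one way to enforce that sequencing: if baseline identity, access, and configuration controls are not in place, the AI feature does not ship. The control names below are illustrative assumptions about what counts as foundational, not a complete checklist.

```python
# Illustrative pre-launch gate: an AI feature cannot ship unless baseline
# identity, access, and configuration controls are declared and verified.
BASELINE_CONTROLS = [
    "authentication_required",   # no anonymous access to the AI endpoint
    "least_privilege_roles",     # service identities scoped to what they need
    "secrets_not_in_config",     # keys pulled from a vault, not hardcoded
    "logging_enabled",           # requests and responses are auditable
]

def release_gate(declared_controls: set[str]) -> list[str]:
    """Return missing baseline controls; an empty list means clear to ship."""
    return [c for c in BASELINE_CONTROLS if c not in declared_controls]

missing = release_gate({"authentication_required", "logging_enabled"})
if missing:
    print("Launch blocked, missing controls:", ", ".join(missing))
```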

AI Governance Is Now a Board-Level Control Issue

There is no safe pause button for AI adoption. Competitive pressure makes that clear. However, accelerating without modern governance guarantees unmanaged risk.

Zscaler’s warning is not alarmist. It is observational. AI has already scaled beyond the assumptions most governance frameworks were built on. Visibility, digital identity, and enforcement now have to operate continuously, at machine speed, across systems that do not announce themselves. AI governance is no longer a policy problem. It is a Zero Trust execution problem.

FAQs

1. What is AI governance, and why is it critical for enterprises today?

AI governance refers to the policies, processes, and controls an organization uses to ensure AI systems are safe, transparent, compliant, and aligned with business objectives. It’s critical because AI now operates at machine speed and scale, creating risks that traditional oversight can’t manage.

2. How does agentic AI differ from traditional AI, and what risks does it pose?

Agentic AI autonomously plans, acts, and adapts without constant human input, unlike traditional models that only assist with tasks. Its autonomy increases security and governance risks due to unpredictable actions and reduced human control.

3. What key questions should boards ask to assess their organization’s AI governance maturity?

Boards should ask about accountability structures for AI governance, how AI risks are monitored over time, and how third-party AI tools and vendors are managed. Such questions help reveal gaps in oversight and continuous risk control.

4. Why is continuous monitoring essential for AI governance?

AI systems and their data flows evolve rapidly, often updating capabilities automatically. Continuous monitoring ensures visibility into AI activity, enables real-time risk detection, and prevents silent governance failures.

5. What governance frameworks or tools can enterprises adopt to manage AI risk?

Enterprises can align with risk management frameworks like NIST AI RMF or similar standards that embed governance, risk, and compliance throughout the AI lifecycle. These frameworks support policies, ongoing oversight, and measurable risk controls.

To participate in upcoming interviews, please reach out to our CyberTech Media Room at info@intentamplify.com