As enterprises rapidly adopt autonomous AI systems, Straiker agentic AI security is gaining attention as organizations struggle to manage the risks posed by AI agents operating across critical business environments.

At RSAC 2026, Straiker announced the launch of Discover AI and expanded its Defend AI platform, introducing new capabilities to secure coding agents, productivity agents, and custom built agent platforms. The announcement highlights a growing concern in the cybersecurity industry, where AI agents are being deployed at scale with broad system access, increasing autonomy, and limited security oversight.

AI powered coding tools such as Cursor, Claude Code, and GitHub Copilot are transforming how software is developed, with adoption reaching 85 percent of developers. While this shift is accelerating innovation, it is also introducing new vulnerabilities. Coding agents are now capable of deploying additional agents with minimal human involvement, creating pathways for endpoint compromise, data exfiltration, remote code execution, and manipulation through malicious integrations.

Straiker’s Discover AI is designed to address a critical visibility gap. Many organizations lack a clear inventory of the AI agents operating within their systems, as well as insight into what data those agents can access. Discover AI provides centralized visibility by identifying agents, model context protocol connections, and tools across enterprise environments. It also detects vulnerabilities within these connections and flags misconfigurations such as excessive permissions and unsafe execution modes.

The platform further enhances security by classifying agent interactions based on risk, enabling security teams to better understand behavior patterns and identify potential threats. This level of observability is increasingly important as AI agents interact with enterprise tools such as email systems, customer relationship management platforms, and internal applications.

Alongside Discover AI, Straiker has expanded Defend AI to deliver runtime protection for AI agents. Built on millions of real world agent traces, Defend AI monitors agent actions in real time, detecting malicious instructions such as prompt injection and unauthorized operations. It can also prevent data leakage and stop harmful commands before they impact enterprise systems.

Defend AI supports flexible deployment options, including API based monitoring integrations with platforms like Amazon Bedrock AgentCore, Azure Foundry, and Microsoft Copilot Studio. For organizations requiring active enforcement, the platform can also operate as an inline gateway to block malicious activity instantly.

Industry experts emphasize the urgency of addressing these risks. “Agentic AI is moving from experimentation to production at a pace that governance frameworks simply haven’t caught up with. What’s concerning is that our research shows nearly 80% of organizations have already deployed AI agents in production without formal policies in place to manage them. Most existing infrastructure was built for human users and traditional service accounts, not autonomous agents that can act, adapt and scale on their own. To manage this shift safely, organizations need to adopt a proactive approach that treats AI agents as first-class digital citizens with clear visibility, governance, and Zero Trust controls. We have witnessed attackers shifting from a ‘break-in’ to ‘log-in’ strategy, and the future of cyber threats is shifting towards politely asking an agentic AI for access,” said Ken Buckler, CASP, research director, EMA.

“As an industry, we’re rebuilding how we operate with AI agents at the center. Developers are already shipping with coding copilots, and that’s just the beginning. As agents gain access to code, tools, and enterprise systems, the security stakes grow quickly. It’s encouraging to see companies like Straiker focused on the protections needed to help enterprises adopt agents safely,” said David Levin, CISO, American Express Global Business Travel.

“Agentic AI represents a major shift in how software operates, moving from AI user assistants toward multi-agent systems that can plan, act and interact autonomously across digital environments, introducing new rapidly evolving risks like behavior hijacking and privilege abuse. Addressing these challenges requires an open, community-driven approach to keep up. With contributions from organizations like Straiker, the OWASP Top 10 for Agentic Applications and related guidance were developed to give organizations clear, practical guidance on these emerging risks. As agentic systems rapidly move into production, open community resources help provide a holistic view of emerging threats and mitigations,” said Scott Clinton, co-chair and co-founder, OWASP GenAI Security Project.

As AI adoption accelerates across development and enterprise workflows, Straiker agentic AI security reflects a broader shift toward securing autonomous systems as core infrastructure. By combining visibility, governance, and real time protection, Straiker is helping organizations address emerging risks and build a more secure foundation for the future of AI driven operations.

Recommended Cyber Technology News:

To participate in our interviews, please write to our CyberTech Media Room at info@intentamplify.com



🔒 Login or Register to continue reading