Terra Security, a leader in agentic Continuous Threat Exposure Management (CTEM), has revealed significant security findings from its recent continuous penetration testing engagements. The company identified exploitable vulnerabilities across AI-powered applications, copilots, and AI-generated code workflows. As a result, Terra has introduced a new module within its continuous penetration testing platform, enabling security researchers to simulate large-scale attacks on AI systems and proactively uncover hidden risks.
Over the past several months, Terra Security conducted adversarial testing on applications developed using AI coding tools such as Claude Code, rapid AI app-building platforms like Loveable and Base44, and enterprise software integrating AI chat interfaces and copilots. Through this testing, researchers detected recurring vulnerability patterns that differ notably from traditional software security flaws. For instance, Terra’s team discovered CVE-2026-25724 in Anthropic’s Claude Code, highlighting the evolving nature of AI-driven security risks.
Cyber Technology Insights: TerraMaster Unveils BBS Backup Solution for Enterprise Security
More importantly, the company reported that it found AI-related security vulnerabilities in 100% of applications embedding AI chats or copilots. This consistent trend signals a broader industry challenge as organizations rapidly deploy AI-driven systems into production environments.
Key AI Security Risks Identified
During real-world enterprise testing, Terra Security observed several high-impact vulnerabilities, including:
- Prompt injection attacks targeting AI copilots
- Indirect prompt injection via embedded or third-party content
- Leakage of sensitive system prompts
- Cross-tenant data exposure within AI copilots
- Privilege escalation through AI tool execution chains
- Reverse shell execution through AI-enabled command workflows
- Broken authorization logic in AI-generated business processes
- Exposure of internal APIs during AI-assisted feature expansion
- Cross-site scripting via LLM prompt injection with authentication bypass
These findings demonstrate how AI systems can introduce complex security gaps when embedded deeply into operational workflows.
“Some of these issues did not stem from malicious intent or overt misconfiguration, but from complex interactions between AI agents, application logic, and operational tooling,” said Shahar Peled, CEO and Co-founder of Terra Security. “With AI systems committing code with vulnerabilities, modifying configurations, and interacting with pipelines, organizations need visibility into real-world exploitability in production environments, not just theoretical risk. We are proud to be able to provide the means for pentesters to monitor these actions continuously using the Terra platform.”
Cyber Technology Insights: Multicedi Strengthens Cybersecurity with Armis Centrix Platform
As AI agents receive broader access to repositories, APIs, and infrastructure tools, even minor validation gaps can quickly escalate into enterprise-wide exposure. Although Anthropic has introduced security enhancements to Claude Code to address code-level weaknesses, Terra emphasizes that identifying vulnerabilities in source code alone does not fully determine exploitability in live production systems.
“Traditional scanners look for known patterns,” said Gal Malachi, CTO and Co-Founder of Terra Security. “What we’re seeing with AI-powered systems is contextual vulnerabilities in cases where the model behaves as designed, but the surrounding application or permission model allows unintended outcomes. A prompt injection may not resemble a conventional code flaw, yet it can still expose sensitive data or trigger unsafe actions if safeguards are incomplete.”
Ultimately, Terra Security’s latest module reflects the growing need for continuous, real-world AI security validation. As enterprises accelerate AI adoption, proactive and scalable testing will play a critical role in safeguarding production environments against emerging threats.
Cyber Technology Insights: Terra Launches Exploitability Validation for Web Apps to Strengthen CTEM
To participate in our interviews, please write to our CyberTech Media Room at info@intentamplify.com




