Novee has introduced a new AI red teaming capability for large language model (LLM) applications as part of its AI penetration testing platform, aiming to help organizations detect and fix vulnerabilities before they can be exploited. The launch comes as enterprises accelerate the adoption of AI-powered tools, including chatbots, copilots, and autonomous agents, which are introducing a new category of security risks As AI-driven applications become more embedded in business operations, security teams are encountering threats that traditional testing tools were not built to handle. These include prompt injection, jailbreak attempts, unauthorized data exposure, and manipulation of agent behavior. Novee’s solution is designed specifically to address these emerging risks by simulating real-world attack scenarios against AI systems.

Unlike conventional penetration testing tools that focus on web applications or infrastructure, Novee’s AI-powered testing agent continuously probes AI-enabled systems. It autonomously mimics sophisticated attacker techniques, combining multiple attack methods to uncover vulnerabilities that static scans or manual testing might overlook. Organizations can deploy the agent across a wide range of AI applications, from customer-facing chatbots to internal automation tools and complex LLM-driven workflows. The platform evaluates how these systems respond under adversarial conditions and generates detailed assessments, including clear recommendations for remediation.

Ido Geffen, CEO and co-founder of Novee, emphasized that attackers are moving faster than ever, significantly reducing the time between identifying a vulnerability and exploiting it. He noted that this reality makes continuous testing essential, rather than relying on periodic security assessments.

The development of the platform is closely tied to Novee’s internal research efforts. The company’s security team has been actively identifying high-impact vulnerabilities in AI systems, including a recently disclosed issue affecting a coding assistant that could allow attackers to manipulate its context and execute malicious code on a developer’s machine. Insights from such discoveries are continuously incorporated into the platform, enabling it to evolve alongside emerging attack techniques.

Gon Chalamish, co-founder and Chief Product Officer at Novee, highlighted that many organizations are still applying legacy security tools to AI systems, despite the fundamentally different nature of their attack surface. He stressed the need for security approaches that replicate how adversaries actually target AI applications.

The platform is designed to work across different AI ecosystems, supporting applications built on leading model providers such as OpenAI, Anthropic, as well as open-source frameworks. It also integrates with existing security workflows and CI/CD pipelines, allowing teams to incorporate AI security testing into their regular development processes With this launch, Novee is positioning its platform as a proactive solution for securing AI-driven applications, helping organizations stay ahead of rapidly evolving threats in the age of intelligent systems.

Recommended Cyber News:

To participate in our interviews, please write to our CyberTech Media Room at info@intentamplify.com 



🔒 Login or Register to continue reading