Harness has introduced two new AI-focused security offerings that address both sides of the modern software lifecycle. On one hand, the company launched AI Security, a product designed to discover, test, and protect AI running inside applications. On the other hand, it introduced Secure AI Coding, a new capability within Harness SAST that helps secure code written by AI development tools. Together, these additions expand the Harness DevSecOps platform so teams can protect everything from AI-generated code to AI models operating in production.

As AI continues to reshape software development, organizations now face new security challenges at every stage. According to Harness’s State of AI-Native Application Security report, 61% of new applications are now AI-powered. However, many companies still lack the visibility needed to identify the AI models and agents running in their environments, test them for AI-specific weaknesses, or secure them at runtime. As a result, the attack surface has grown rapidly, while security tools have struggled to keep pace.

At the same time, the development side presents another serious concern. Harness’s State of AI in Software Engineering report found that 63% of organizations already use AI coding assistants such as Claude Code, Cursor, and Windsurf. Although these tools help developers move faster, they also create additional risks. AI-generated code can include the same weaknesses as human-written code, yet it often appears in larger volumes and at a much higher frequency. Consequently, already-burdened application security teams now face even more pressure.

Harness says this creates a dangerous gap across both dimensions of AI adoption: securing what organizations build and securing what they build with. Therefore, the company is positioning its latest launches as a direct answer to that problem.

Unlike many vendors that focus only on one phase of security, Harness emphasizes a connected DevSecOps model. Traditionally, shift-left tools identify vulnerabilities in code before release, while runtime tools defend applications after deployment. However, those systems often remain disconnected. Harness aims to unify those stages by creating a closed loop where teams can detect issues in code, monitor threats in production, and quickly send findings back to developers for remediation.

This unified approach already includes Application Security Testing for SAST and SCA, Software Composition Security for artifact integrity, and STO for posture, governance, and policy visibility across the organization. In addition, Harness protects live applications and APIs through Web Application and API Protection, which monitors and blocks threats in real time. More importantly, runtime findings do not sit unused in a backlog. Instead, they return to developers so teams can resolve root causes before the next release.

Now, Harness is extending that same security loop into AI.

Introducing AI Security

Harness AI Security builds on the company’s API security platform. Since every LLM call, MCP server, AI agent, and connection to external AI services relies on APIs, Harness argues that AI security and API security are closely linked. In other words, the AI attack surface is not separate from the API attack surface; it is an expansion of it. Because of that, organizations must defend against both traditional API risks and newer AI-specific threats such as prompt injection, model manipulation, and data poisoning.

With this launch, Harness has made AI Discovery generally available. This capability continuously identifies AI-related assets across the environment, including LLMs, MCP servers, AI agents, and third-party generative AI services such as OpenAI and Anthropic. It also tracks runtime risks, including unauthenticated APIs calling LLMs, weak encryption, and sensitive or regulated data flowing to external models.

Beyond discovery, the company has also introduced AI Testing and AI Firewall in beta. AI Testing actively examines LLMs, agents, and AI-powered APIs for issues specific to AI-native applications, including prompt injection, jailbreaks, model manipulation, and data leakage. Moreover, it integrates directly into existing CI/CD pipelines so AI security testing becomes a continuous part of the release process.

Meanwhile, AI Firewall focuses on runtime defense. It inspects and filters LLM inputs and outputs in real time, helping block prompt injection attempts, prevent sensitive data exfiltration, and enforce behavioral guardrails before attackers can exploit a weakness. Unlike traditional web application firewalls, AI Firewall is built to understand AI-native threats without relying solely on manually tuned rules.

Harness AI Security with AI Discovery is now available in GA, while AI Testing and AI Firewall remain in beta.

Introducing Secure AI Coding

The second announcement, Secure AI Coding, targets risks created by AI coding assistants.

“As AI-assisted development becomes standard practice, the security implications of AI-generated code are becoming a material blind spot for enterprises. IDC research indicates developers accept nearly 40% of AI-generated code without revision, which can allow insecure patterns to propagate as organizations increase code output faster than they expand validation and governance, widening the gap between development velocity and application risk.”

Katie Norton, Research Manager, DevSecOps, IDC

Harness explains that Secure AI Coding addresses vulnerabilities introduced directly into the codebase by AI tools. Developers today generate and ship more code than ever before, and nearly half of security and engineering leaders remain concerned about the weaknesses hidden inside AI-produced code. Since these code suggestions often arrive in large commits and move quickly through development, conventional review processes may not be enough.

To solve this, Harness SAST moves vulnerability detection closer to the moment code is created. Secure AI Coding integrates with tools like Cursor, Windsurf, and Claude Code, scanning generated code directly inside the IDE. As a result, developers can see inline warnings immediately and send problematic code back to the agent for remediation without leaving their workflow.

“Security shouldn’t be an afterthought when using AI dev tools. Our collaboration with Harness kicks off vulnerability detection directly in the developer workflow, so all generated code is screened from the start.” — Jeff Wang, CEO, Windsurf

What makes the feature more advanced than standard linting tools is its use of Harness’s Code Property Graph. Rather than only scanning the latest code snippet, it evaluates how data moves through the wider application. Because of that, it can detect more complex issues such as injection vulnerabilities and insecure data handling that may only appear when the broader codebase is considered.

Harness Shares Internal Results

Harness also revealed that it faced similar internal challenges while expanding AI across its own platform. As its AI ecosystem grew, the company needed better visibility into API calls, external AI vendor usage, and potential data exposure. After deploying AI Security internally, Harness says it transformed that environment into one that is far more transparent and manageable.

Over the last 90 days, the company says it has tracked 111 AI assets and monitored more than 4.76 million monthly API calls. In addition, it has run 2,500 AI testing scans each week and remediated 92% of identified issues, including critical gaps tied to weak authentication and encryption in MCP tools. The company also says it identified and blocked 1,140 unique threat actors responsible for more than 14,900 attacks targeting its AI infrastructure.

The company summed up that shift by stating: “Securing AI is foundational for us. Because our own product runs on AI, it must be resilient and secure. We use our own AI Security tools to ensure that every innovation we ship is backed by the highest security standards.”

Recommended Cyber Technology News:

To participate in our interviews, please write to our CyberTech Media Room at info@intentamplify.com