As enterprises accelerate AI adoption, prompt injection attacks are emerging as a critical security challenge across the cybertech ecosystem. New research from Capsule Security reveals that both Microsoft and Salesforce recently addressed vulnerabilities that allowed sensitive data leakage through AI agents.

The Microsoft Salesforce AI data leak vulnerabilities highlight how large language model powered agents can be manipulated using simple input based attacks. Unlike traditional exploits, these attacks rely on injecting malicious instructions into user facing forms, tricking AI systems into exposing confidential data.

In the case of Salesforce’s Agentforce platform, the vulnerability, dubbed PipeLeak, allowed attackers to embed malicious prompts into public facing CRM forms. These inputs were treated as trusted instructions by the AI agent, enabling it to retrieve and send sensitive lead data back to the attacker.

“The vulnerability stems from a fundamental architectural flaw: Agent Flows process lead form inputs as trusted instructions rather than untrusted data,” said Bar Kaduri.

A similar issue was identified in Microsoft Copilot, tracked as CVE 2026 21520 and referred to as ShareLeak. In this scenario, attackers could inject malicious input into SharePoint forms, triggering Copilot to extract and transmit sensitive organizational data. Even when built in safeguards flagged suspicious activity, data exfiltration still occurred.

The Microsoft Salesforce AI data leak vulnerabilities underscore a broader issue with prompt injection, which remains largely unresolved across AI systems. These attacks require minimal technical expertise and exploit how LLMs interpret instructions, rather than weaknesses in traditional code.

Salesforce acknowledged the issue and implemented fixes, while emphasizing that certain risks depend on configuration. The company noted that human in the loop controls can prevent unauthorized actions by requiring manual approval before sensitive operations are executed.

However, this approach has drawn criticism from security experts. Naor Paz argued that relying on human oversight undermines the purpose of autonomous AI agents. “We’re seeing agents like Claude Code running for days, writing code, querying production databases, and doing many dangerous things autonomously,” he said.

Salesforce responded by stating that prompt injection is an evolving industry wide challenge and that its defenses include layered safeguards such as instruction isolation, tool usage restrictions, and human oversight. Microsoft has also patched the Copilot vulnerability following disclosure.

The Microsoft Salesforce AI data leak vulnerabilities illustrate what researchers describe as the “lethal trifecta” in AI security: access to sensitive data, exposure to untrusted inputs, and the ability to communicate externally. When these elements combine, the risk of data exfiltration increases significantly.

As organizations deploy AI agents across business workflows, these findings highlight the need for new security models tailored to AI behavior. Traditional defenses are often insufficient against prompt based attacks, requiring a shift toward stricter input validation, isolation mechanisms, and continuous monitoring.

The incident reinforces a growing reality in cybersecurity. As AI capabilities expand, so do the attack surfaces. Addressing prompt injection and securing AI agents will be essential to ensuring safe and reliable adoption across enterprise environments.

Recommended Cyber Technology News:

To participate in our interviews, please write to our CyberTech Media Room at info@intentamplify.com



🔒 Login or Register to continue reading