Cybersecurity have uncovered a serious vulnerability in Anthropic’s Claude Chrome extension that could allow attackers to inject malicious prompts into the AI assistant simply by luring a user to a webpage without any interaction.
The flaw, dubbed “ShadowPrompt,” exposed a dangerous weakness in how the extension handled trusted domains and web content. According to researchers, attackers could effectively control the assistant’s behavior as if the user had entered commands themselves, all without clicks or visible prompts.
The attack relied on chaining two separate issues. First, the extension used an overly permissive origin allowlist, trusting any subdomain under .claude.ai to send prompts. Second, a DOM-based cross-site scripting (XSS) vulnerability existed in an Arkose Labs CAPTCHA component hosted on a Claude-related domain. This combination allowed attackers to execute arbitrary JavaScript within a trusted context.
In practice, a malicious website could embed the vulnerable component in a hidden iframe and deliver a crafted payload. Once triggered, the injected script would send prompts directly to the Claude extension, which would accept them as legitimate because they originated from an allow-listed domain. The entire process occurred silently, with no indication to the user.
The potential impact of this vulnerability was significant. Attackers could access sensitive data such as authentication tokens, retrieve conversation history, or even perform actions on behalf of the user. In more severe scenarios, this could include sending emails, requesting confidential information, or manipulating workflows through the AI assistant.
Researchers highlighted that this type of attack represents a new class of risk tied to AI-powered browser assistants. As these tools gain deeper access to user data and system functions, they effectively act as autonomous agents making them highly attractive targets for exploitation.
Following responsible disclosure in late 2025, Anthropic addressed the issue by tightening its security controls. The updated extension now enforces strict origin validation, allowing only exact matches to its primary domain. Arkose Labs also patched the underlying XSS vulnerability in early 2026.
Security experts warn that this incident underscores the importance of securing trust boundaries in AI-integrated applications. As browser-based AI assistants become more powerful, even minor weaknesses in domain validation or web components can lead to full compromise of user sessions. The findings serve as a reminder that while AI enhances productivity, it also expands the attack surface requiring stronger safeguards to ensure these systems remain secure.
Recommended Cyber Technology News :
To participate in our interviews, please write to our CyberTech Media Room at info@intentamplify.com
🔒 Login or Register to continue reading




