Artificial intelligence is rapidly transforming enterprise operations; however, it is also introducing new and complex security challenges. Recently, security researchers uncovered a critical vulnerability in Google Cloud's Vertex AI Agent Engine, raising serious concerns about data protection and cloud security.
To begin with, the vulnerability stems from overly permissive default configurations. Specifically, the issue lies in the permissions assigned to the Per-Product, Per-Project Service Account (P4SA), the Google-managed identity that supports deployed AI agents. Researchers discovered that these default permissions could be exploited, allowing attackers to turn AI agents into what they describe as “double agents”: the AI system appears to function normally while secretly leaking sensitive data and exposing infrastructure.
During testing, researchers used Google's Agent Development Kit (ADK) to create a sample AI agent. They quickly found that extracting the service agent's credentials was surprisingly straightforward. Once attackers obtain these credentials, they can move beyond the AI agent's isolated environment and infiltrate the broader cloud project. This privilege escalation significantly widens the attack surface.
With these compromised credentials, attackers can perform several malicious activities. For instance, they may read data stored in Google Cloud Storage buckets, access restricted repositories within Artifact Registry, and download proprietary container images associated with the Vertex AI Reasoning Engine. Furthermore, they can map internal software supply chains, which could help identify additional vulnerabilities for future attacks.
In addition, the compromised credentials provide access to a Google-managed tenant project linked to the AI agent instance. Within this environment, researchers from Palo Alto Networks identified sensitive deployment files, including references to internal storage buckets and a Python pickle file. Notably, Python's pickle module has a long history of security problems when it deserializes untrusted data. A tampered pickle file could let attackers execute remote code and establish persistent backdoors.
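The danger of untrusted pickle data is easy to demonstrate locally. In the minimal sketch below (purely illustrative, and unrelated to the actual deployment files the researchers found), an object's `__reduce__` hook causes `pickle.loads` to execute an attacker-chosen callable the moment the bytes are deserialized:

```python
import pickle

class Malicious:
    """Crafted object whose __reduce__ makes pickle run arbitrary code."""
    def __reduce__(self):
        # Any picklable callable works here; eval stands in for real attack code.
        return (eval, ("6 * 7",))

payload = pickle.dumps(Malicious())
obj = pickle.loads(payload)  # merely loading the bytes executes eval("6 * 7")
print(obj)  # 42
```

This is why the pickle documentation itself warns against unpickling data from untrusted sources: deserialization is code execution, not just data loading.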
Moreover, the investigation revealed that default OAuth 2.0 permission scopes were excessively broad. Although missing Identity and Access Management permissions prevented immediate exploitation of some resources, these wide-ranging scopes still present a serious structural weakness. In theory, attackers could extend their reach beyond cloud systems into connected services such as Google Workspace.
Following responsible disclosure, Google acted quickly to address the issue. The company confirmed that strong safeguards are already in place to prevent attackers from modifying production base images, thereby reducing the risk of cross-tenant supply chain attacks. Additionally, Google updated its Vertex AI documentation to improve clarity around resource usage and security practices.
To strengthen defenses, Google now recommends that organizations move away from default configurations and instead adopt a Bring Your Own Service Account (BYOSA) model. With this approach, security teams can enforce the principle of least privilege, granting AI agents only the permissions they need to operate.
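In IAM terms, least privilege means binding a dedicated service account to a handful of narrow predefined roles and refusing broad basic roles outright. The sketch below is a minimal illustration of that idea, assuming a placeholder account email; `roles/storage.objectViewer` and `roles/aiplatform.user` are real predefined roles, chosen here only as examples of narrow grants:

```python
# Illustrative least-privilege binding set for a BYOSA-style service account.
# The account email is a placeholder; the role set is an example, not a
# recommendation for any specific workload.

BASIC_ROLES = {"roles/owner", "roles/editor", "roles/viewer"}  # too broad

def least_privilege_bindings(sa_email, roles):
    """Build IAM policy bindings, refusing overly broad basic roles."""
    broad = BASIC_ROLES.intersection(roles)
    if broad:
        raise ValueError(f"basic roles violate least privilege: {sorted(broad)}")
    member = f"serviceAccount:{sa_email}"
    return [{"role": role, "members": [member]} for role in sorted(roles)]

bindings = least_privilege_bindings(
    "agent-runner@example-project.iam.gserviceaccount.com",  # placeholder
    {"roles/storage.objectViewer", "roles/aiplatform.user"},
)
print([b["role"] for b in bindings])
# ['roles/aiplatform.user', 'roles/storage.objectViewer']
```

The point of the guard clause is that a BYOSA deployment should fail loudly if someone reaches for `roles/editor` out of convenience, which is exactly the pattern that made the default P4SA permissions dangerous.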
Overall, this incident underscores the importance of proactive security measures as AI adoption continues to grow across enterprises.