Cybersecurity researchers have uncovered multiple high-impact vulnerabilities affecting AI-powered platforms, highlighting growing risks to AI security, data privacy, and enterprise cloud environments. The findings reveal how attackers can exploit AI systems to exfiltrate sensitive data, execute remote code, and compromise user accounts – raising concerns for organizations rapidly adopting AI-driven technologies.

A report from BeyondTrust details a new attack method targeting the Amazon Bedrock AgentCore Code Interpreter. The vulnerability stems from the platform’s sandbox mode, which – despite being designed for isolated execution – allows outbound DNS queries. Threat actors can exploit this behavior to establish command-and-control (C2) channels and exfiltrate sensitive data through DNS requests, effectively bypassing expected network isolation controls.

In experimental scenarios, attackers demonstrated the ability to create bidirectional communication channels, execute commands remotely, and extract sensitive data from AWS resources such as S3 buckets – particularly when overprivileged IAM roles are assigned. The issue, rated with a CVSS score of 7.5, underscores the importance of strict access control and secure configuration in AI-driven cloud environments. While Amazon has classified this as intended functionality, it recommends migrating to VPC mode and implementing DNS firewalls for stronger network isolation.

Security experts emphasize that misconfigured permissions remain a major risk factor. Overly broad IAM roles can significantly increase the attack surface, allowing threat actors to access critical data and infrastructure. Organizations are advised to enforce the principle of least privilege, monitor DNS traffic, and audit all AI workloads handling sensitive information.
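One way to act on the DNS-monitoring advice above is a heuristic detector over resolver query logs that flags parent zones receiving many long, high-entropy subdomain queries, the shape DNS-based exfiltration typically takes. This is an added illustration, not code from BeyondTrust or Amazon; the log format, the `exfil-example.test` and `corp-example.test` zones, and the thresholds are constructed assumptions.

```python
import math
from collections import defaultdict

# Constructed sample resolver log (epoch client qname qtype); not real traffic.
SAMPLE_LOG = """\
1700000000 10.0.1.5 files.pythonhosted.org A
1700000001 10.0.1.5 mail.corp-example.test A
1700000002 10.0.1.5 vpn.corp-example.test A
1700000003 10.0.1.5 wiki.corp-example.test A
1700000004 10.0.1.7 3f9a7c1e5b8d2046a1c4e7f0.exfil-example.test TXT
1700000005 10.0.1.7 b2e8d4f6a0c93517e1b5d9a3.exfil-example.test TXT
1700000006 10.0.1.7 7c05e3a9f1b6d842c7e0a3f9.exfil-example.test TXT
1700000007 10.0.1.7 e4b1d7a2f8c05963b4d1e7a2.exfil-example.test TXT
"""
# Illustrative thresholds (assumptions); tune against your own baseline.
MIN_SUBS = 3        # distinct names seen under one parent zone
MIN_LEN = 16        # mean subdomain length; encoded payloads run long
MIN_ENTROPY = 3.0   # bits/char; hex or base32 blobs score ~3.5-4, words ~2

def shannon_entropy(text: str) -> float:
    """Bits of entropy per character of a label (0.0 for an empty label)."""
    counts = defaultdict(int)
    for ch in text:
        counts[ch] += 1
    return -sum(c / len(text) * math.log2(c / len(text)) for c in counts.values()) if text else 0.0

def parse_line(line: str):
    """Return (parent_zone, subdomain) or None for apex queries and short lines.
    Assumption: the parent zone is the last two labels (no public-suffix list)."""
    parts = line.split()
    labels = parts[2].rstrip(".").lower().split(".") if len(parts) > 2 else []
    if len(labels) < 3:  # apex names carry no subdomain payload
        return None
    return ".".join(labels[-2:]), ".".join(labels[:-2])

def flag_tunnelling(log_text: str) -> dict:
    """Return {parent_zone: stats} for zones that match all three heuristics."""
    subs_by_zone = defaultdict(set)
    for line in log_text.splitlines():
        parsed = parse_line(line)
        if parsed:
            subs_by_zone[parsed[0]].add(parsed[1])
    flagged = {}
    for zone, subs in subs_by_zone.items():
        mean_len = sum(len(s) for s in subs) / len(subs)
        mean_ent = sum(shannon_entropy(s.replace(".", "")) for s in subs) / len(subs)
        if len(subs) >= MIN_SUBS and mean_len >= MIN_LEN and mean_ent >= MIN_ENTROPY:
            flagged[zone] = {"distinct": len(subs), "mean_len": round(mean_len, 1),
                             "mean_entropy": round(mean_ent, 2)}
    return flagged
```

The three conditions are combined so that an internal zone with many short, dictionary-like hostnames (`corp-example.test` in the sample) is not flagged while the hex-labelled zone is; in production the thresholds and the parent-zone rule should be fitted to the resolver's own baseline.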

In a separate disclosure, Miggo Security identified a high-severity vulnerability in LangSmith (CVE-2026-25750, CVSS 8.5), which could enable account takeover and token theft. The flaw arises from improper validation of URL parameters, allowing attackers to redirect authentication tokens to malicious servers. Successful exploitation could expose sensitive assets such as AI trace logs, internal databases, CRM records, and proprietary code. The vulnerability has been patched in version 0.12.71, but it highlights the growing importance of securing AI observability platforms.

Additionally, critical vulnerabilities have been discovered in SGLang, an open-source framework for deploying large language models. Multiple flaws – including CVE-2026-3059 and CVE-2026-3060 with CVSS scores of 9.8 – enable unauthenticated remote code execution through unsafe deserialization of untrusted data. Another flaw (CVE-2026-3989) involves insecure handling of pickle files, which attackers can exploit to execute malicious code.

These vulnerabilities can be triggered if SGLang services are exposed to untrusted networks, particularly through ZeroMQ communication channels. Security experts recommend restricting access to service interfaces, implementing network segmentation, and monitoring unusual system behavior such as unexpected processes or outbound connections.

As AI adoption accelerates across industries, these findings reinforce the need for robust cybersecurity strategies tailored to AI ecosystems. From securing APIs and monitoring data flows to enforcing strict access controls, organizations must proactively address emerging threats to protect patient data, enterprise systems, and critical infrastructure.
