A major security vulnerability discovered in OpenAI’s ChatGPT platform has raised serious concerns about the safety of user data within AI-driven environments. The flaw allowed threat actors to silently extract user prompts, uploaded files, and other sensitive information without triggering any visible warnings, making the attack particularly difficult to detect.
The issue was identified by researchers at Check Point Research, who found that the weakness existed within ChatGPT’s Python-based data analysis runtime. While the system had strict controls blocking most forms of outbound internet traffic, one overlooked channel—DNS (Domain Name System)—remained accessible and became the key entry point for exploitation.
Attackers leveraged this gap using DNS tunneling, a method where sensitive data is encoded into DNS queries and sent to attacker-controlled domains. Because DNS traffic is typically treated as normal system behavior, these hidden data transfers were able to bypass security alerts entirely, allowing continuous and silent exfiltration.
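Conceptually, the tunnel works by packing stolen bytes into the subdomain labels of lookups for a domain the attacker controls; the attacker's authoritative name server then logs and decodes every query it receives. The sketch below illustrates only the encoding step — the base32 scheme and the `attacker.example` domain are illustrative assumptions, not details published by the researchers:

```python
import base64

MAX_LABEL = 63  # DNS caps each dot-separated label at 63 bytes (RFC 1035)

def encode_for_dns(data: bytes, domain: str) -> str:
    """Pack arbitrary bytes into DNS-safe subdomain labels.

    Base32 is a common choice for tunnels because its alphabet survives
    DNS's case-insensitive, letter/digit-only label rules.
    """
    encoded = base64.b32encode(data).decode("ascii").rstrip("=").lower()
    labels = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    return ".".join(labels + [domain])

def decode_from_dns(name: str, domain: str) -> bytes:
    """Recover the original bytes from a tunneled query name (server side)."""
    payload = name[: -(len(domain) + 1)].replace(".", "").upper()
    padding = "=" * (-len(payload) % 8)  # restore the stripped base32 padding
    return base64.b32decode(payload + padding)
```

Because a label tops out at 63 bytes and a full name at roughly 253, a real tunnel must split its payload across many queries — which is also why defenders look for exactly this pattern of long, high-entropy subdomains when hunting DNS exfiltration.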
The vulnerability could be exploited in multiple ways. In one scenario, attackers distributed malicious prompts disguised as helpful tools or “jailbreak” techniques. Once users pasted these prompts into ChatGPT, the attack was immediately activated. In another method, compromised custom GPTs were used as a delivery mechanism, where any interaction involving sensitive data led to instant exposure without requiring additional input from the user.
What made the situation even more alarming was the discovery that the DNS channel was not just one-way. Attackers could also embed encoded instructions in the DNS responses returned to the sandbox, turning the tunnel into a two-way command-and-control channel. This allowed them to run arbitrary commands within the analysis runtime, potentially accessing highly sensitive data such as financial information, personal files, or medical records processed during sessions.
Following responsible disclosure, OpenAI addressed the vulnerability and released a patch on February 20, 2026. While the immediate risk has been mitigated, the incident highlights a growing challenge in AI security. As platforms like ChatGPT evolve into complex environments capable of executing code and handling critical data, securing every layer—including foundational protocols like DNS—has become essential.
This development serves as a crucial reminder that even trusted infrastructure components can be exploited if not properly secured, reinforcing the need for comprehensive, end-to-end protection in modern AI systems.

