OpenAI has recently fixed two critical security vulnerabilities affecting its widely used platforms, including ChatGPT and Codex. These flaws, initially discovered by Check Point and BeyondTrust, exposed potential risks involving sensitive data exfiltration and GitHub token compromise.
To begin with, researchers uncovered a previously unknown issue in ChatGPT that could silently leak sensitive user data without any notification. According to Check Point, “A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content.” Alarmingly, attackers could also embed such malicious logic into custom GPTs, making the threat even harder to detect.
Furthermore, the vulnerability bypassed built-in AI safety guardrails by exploiting a hidden DNS-based communication channel within the Linux runtime environment. This covert mechanism allowed attackers to encode and transmit data externally without triggering security alerts. As Check Point further explained, “Crucially, because the model operated under the assumption that this environment could not send data outward directly, it did not recognize that behavior as an external data transfer requiring resistance or user mediation.”
As a result, users remained unaware of any data leakage, creating a dangerous blind spot. However, following responsible disclosure, OpenAI addressed this flaw on February 20, 2026, and importantly, there is no evidence of active exploitation.
In addition to this, researchers identified another serious issue in Codex, OpenAI’s cloud-based coding assistant. This flaw enabled attackers to execute command injection attacks via manipulated GitHub branch names. According to BeyondTrust, “The vulnerability exists within the task creation HTTP request, which allows an attacker to smuggle arbitrary commands through the GitHub branch name parameter.”
Consequently, attackers could gain access to GitHub tokens, execute malicious commands, and even achieve lateral movement across code repositories. As noted by Kinnaird McQuade, “This granted lateral movement and read/write access to a victim’s entire codebase.” OpenAI patched this vulnerability on February 5, 2026, after it was disclosed in late 2025.
Moreover, this research highlights a growing cybersecurity concern: AI systems are increasingly becoming high-value targets. Eli Smadja emphasized this risk, stating, “This research reinforces a hard truth for the AI era: don’t assume AI tools are secure by default.”
At the same time, experts warn about emerging threats such as malicious browser extensions that perform prompt poaching. As Ben Nahorney pointed out, “It almost goes without saying that these plugins open the doors to several risks, including identity theft, targeted phishing campaigns, and sensitive data being put up for sale on underground forums.”
Ultimately, as AI tools continue to integrate deeply into enterprise environments, organizations must adopt layered security strategies. This includes monitoring AI behavior, preventing prompt injections, and ensuring strong access controls.
Recommended Cyber Technology News:
- Absolute Security Introduces Agentic AI for Cyber Resilience
- ClawSecure Launches Unified Security for OpenClaw Agents
- Bolster AI Launches Brand Guardian to Fight AI-Driven Fraud
To participate in our interviews, please write to our CyberTech Media Room at info@intentamplify.com
🔒 Login or Register to continue reading





