On 29 May 2025, I privately reported a vulnerability to the OpenAI disclosure mailbox via encrypted email. The flaw allows an attacker to read chat responses intended for other users. Those responses may contain personal data, confidential business plans, or proprietary code. OpenAI acknowledged receipt with an automated reply, but as of 16 July 2025 I have not received a human follow-up, and the issue remains unpatched.
I chose to report this vulnerability via the official disclosure email rather than through the bug bounty platform because of concerning terms in their disclosure agreement. When you submit through their portal, you're required to agree not to share any information about the issue you found: essentially a blanket non-disclosure that prevents researchers from discussing their findings publicly, even after remediation.
This approach seems misaligned with the broader security community's values and contrasts sharply with companies such as Google, which encourage responsible disclosure and allow researchers to publish details after fixes are deployed. Transparency in security research benefits everyone: it advances collective knowledge and holds companies accountable for timely fixes.
I have followed the industry-standard 45-day disclosure window (CERT/CC, ISO/IEC 29147) as a good-faith effort to give OpenAI time to respond to my report. Because the vulnerability still exists and users are unknowingly at risk, I am issuing this limited, non-technical disclosure:
No exploit code, proof‑of‑concept, or reproduction steps are included here.
Only the fact and severity of the flaw are being disclosed.
Beyond patching this specific flaw, I recommend that OpenAI:
Staff the security inbox with humans empowered to respond within 3–5 business days.
Publish a clear vulnerability response policy with service‑level objectives (SLOs).
Conduct periodic third‑party penetration tests that cover model‑to‑model isolation and data governance controls.
Reward good-faith researchers instead of ignoring them. Bug bounty goodwill is perishable.
Remove the bug bounty portal terms that restrict researchers from ever disclosing issues, even after remediation.
I remain ready to collaborate with the OpenAI security team and will gladly test any candidate patch. Users deserve guarantees that their private conversations stay private. Until then, caution is advised.
— A concerned security researcher
github/proton/gmail/X/whatever: requilence