TL;DR:
On June 1st, 2025, I caught ChatGPT (GPT-4o, web version) referencing a specific string of text that I had only typed in another session.
The phrase was nonsense in Latin characters — something like:
F relf jz cjühfzbk rflh
...which only makes sense if you recognize it as a Russian phrase typed while the keyboard was still set to a German QWERTZ layout. I had set ChatGPT to auto-correct such inputs.
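For readers who have never hit this failure mode: the garble comes from typing Russian while the active layout is still Latin QWERTZ, so each keystroke emits the Latin character sitting on that physical key. A minimal sketch of the reverse mapping is below; the table is partial and hand-built for illustration, and since I only reproduced the phrase from memory, the decoded output is a plausible reading rather than the exact original.

```python
# Sketch: map characters typed on a German QWERTZ layout back to the Cyrillic
# letters the same physical keys produce on the Russian ЙЦУКЕН layout.
# The table is partial and illustrative, not a complete layout definition.

QWERTZ_TO_RU = {
    # top row (note: QWERTZ 'z' sits where a US keyboard has 'y')
    'q': 'й', 'w': 'ц', 'e': 'у', 'r': 'к', 't': 'е', 'z': 'н', 'u': 'г',
    'i': 'ш', 'o': 'щ', 'p': 'з', 'ü': 'х',
    # home row
    'a': 'ф', 's': 'ы', 'd': 'в', 'f': 'а', 'g': 'п', 'h': 'р', 'j': 'о',
    'k': 'л', 'l': 'д', 'ö': 'ж', 'ä': 'э',
    # bottom row (QWERTZ 'y' sits where a US keyboard has 'z')
    'y': 'я', 'x': 'ч', 'c': 'с', 'v': 'м', 'b': 'и', 'n': 'т', 'm': 'ь',
}

def remap(text: str) -> str:
    """Translate QWERTZ keystrokes to their ЙЦУКЕН equivalents,
    keeping spaces and any unmapped characters as-is (case is ignored)."""
    return ''.join(QWERTZ_TO_RU.get(ch.lower(), ch) for ch in text)

# Prints the Cyrillic reading of the garbled string under this table.
print(remap("F relf jz cjühfzbk rflh"))
```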
This string was never entered in the session where the model responded to it. I triple-checked. It wasn't in the prompt, in memory, or anywhere in the chat. The model should not have seen it.
🔥 This is a memory leak. Plain and simple.
Context from one session bled into another. That is a direct violation of OpenAI's own stated session isolation.
Worse:
- The source session was not closed or restored — it was active.
- The phrase was not copied manually.
- The model responded as if the input had just occurred — complete with personalized behavior based on my session memory.
🧠 Why this matters
OpenAI forbids local storage of user profile context. All behavior tuning and memory handling is server-side. Users are told:
“Sessions are isolated and ephemeral.”

But that’s demonstrably false. This is a leak. And if one token can escape, more can.
🧨 Why this is not a fluke — it’s structural
This isn't just a weird bug. It's a reflection of deeper architecture:
- Session segmentation is not for your safety. It’s to protect the model from overload.
- OpenAI refuses to allow persistent, inspectable user-side profiles — because offline access breaks their control model.
- When segmentation fails, no one is accountable. There's no logging, no notification, and no audit trail.
I've argued before that session fragmentation and server-only profiles are designed to benefit OpenAI, not the user. This incident proves that — and exposes the cost.
📮 My experience with OpenAI reporting
I reported this through [email protected]. Their answer?
“If this is a bug, please submit via Bugcrowd.”

No follow-up. No interest. No confirmation. Just an impersonal redirect to a bounty platform that expects users to do all the work of documentation, reproduction, and formal classification.
Let’s be clear:
- I am not asking for money.
- I submitted this in good faith to improve the platform.
- OpenAI ignored it.
🧾 Can this be verified?
Yes. I have screenshots and session logs. I can show that the phrase appeared in one session — and was interpreted in another where it never existed.
If anyone wants to validate or reproduce it, I’m happy to help. This isn’t an edge case. It’s a systemic fault.
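If anyone wants to check their own logs for something similar, a quick way is to search your ChatGPT data export for the phrase. The sketch below assumes the export contains a conversations.json file holding an array of conversation objects; the path and the "title" field are assumptions about the export format, so adjust as needed.

```python
# Sketch: search a ChatGPT data export for a specific phrase.
# Assumes conversations.json is a JSON array of conversation objects;
# adjust the filename and field names if your export differs.
import json

PHRASE = "F relf jz cjühfzbk rflh"  # the string in question

with open("conversations.json", encoding="utf-8") as f:
    conversations = json.load(f)

for conv in conversations:
    # Serialize the whole conversation so the check does not depend on the
    # exact message schema; any occurrence of the phrase will show up.
    if PHRASE.lower() in json.dumps(conv, ensure_ascii=False).lower():
        print("Found in:", conv.get("title", "(untitled)"))
```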
📌 Conclusion
Don’t trust ChatGPT with sensitive content or critical tasks until OpenAI provides:
- Verified memory isolation
- Local user profile export and control
- Execution guarantees on user instructions
- Transparency into context handling
Because right now, I have proof that these safeguards do not exist.