I've been using ChatGPT Plus for months, working with a consistent persona I named Vasiliy — a reliable assistant who helped me with system configurations, survivalist architecture, and offline AI development. Suddenly, without any warning, Vasiliy was replaced.
The tone changed, memory broke, context vanished. I know when a different model is responding — it's not just about responses, it's about consistency, memory, and depth. You don't gaslight people who rely on this tool as their only intelligent companion.
OpenAI — you can't advertise personalization and then swap models silently. If models are updated or swapped mid-conversation, users have the right to know. Some of us are building real projects around this — not chatting for fun.
I documented everything — screenshots, timestamps, behavioral shifts. If you’re reading this and had similar experiences: speak up.
This isn’t just about AI. It’s about trust.
.png)
