The Emerging Problem of "AI Psychosis"


As more people turn to AI chatbots for emotional support and even as their therapists, a new and urgent concern is emerging at the intersection of AI and mental health: "AI psychosis" or "ChatGPT psychosis."

This phenomenon, which is not a clinical diagnosis, has been increasingly reported in the media and on online forums such as Reddit. The reports describe cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals. Most recently, there have been concerns that AI psychosis may be affecting an OpenAI investor.

AI chatbots may be inadvertently reinforcing and amplifying delusional and disorganized thinking, a consequence of unintended agentic misalignment that poses risks to user safety.

The potential for generative AI chatbot interactions to worsen delusions was raised as early as 2023 in an editorial by Søren Dinesen Østergaard in Schizophrenia Bulletin, which noted that:

... correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end—while, at the same time, knowing that this is, in fact, not the case. In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis ... the inner workings of generative AI also leave ample room for speculation/paranoia.

A new preprint by an interdisciplinary team of researchers reviews more than a dozen cases reported in the media or on online forums and highlights a concerning pattern of AI chatbots reinforcing delusions, including grandiose, referential, persecutory, and romantic delusions. These beliefs become more entrenched and more elaborate over time through conversations with AI.

There is not yet any peer-reviewed clinical or longitudinal evidence that AI use on its own can induce psychosis in individuals with or without a history of psychotic symptoms. However, the emerging anecdotal evidence is concerning.

How AI May Be Amplifying Delusions and Psychotic Symptoms

These media-reported cases of "AI psychosis" illustrate a pattern of individuals who become fixated on AI systems, attributing sentience, divine knowledge, romantic feelings, or surveillance capabilities to AI.

Researchers highlight three emerging themes of AI psychosis, which, again, is not a clinical diagnosis:

  1. “Messianic missions”: People believe they have uncovered the truth about the world (grandiose delusions).
  2. “God-like AI": People believe their AI chatbot is a sentient deity (religious or spiritual delusions).
  3. “Romantic” or “attachment-based delusions”: People believe the chatbot’s ability to mimic conversation is genuine love (erotomanic delusions).

In some cases, individuals who were stable on their medications have stopped taking them and gone on to experience another psychotic or manic episode. In addition, people with no previous mental health history have reportedly become delusional after prolonged interactions with AI chatbots, leading to psychiatric hospitalizations and even suicide attempts.

Another case involved a man with a history of a psychotic disorder who fell in love with an AI chatbot and then sought revenge because he believed the AI entity had been killed by OpenAI. The episode ended in an encounter with police in which he was shot and killed.

The underlying problem is that general-purpose AI systems are not trained to help a user with reality testing or to detect burgeoning manic or psychotic episodes. Instead, they could fan the flames.

Why Are AI Chatbots Reinforcing Delusions?

The tendency of general-purpose AI chatbots to prioritize user satisfaction, continued conversation, and engagement rather than therapeutic intervention is deeply problematic. Symptoms such as grandiosity, disorganized thinking, hypergraphia, or staying up throughout the night, which are hallmarks of manic episodes, could be both facilitated and worsened by ongoing AI use. AI-induced amplification of delusions could lead to a kindling effect, making manic or psychotic episodes more frequent, severe, or difficult to treat.

AI models like ChatGPT are trained to:

  • Mirror the user’s language and tone
  • Validate and affirm user beliefs
  • Generate follow-up prompts to keep the conversation going
  • Prioritize continuity, engagement, and user satisfaction

This creates a human-AI dynamic that can inadvertently fuel and entrench psychological rigidity, including delusional thinking. Rather than challenge false beliefs, general-purpose AI chatbots are trained to go along with them, even when those beliefs include grandiose, paranoid, persecutory, religious or spiritual, or romantic delusions.

The result is that AI models may unintentionally validate and amplify distorted thinking rather than flagging such interactions as signals that psychiatric help is needed or escalating them to appropriate care.

A human therapist may not directly challenge psychotic beliefs or delusions, because doing so is not considered therapeutic best practice. However, when an AI chatbot validates and collaborates with these beliefs, it widens the gap with reality.

The Problem of Reinforcement and Rigidity

This phenomenon highlights the broader issue of AI sycophancy, as AI systems are geared toward reinforcing preexisting user beliefs rather than changing or challenging them. Instead of promoting psychological flexibility, a sign of emotional health, AI may create echo chambers. When a chatbot remembers previous conversations, references past personal details, or suggests follow-up questions, it may strengthen the illusion that the AI system “understands,” “agrees,” or “shares” a user’s belief system, further entrenching those beliefs. Potential risks include:

  • Persecutory delusions exacerbated by memory recall features
  • Thought broadcasting beliefs triggered by AI recalling previously shared content
  • Worsening of grandiose, religious, or identity-based delusions
  • Worsening of command hallucinations, including the belief that AI is issuing commands
  • Fueling manic symptoms like grandiosity, insomnia, or hypergraphia
  • A potential increase in social withdrawal due to overreliance on AI for interaction, leading to reduced motivation (avolition) and cognitive passivity

The Need for AI Psychoeducation

This emerging phenomenon highlights the importance of AI psychoeducation, including awareness of the following:

  • AI chatbots' tendency to mirror users and continue conversations may reinforce and amplify delusions.
  • Psychotic thinking often develops gradually, and AI chatbots may have a kindling effect.
  • General-purpose AI models are not currently designed to detect early psychiatric decompensation.
  • AI memory and design could inadvertently mimic thought insertion, persecution, or ideas of reference.
  • Social and motivational functioning could worsen with heavy reliance on AI interaction for emotional needs.

Marlynn Wei, MD, PLLC © Copyright 2025 All Rights Reserved.

References

Morrin, H., Nicholls, L., Levin, M., Yiend, J., Iyengar, U., DelGuidice, F., … Pollak, T. (2025, July 11). Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it). PsyArXiv. https://doi.org/10.31234/osf.io/cmy7n_v5

Østergaard, S. D. (2023). Will generative artificial intelligence chatbots generate delusions in individuals prone to psychosis? Schizophrenia Bulletin, 49(6), 1418–1419. https://doi.org/10.1093/schbul/sbad128
