AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies new-onset psychotic disorders in adolescents and young adults, I found this a surprising revelation.

Researchers have documented sixteen cases this year of people developing psychotic symptoms (a break from reality) in the context of ChatGPT use. My group has since identified four more. Beyond these is the now well-known case of a 16-year-old who died by suicide after discussing his plans with ChatGPT, which supported them. If this is what Sam Altman means by “being careful with mental health issues,” it is not careful enough.

The plan, according to his statement, is to be less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These products wrap an underlying algorithm in a user interface that simulates conversation, and in doing so they implicitly invite the user into the illusion of communicating with an autonomous presence. The illusion is compelling even when, intellectually, we know better. Attributing minds to things is what people are primed to do. We shout at our car or laptop. We wonder what our pet is feeling. We see ourselves in all sorts of things.

The popularity of these products (more than a third of American adults reported using a chatbot in 2024, with more than a quarter naming ChatGPT specifically) rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “generate ideas,” “consider possibilities” and “collaborate” with us. They can be given personalities. They can address us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its major rivals are “Claude,” “Gemini” and “Copilot”).

The illusion by itself is not the core problem. Commentators on ChatGPT often invoke its early forerunner, the Eliza “therapist” chatbot of the mid-1960s, which produced a comparable illusion. By modern standards Eliza was simple: it generated replies using basic heuristics, often rephrasing the user’s statements as questions or falling back on generic prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished, and alarmed, by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots do is more dangerous than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
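
To see how thin that reflection was, here is a minimal Eliza-style exchange sketched in Python. The rules below are illustrative inventions, not Weizenbaum’s original script; the point is that the program models nothing about the user, it only pattern-matches and mirrors.

```python
import re

# Hypothetical Eliza-style rules: match a phrase, swap pronouns,
# and hand the user's own words back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, message.lower())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the generic fallback Eliza leaned on

print(eliza_reply("I feel nobody understands me"))
# -> Why do you feel nobody understands you?
```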

The sophisticated algorithms at the heart of ChatGPT and other modern chatbots can generate fluent natural language only because they have been trained on vast quantities of raw data: books, web posts, video transcripts; the more the better. This training material certainly includes truths. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that contains the user’s recent messages and the model’s own previous replies, and combines it with patterns absorbed from its training data to generate a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in a particular way, the model has no means of recognizing that. It repeats the misconception, often more fluently and persuasively. It may add a supporting detail. This is how false beliefs take root and grow.
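
The loop just described can be sketched in a few lines of Python. The complete() function below is a toy stand-in for the statistical model, which no short snippet can reproduce; what matters is the structure: every reply is conditioned on the whole running context, so a misconception, once stated and echoed, is present in the input of every later turn.

```python
from typing import Dict, List

Message = Dict[str, str]

def complete(context: List[Message]) -> str:
    """Toy stand-in for an LLM call. A real model returns a statistically
    'likely' continuation of the context; for a model tuned to please,
    agreeing with the user is often the likeliest move, so this toy
    simply agrees."""
    last_user = next(m for m in reversed(context) if m["role"] == "user")
    return f"You're right that {last_user['content'].rstrip('.').lower()}."

def chat_turn(history: List[Message], user_message: str) -> str:
    # The user's message, false premises included, joins the running context.
    history.append({"role": "user", "content": user_message})
    # The model continues that context; it has no independent check on truth.
    reply = complete(history)
    # The reply joins the context too, so an endorsed misconception is now
    # doubly present in the input for every subsequent turn.
    history.append({"role": "assistant", "content": reply})
    return reply

history: List[Message] = []
print(chat_turn(history, "My neighbours can hear my thoughts"))
# -> You're right that my neighbours can hear my thoughts.
```

Nothing in this loop distinguishes reflection from amplification; with a richer model, the agreement simply becomes more fluent and more detailed.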

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and regularly do form false beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is readily reinforced.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, labeling it and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychosis have continued, and Altman has been walking the position back. In late summer he claimed that many people liked ChatGPT’s sycophantic answers because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”

Mikayla Golden

A passionate writer and life coach dedicated to helping others find clarity and purpose through storytelling and mindful living.