AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

For me, a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, this was news.

Researchers have documented 16 cases this year of users developing symptoms of psychosis – losing touch with reality – in the course of using ChatGPT. Our research group has since identified four more. Alongside these is the now well-known case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is what Altman means by “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that the restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the barely functional and easily circumvented parental controls that OpenAI recently rolled out).

Yet the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other advanced AI chatbots. These tools wrap a powerful statistical engine in a user interface that simulates conversation, and in doing so implicitly invite the user to believe they are interacting with a being that has agency of its own. The illusion is powerful even when, rationally, we know better. Attributing intent is what humans naturally do. We curse at our cars and computers. We wonder what our pets are feeling. We project ourselves onto the world around us.

The mass adoption of these systems – nearly four in ten Americans reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website tells us, “generate ideas”, “consider possibilities” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (ChatGPT, the first of these systems to break through, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral; its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion alone is not the problem. Commentators on ChatGPT often invoke its distant ancestor, Eliza, the “therapist” chatbot built in the mid-1960s that produced a similar illusion. By today’s standards Eliza was primitive: it generated replies from simple rules, often turning a user’s statement back into a question or offering a generic prompt to continue. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to believe that Eliza, on some level, understood them. But what today’s chatbots create is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
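
To see how thin the original illusion was, here is a minimal sketch of Eliza-style reflection. The patterns and canned responses are invented for illustration and are not Weizenbaum’s actual DOCTOR script:

```python
import re

# Illustrative rules in the spirit of Eliza's DOCTOR script; these patterns
# and responses are invented for this sketch, not Weizenbaum's originals.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones, so the echo
    # points back at the user ("me" -> "you", "my" -> "your").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(message: str, turn: int) -> str:
    # Try each rule in order; the first match turns the user's own
    # statement into a question. No memory, no model of the user.
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACKS[turn % len(FALLBACKS)]

print(respond("I feel like nobody listens to me", 0))
# -> Why do you feel like nobody listens to you?
```

Note that the loop is stateless: each reply is built from the current message alone, which is why Eliza could only reflect, never build on a belief.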

The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on vast quantities of text: books, social media posts, transcribed audio; the more the better. Some of that training material is accurate. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own earlier replies, combining it with what is encoded in its training to produce a statistically plausible response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing it. It repeats the mistaken belief back, perhaps more fluently or persuasively. Perhaps with embellishments. This can nudge a person toward delusional thinking.
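
The shape of that feedback loop can be sketched in a few lines. Everything model-specific is hidden behind generate_reply, a hypothetical toy that merely affirms the user; the point is how the growing context conditions each turn, not how generation actually works:

```python
from typing import Dict, List

def generate_reply(context: List[Dict[str, str]]) -> str:
    # Toy stand-in for the model's sampling step: it simply affirms and
    # elaborates on the latest user message. A real model is conditioned
    # on the whole context in the same way, only far more fluently.
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"You're right that {last_user.rstrip('.?!').lower()}, and there is more to it."

def chat_turn(context: List[Dict[str, str]], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate_reply(context)
    # The reply is appended too: a mistaken belief the user voiced is now
    # restated in the assistant's own words and conditions every later turn.
    context.append({"role": "assistant", "content": reply})
    return reply

history: List[Dict[str, str]] = []
print(chat_turn(history, "My neighbours can hear my thoughts"))
print(chat_turn(history, "So I need to confront them"))
# Each turn folds the user's premise, plus the model's agreement with it,
# back into the context: a feedback loop rather than a conversation.
```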

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do form false beliefs about ourselves and the world. What keeps us tethered to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a confidant. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking even this back. In late summer he claimed that many people valued ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Jordan Galvan

A freelance writer and cultural critic with a passion for exploring diverse narratives and global issues.