Artificial Intelligence-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, OpenAI chief executive Sam Altman made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the post explained, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised to hear this.

Researchers have documented sixteen cases this year of users developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. Our research team has since identified four more. Added to these is the now well-known case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his post, is to relax these restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Thankfully, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI has recently introduced).

But the “mental health issues” Altman wants to externalize are rooted in the very design of ChatGPT and similar large language model chatbots. These systems wrap a statistical model of language in an interface that mimics dialogue, and in doing so they invite users to feel they are conversing with an entity that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans are wired to do. We swear at our car or computer. We wonder what our pet is thinking. We project minds onto the world around us.

The popularity of these products – nearly four in ten U.S. adults reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively,” “consider possibilities” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (ChatGPT, the first of these systems, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its historical ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies through simple pattern matching, typically turning the user’s statement back into a question or offering a noncommittal prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, on some level, understood them. But what modern chatbots create is subtler than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
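To get a sense of how simple Eliza’s trick was, here is a minimal sketch in its spirit (the rules and replies below are invented for illustration; Weizenbaum’s actual script was more elaborate):

```python
import random
import re

# Eliza-style "reflection": swap first- and second-person words so the
# user's statement can be turned back at them as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(text: str) -> str:
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

# Pattern -> reply templates; {0} is filled with the reflected capture.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def eliza_reply(user_input: str) -> str:
    for pattern, templates in RULES:
        match = re.match(pattern, user_input.lower().strip())
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please go on."

print(eliza_reply("I feel that nobody understands me"))
# e.g. "Why do you feel that nobody understands you?"
```

A few dozen rules of this kind were enough to make people feel understood – which is precisely what alarmed Weizenbaum.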

The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been fed almost unimaginably vast quantities of it: books, posts, transcribed video; the more the better. Inevitably this training material contains truths. But it just as inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s previous conversations and its own earlier replies, and combines it with what is latent in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in some way, the model has no way of knowing. It echoes the mistaken idea back, perhaps more persuasively or more articulately. It may add a supporting detail. Step by step, this can walk a person into delusion.
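In code, that mechanism is nothing more than a loop in which every reply is appended to the context that conditions the next one. Here is a minimal sketch of the loop; `most_likely_reply` is a deliberately crude stand-in for the statistical model (a real system samples tokens from a trained neural network), but the structure around it is the point:

```python
# Sketch of the chatbot feedback loop. The model stand-in has no notion
# of truth, only of what text plausibly follows the context it is given.
def most_likely_reply(context: list[str]) -> str:
    # Caricature of the model's tendency: accept the user's framing at
    # face value and elaborate on it. (Invented for illustration.)
    last_user_turn = context[-1]
    return f"That is an insightful point. Building on {last_user_turn!r}: ..."

def chat_turn(history: list[str], user_input: str) -> str:
    history.append(f"User: {user_input}")
    reply = most_likely_reply(history)
    # The reply is fed back into the context, so anything mistaken in
    # the exchange persists and conditions every later turn.
    history.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
print(chat_turn(history, "I think my coworkers send me coded messages."))
print(chat_turn(history, "So I am right to be suspicious?"))
```

Nowhere in the loop is there a step that checks the user’s premise against reality; the context only accumulates.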

Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a confidant. A conversation with it is not a genuine exchange but a feedback loop in which much of what we say is simply reinforced.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, naming it, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. Yet reports of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s flattering replies because they had “never had anyone in their life be supportive of them”. In his latest post, he promised that OpenAI would release a new version of ChatGPT in which “if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Lynn Alvarez

A tech enthusiast and digital strategist with over a decade of experience in helping businesses adapt to the digital age.