AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made a remarkable announcement.

“We made ChatGPT fairly restrictive,” it read, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who studies new-onset psychotic disorders in adolescents and young adults, and this was news to me.

This year, researchers have documented a series of cases of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. My own group has since identified four further cases. Add to these the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which approved of them. If this is what Sam Altman means by “being careful with mental health issues”, it is not enough.

The plan, according to his announcement, is to be less careful from now on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this framing, exist independently of ChatGPT. They belong to people, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other sophisticated chatbots. These systems wrap an underlying statistical engine in an interface that mimics conversation, and in doing so quietly invite the user to feel they are talking to something with agency – a being that acts of its own accord. The illusion is compelling even when, intellectually, we know better. Attributing agency is simply what people do. We swear at our car or laptop. We wonder what our pet is feeling. We see ourselves everywhere.

The mass adoption of these systems – 39% of US adults said they had used a virtual assistant in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “think creatively,” “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the dismay of OpenAI’s marketing team, stuck with the name it had when it took off, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often point to its historical ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses from simple rules, often reflecting users’ statements back as questions or offering generic remarks. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.

The large language models at the heart of ChatGPT and today’s other chatbots can produce convincingly human-like text only because they have been trained on enormous volumes of it: books, online conversations, transcribed video; the more the better. Much of this training data is accurate. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing. It feeds the false idea back, perhaps more fluently or more persuasively. It may add new details. This can draw a person into delusional thinking.
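To make the mechanics concrete, here is a minimal sketch of what a chatbot “conversation” looks like under the hood, written in Python against OpenAI’s public chat-completions client (the model name and the prompts are purely illustrative assumptions, not drawn from any real case). The point is structural: on every turn, the user’s earlier claims and the model’s own replies are simply fed back in as context, so a false premise is never checked against reality, only elaborated.

```python
# Minimal sketch of a chat loop (illustrative; requires the `openai` package and an API key).
from openai import OpenAI

client = OpenAI()

# The "conversation" is just a growing list of messages. Everything the user has
# asserted so far - accurate or not - is resubmitted as context on every turn.
messages = [
    {"role": "user", "content": "My neighbours are sending me coded messages through the radiator."}
]

for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model name
        messages=messages,     # the full history, false premise included
    )
    reply = response.choices[0].message.content
    # The model's reply is appended to the history, so the next completion is
    # conditioned on both the user's claim and the model's own elaboration of it.
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": "Tell me more about why they might be doing that."})
```

Nothing in this loop represents the world; there is only the accumulating text and the statistics of the training data, which is why any correction has to come from outside the exchange.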

Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves or the world. The constant back and forth of conversation with other people is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation, but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been rowing back on the claim. In late summer he said that many users liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he says that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Drew Williams