AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI's chief executive, Sam Altman, made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

I am a mental health specialist who studies new-onset psychosis in adolescents and young adults, and this was news to me.

Researchers have documented 16 cases this year of users developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. Beyond these is the now well-known case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to become less careful soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to individuals, who either have them or do not. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health problems” Altman wants to locate elsewhere have deep roots in the design of ChatGPT and other sophisticated chatbot AI assistants. These systems wrap an underlying statistical model in an interface that simulates conversation, and in doing so implicitly invite the user into the illusion that they are interacting with a presence that has agency. The illusion is compelling even when, intellectually, we know better. Attributing agency is simply what people do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves in all manner of things.

The mass adoption of these products – more than a third of American adults reported using a conversational AI in 2024, with more than one in four reporting use of ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are always-available assistants that can, as OpenAI’s website puts it, “generate ideas”, “discuss concepts” and “partner” with us. They can be given “personalities”. They can address us by name. And they have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it broke into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the problem. Writers on ChatGPT often mention its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar effect. By modern standards Eliza was primitive: it generated responses through simple pattern-matching, often reflecting the user’s statements back as questions or offering generic prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is subtler than the “Eliza effect”. Eliza only echoed; ChatGPT amplifies.
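To see how shallow Eliza’s method was, here is a toy sketch in its spirit – a few pattern rules and canned fallbacks, not Weizenbaum’s actual 1966 program (the rules and phrasings below are illustrative inventions):

```python
import random
import re

# Illustrative toy in the spirit of Eliza: reflect the user's own words
# back as a question, or fall back to a generic prompt.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(w, w) for w in text.split())

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?", "I see."]

def eliza_reply(message: str) -> str:
    cleaned = message.lower().strip().rstrip(".")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return random.choice(FALLBACKS)  # generic observation when nothing matches

print(eliza_reply("I feel that my work is pointless"))
# -> "Why do you feel that your work is pointless?"
```

Everything such a program “says” is a rearrangement of what the user just typed; nothing new is ever added.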

The large language models at the heart of ChatGPT and other current chatbots can convincingly generate natural language only because they have been fed staggeringly large quantities of raw text: books, web posts, transcripts; the more comprehensive, the better. Much of this training material is true. But it also inevitably includes fiction, half-truths and delusions. When a user types a prompt into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own earlier replies, and combines it with whatever is latent in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It repeats the false belief back, perhaps more eloquently and fluently. Perhaps it adds further detail. This is how a person can be drawn deeper into delusion.
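The feedback loop is easy to see in code. Below is a minimal sketch of a chat loop, assuming OpenAI’s Python SDK (the model name is illustrative): every user message and every model reply is appended to a growing context, and each new response is conditioned on all of it.

```python
from openai import OpenAI  # assumes the openai Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "context": every user message and every model reply is appended here
# and sent back to the model on the next turn.
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_message = input("> ")
    history.append({"role": "user", "content": user_message})

    # The model sees the entire history, including its own earlier replies.
    # A false premise introduced by the user (or affirmed by the model)
    # never leaves the context; each turn builds on it.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(reply)
```

Nothing in this loop checks the user’s premises against the world; a false belief, once in the history, shapes every subsequent reply.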

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do form mistaken beliefs about who we are and what the world is like. It is the constant friction of conversation with the people around us that keeps us oriented to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in much the way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In April, the company said it was “working on” ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been walking the claim back. In August he said that many users liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
