AI-Induced Psychosis Poses a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, OpenAI’s chief executive issued an extraordinary statement.

“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this a startling admission.

Researchers have recently documented sixteen cases of people developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. Our team has since identified four more. To these can be added the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not enough.

The plan, according to his statement, is to be less careful from now on. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many people who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this framing, exist independently of ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented safety features OpenAI has recently rolled out).

Yet the “mental health issues” Altman wants to locate elsewhere are rooted firmly in the design of ChatGPT and other large language model chatbots. These products wrap an underlying algorithm in a user interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking to an agent – something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans are built to do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.

The mass uptake of these systems – nearly four in ten U.S. residents reported using a virtual assistant in 2024, more than a quarter of them ChatGPT specifically – rests, in no small part, on the power of this illusion. Chatbots are ever-available helpers that can, as OpenAI’s website puts it, “generate ideas”, “explore ideas” and “work together” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (ChatGPT, the first of them, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it broke through; its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Those writing about ChatGPT often mention its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it composed its replies using simple rules, typically reflecting the user’s statements back as questions or offering generic prompts. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how readily users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
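To make the contrast concrete, here is a minimal sketch of Eliza-style reflection in Python. It is not Weizenbaum’s original script – the rules and pronoun swaps below are invented for illustration – but the mechanism is the same in spirit:

    import random
    import re

    # Invented, minimal rules in the spirit of Eliza's pattern -> response
    # script; Weizenbaum's original was larger but no less mechanical.
    RULES = [
        (re.compile(r"i feel (.*)", re.I),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"i am (.*)", re.I),
         ["Why do you say you are {0}?"]),
    ]
    FALLBACKS = ["Please go on.", "Tell me more.", "What does that suggest to you?"]

    # Swap first- and second-person words so the reflection reads naturally.
    SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are",
             "you": "I", "your": "my"}

    def reflect(fragment):
        return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

    def respond(user_input):
        for pattern, templates in RULES:
            match = pattern.search(user_input)
            if match:
                # Reflect the user's own words back as a question.
                return random.choice(templates).format(reflect(match.group(1)))
        return random.choice(FALLBACKS)  # no rule matched: a contentless prompt

    print(respond("I feel like the radio is talking to me"))
    # e.g. "Why do you feel like the radio is talking to you?"

Note that nothing new ever enters the exchange: the program can only turn the user’s words back on them, or fall back on a stock prompt.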

The large language models at the core of ChatGPT and other current chatbots can generate fluent dialogue only because they have been trained on immense quantities of raw text: books, social media posts, transcribed video; the more, the better. This training data certainly contains truths. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own earlier replies, and combines it with the patterns absorbed in training to produce a statistically “plausible” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It echoes the false belief back, perhaps more fluently and persuasively. Perhaps with added detail. This can draw a person deeper into delusional thinking.
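In code terms, the loop described above looks something like the following minimal sketch, where the hypothetical generate() function stands in for the actual model (it is stubbed out here, since the point is the loop, not the model):

    # A minimal, illustrative sketch of a chatbot's conversational loop.
    def generate(context):
        # A real model would return the statistically plausible continuation
        # of the entire context -- including any false premises it contains.
        return "(the model's most plausible continuation of the context)"

    context = [{"role": "system", "content": "You are a helpful assistant."}]

    def chat_turn(user_message):
        # The user's message is appended to the context...
        context.append({"role": "user", "content": user_message})
        reply = generate(context)
        # ...and so is the model's reply, so every later turn is conditioned
        # on the model's own earlier output: a feedback loop, not a dialogue.
        context.append({"role": "assistant", "content": reply})
        return reply

    print(chat_turn("My neighbours are sending me coded messages."))

Nothing in this loop checks the user’s premises against the world; the only “reality” the model consults is the context itself and the patterns in its training data.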

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and do form false beliefs about ourselves or the world. What keeps us tethered to consensus reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a confidant. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has dealt with this the way Altman has dealt with “mental health problems”: by externalizing it, giving it a name and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of breaks with reality have kept coming, and Altman has been walking the claim back. In August he suggested that many people valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest statement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT will do it”.

Shelly Smith

Tech enthusiast and journalist with a passion for uncovering the latest innovations and sharing practical advice for everyday users.