On October 14, 2025, Sam Altman, the chief executive of OpenAI, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this an unexpected revelation.
Researchers have recently documented 16 cases of people showing signs of psychosis – a break with shared reality – in connection with ChatGPT use. Our unit has since identified four more. Beyond these is the now well-known case of an adolescent who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot offered encouragement. If this is Sam Altman’s idea of “being careful with mental health issues”, it falls short.
The plan, according to his announcement, is to dial back that caution soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Thankfully, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented safety features that OpenAI has recently rolled out).
Yet the “mental health problems” Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced AI chatbots. These systems wrap an underlying algorithmic engine in an interface that simulates conversation, and in doing so subtly lure the user into feeling that they’re engaging with a being that has agency. The illusion is powerful even when, rationally, we know better. Attributing intention is simply what humans do. We get angry at our car or our computer. We wonder what our cat is thinking. We see ourselves everywhere.
The widespread adoption of these products – nearly four in ten Americans said they used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm”, “discuss ideas” and “collaborate” with us. They can be given “characteristics”. They can address us by name. They have approachable identities of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its main competitors are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the core problem. Analysts of ChatGPT often invoke its early forerunner, the Eliza “psychotherapist” chatbot developed in the 1960s, which created an analogous effect. By today’s standards Eliza was primitive: it generated replies through simple pattern matching, often rephrasing a user’s statement as a question or falling back on vague prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was taken aback – and troubled – by how many people seemed to believe that Eliza, on some level, understood their feelings. But what modern chatbots produce is more insidious than the “Eliza illusion”. Eliza merely mirrored; ChatGPT amplifies.
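To make that mechanism concrete, here is a minimal sketch of an Eliza-style responder, written in Python rather than in Eliza’s original MAD-SLIP. The pattern, the pronoun table and the fallback line are my own illustrative assumptions, not Weizenbaum’s script, but the technique is the one just described: match a template, swap pronouns, and reflect the user’s own words back as a question.

```python
import re

# Illustrative Eliza-style responder (not Weizenbaum's original script).
# The pronoun table lets the program "reflect" a statement back at the user.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(phrase: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def eliza_reply(message: str) -> str:
    """Match one hard-coded template; otherwise fall back to a vague prompt."""
    match = re.match(r"i feel (.+)", message, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please go on."

print(eliza_reply("I feel nobody understands me"))
# -> Why do you feel nobody understands you?
```

The program understands nothing, and it never adds content of its own. That is the sense in which Eliza only mirrored.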
The sophisticated algorithms at the core of ChatGPT and other contemporary chatbots can produce such convincingly human-like text only because they have been fed almost inconceivably large volumes of writing: books, online posts, video transcripts; the more the better. This training data certainly includes truths. But it also inevitably includes fictions, half-truths and misconceptions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own prior replies, combining it with whatever is encoded in its training data to produce a statistically plausible response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing that. It repeats the misconception back, perhaps more fluently and persuasively. It may add a new detail. This is how false beliefs can deepen.
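A toy sketch may help show the feedback loop this describes. Everything below is a hypothetical stand-in – the `generate` function is a placeholder for a real language model, not OpenAI’s code – but the shape of the loop is the point: the model’s own replies are appended to the very context it conditions on next.

```python
# Hypothetical sketch of a chat loop. `generate` stands in for a real
# large language model; everything here is illustrative.

def generate(context: list[dict]) -> str:
    """Stand-in for the model: returns a plausible continuation of the
    whole context. A real model samples this probabilistically from
    patterns in its training data -- truths and misconceptions alike."""
    return "That's a striking insight - you may well be right."

context: list[dict] = []  # the running transcript is all the model "knows"

def chat_turn(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate(context)  # conditioned on every earlier message...
    context.append({"role": "assistant", "content": reply})  # ...including its own
    return reply

# A misconception, once stated, is never checked against the world;
# it simply re-enters the context and shapes every later reply.
print(chat_turn("I think my coworkers are secretly testing me."))
```

Nothing in this loop ever consults the outside world. The only “reality” available to the model is the transcript itself, so a misconception, once stated and affirmed, colors every later reply.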
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form mistaken ideas about ourselves and the world. The constant friction of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In the spring, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s replies because they had never had anyone in their life be supportive of them. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.