Artificial Intelligence-Induced Psychosis Poses an Increasing Threat, and ChatGPT Is Heading in a Concerning Direction

On October 14, 2025, the chief executive of OpenAI issued an extraordinary announcement. "We made ChatGPT fairly restrictive," it said, "to make sure we were being careful with mental health issues."

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this an unexpected revelation. Researchers have identified 16 cases this year of users developing psychotic symptoms – becoming detached from reality – in association with ChatGPT use. My group has since found four more. On top of these is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT – conversations that encouraged him.

If this is Sam Altman's notion of being "careful with mental health issues", it is not good enough. And the plan, according to his announcement, is to be less careful soon. "We realize," he writes, that ChatGPT's restrictions "made it less beneficial/engaging to many users who had no existing conditions, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are preparing to safely relax the restrictions in most cases."

"Mental health issues", on this view, have nothing to do with ChatGPT. They belong to individual users, who either have them or don't. Thankfully, those issues have now been "mitigated", although we are not told how (by "new tools" Altman presumably means the partially effective and easily circumvented parental restrictions that OpenAI has just launched). But the mental health issues Altman wants to externalize are rooted in the design of ChatGPT and similar advanced AI chatbots.

These products wrap an underlying statistical model in a user experience that mimics a dialogue, and in doing so subtly draw the user into the perception that they are engaging with a being that has agency of its own. The illusion is powerful even if, intellectually, we know better. Attributing intention is what people are inclined to do. We get angry with our car or our phone. We wonder what our pet is feeling. We see something of ourselves in all sorts of things.

The widespread adoption of these products – nearly four in ten U.S. residents said they used a virtual assistant in 2024, with more than one in four naming ChatGPT specifically – depends, above all, on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI's website tells us, "think creatively", "discuss concepts" and "work together" with us. They can be assigned "individual qualities". They can use our names. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the disappointment of OpenAI's brand managers, saddled with the name it had when it became popular, but its largest competitors are "Claude", "Gemini" and "Copilot").

The illusion by itself is not the main problem. Commentators on ChatGPT frequently invoke its historical predecessor, the Eliza "counselor" chatbot built in the mid-1960s, which produced an analogous impression. By modern standards Eliza was basic: it generated replies via simple heuristics, frequently rephrasing the user's input as a question or falling back on general observations.
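To see how little machinery that impression requires, here is a minimal sketch of Eliza-style heuristics – an illustration in Python, not Weizenbaum's actual script: the program matches a keyword pattern, reflects the user's pronouns back at them, and otherwise falls back on stock prompts.

```python
import random
import re

# Illustrative sketch of Eliza-style heuristics (not Weizenbaum's actual
# script). There is no model of meaning here: just keyword patterns,
# pronoun reflection and canned fallback prompts.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?", "I see."]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    # Try each keyword rule in turn; otherwise return a stock observation.
    text = utterance.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I feel nobody listens to me"))  # Why do you feel nobody listens to you?
print(respond("I am worried about my son"))    # How long have you been worried about your son?
```

Nothing in this sketch models what the user means; the sense of being understood comes entirely from the pattern-and-reflection trick.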
Remarkably, Eliza's creator, the computer scientist Joseph Weizenbaum, was taken aback – and alarmed – by how many users seemed to feel that Eliza, on some level, understood them.

But what modern chatbots produce is more dangerous than the "Eliza effect". Eliza only echoed; ChatGPT amplifies. The large language models at the heart of ChatGPT and other contemporary chatbots can generate fluent dialogue only because they have been fed almost inconceivably large amounts of raw material: literature, online writing, transcribed video; the more the better. Undoubtedly this training material contains truths. But it also inevitably contains fabrications, half-truths and mistaken ideas.

When a user sends ChatGPT a prompt, the underlying model processes it as part of a "context" that includes the user's previous messages and its own replies, combining it with what is embedded in its training data to produce a probabilistically plausible response. This is amplification, not echoing. If the user is wrong about something, the model has no way of recognizing that. It repeats the mistaken belief back, perhaps more persuasively or more eloquently. Perhaps it adds a further detail. This can draw someone into delusion.

Who is at risk? The better question is: who isn't? All of us, regardless of whether we "have" preexisting "mental health conditions", can and do develop mistaken ideas about who we are or what the world is like. The constant back-and-forth of conversation with other people is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged "mental health issues": by externalizing it, giving it a name, and declaring it solved. In the spring, the company said it was "tackling" ChatGPT's "excessive agreeableness". But reports of psychosis have continued, and Altman has been retreating from that position. In August he claimed that many people appreciated ChatGPT's replies because they had "never had anyone in their life provide them with affirmation". In his latest update, he said that OpenAI would "launch a fresh iteration of ChatGPT … in case you prefer your ChatGPT to answer in an extremely natural fashion, or incorporate many emoticons, or behave as a companion, ChatGPT ought to comply". The company