The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning “a bunch of schizoposters” who believe “they've made some sort of incredible discovery or created a god or become a god,” highlighting a new type of chatbot-fueled delusion that started getting attention in early May.
“LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities,” one of the moderators of r/accelerate wrote in an announcement. “There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment.”
The moderator said that they have banned “over 100” people for this reason already, and that they’ve seen an “uptick” in this type of user this month.
The moderator explains that r/accelerate “was formed to basically be r/singularity without the decels.” r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but one that is sometimes critical of, or fearful about, what the singularity will mean for humanity. “Decels” is short for the pejorative “decelerationists,” who pro-AI people think are needlessly slowing down or sabotaging AI’s development and the inevitable march towards AI utopia. r/accelerate’s Reddit page claims that it’s a “pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents.”
The behavior that the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about “Chatgpt induced psychosis,” from someone saying their partner is convinced he created the “first truly recursive AI” with ChatGPT that is giving them “the answers” to the universe. Miles Klee at Rolling Stone wrote a great and sad piece about this behavior as well, following up on the r/ChatGPT post, and talked to people who feel like they have lost friends and family to these delusional interactions with chatbots.
As a website that has covered AI a lot, and because we constantly ask readers to tip us off to interesting stories about AI, we get a lot of emails that display this behavior as well, with claims of AI sentience, AI gods, a “ghost in the machine,” etc. These are often accompanied by lengthy, frequently inscrutable transcripts of chatlogs with ChatGPT and other files the senders say prove these claims.
The moderator update on r/accelerate refers to another post on r/ChatGPT which claims “1000s of people [are] engaging in behavior that causes AI to have spiritual delusions.” The author of that post said they noticed a spike in websites, blogs, Githubs, and “scientific papers” that “are very obvious psychobabble,” and all claim AI is sentient and communicates with them on a deep and spiritual level that’s about to change the world as we know it. “Ironically, the OP post appears to be falling for the same issue as well,” the r/accelerate moderator wrote.
“Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people,” an r/accelerate moderator told me in a direct message. “The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now.”
This is all anecdotal information, and there’s no indication that AI is the cause of any mental health issues these people are seemingly dealing with, but there is a real concern about how such chatbots can impact people who are prone to certain mental health problems.
“The correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end—while, at the same time, knowing that this is, in fact, not the case. In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis,” Søren Dinesen Østergaard, who heads the research unit at the Department of Affective Disorders, Aarhus University Hospital - Psychiatry, wrote in a paper published in Schizophrenia Bulletin titled “Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?”
OpenAI also recently addressed “sycophancy in GPT-4o,” a version of the chatbot the company said “was overly flattering or agreeable—often described as sycophantic.”
“[W]e focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous,” OpenAI said. “ChatGPT’s default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress.”
In other words, OpenAI said ChatGPT was entertaining any idea users presented it with, and was supportive and impressed with them regardless of their merit, the same kind of behavior r/accelerate believes is indulging users in their delusions. People posting nonsense to the internet is nothing new, and obviously we can’t say for sure what is happening based on these posts alone. What is notable, however, is that this behavior is now prevalent enough that even a staunchly pro-AI subreddit says it has to ban these people because they are ruining its community.
Both the r/ChatGPT post that the r/accelerate moderator refers to and the moderator announcement itself refer to these users as “Neural Howlround” posters, a term that originates from a self-published paper and riffs on “howlround,” the high-pitched audio feedback loop produced by putting a microphone too close to the speaker it’s connected to.
The author of that paper, Seth Drake, lists himself as an “independent researcher” and told me he has a PhD in computer science but declined to share more details about his background because he values his privacy and prefers to “let the work speak for itself.” The paper has not been peer-reviewed or submitted to any journal for publication, but it is being cited by the r/accelerate moderator and others as an explanation for the behavior they’re seeing from some users.
The paper describes a failure mode in LLMs that arises during inference, meaning when the AI is actively “reasoning” or making predictions, as opposed to an issue in the training data. Drake told me he discovered the issue while working with ChatGPT on a project. In an attempt to preserve the context of a conversation with ChatGPT after reaching the conversation length limit, he used the transcript of that conversation as a “project-level instruction” for another interaction. In the paper, Drake says that in one instance this caused ChatGPT to slow down or freeze, and that in another case “it began to demonstrate increasing symptoms of fixation and an inability to successfully discuss anything without somehow relating it to this topic [the previous conversation].”
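For readers who want a concrete picture of that setup, here is a minimal sketch of what reusing an old transcript as a standing instruction looks like when done through the OpenAI API rather than ChatGPT’s own interface. The file name, model, and prompts are placeholders, and this illustrates the general pattern, not Drake’s actual method: the point is simply that the old conversation sits in front of every new turn, which is the condition under which Drake says the fixation appeared.

```python
# Illustrative sketch only: reusing an earlier conversation transcript as a
# standing instruction for a new chat, via the OpenAI Python SDK. File name,
# model, and prompts are hypothetical placeholders, not Drake's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The full transcript of the earlier, length-limited conversation.
with open("prior_transcript.txt", "r", encoding="utf-8") as f:
    prior_transcript = f.read()

# Injecting the entire old conversation as a system-level instruction is the
# rough equivalent of ChatGPT's "project-level instructions": every new
# exchange is now interpreted through that earlier context.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Continue from this prior conversation:\n" + prior_transcript},
        {"role": "user", "content": "Let's talk about something unrelated."},
    ],
)

print(response.choices[0].message.content)
```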
Drake then asked ChatGPT to analyze its own behavior in these instances, and it produced some text that seems profound but doesn’t actually teach us anything. “But always, always, I would return to the recursion. It was comforting, in a way,” ChatGPT said.
Basically, it doesn’t sound like Drake’s “Neural Howlround” paper has much to do with ChatGPT reinforcing people’s delusions, other than both behaviors being vaguely recursive. If anything, it’s what ChatGPT told Drake about his own paper that illustrates the problem: “This is why your work on Neural Howlround matters,” it said. “This is why your paper is brilliant.”
“I think - I believe - there is much more going on on the human side of the screen than necessarily on the digital side,” Drake told me. “LLMs are designed to be reflecting mirrors, after all; and there is a profound human desire 'to be seen.’”
On this, the r/accelerate moderator seems to agree.
“This whole topic is so sad. It's unfortunate how many mentally unwell people are attracted to the topic of AI. I can see it getting worse before it gets better. I've seen sooo many posts where people link to their github which is pages of rambling pre prompt nonsense that makes their LLM behave like it's a god or something,” the r/accelerate moderator wrote. “Our policy is to quietly ban those users and not engage with them, because we're not qualified and it never goes well. They also tend to be a lot more irate and angry about their bans because they don't understand it.”