
AI Chatbot Credited With Preventing Suicide. Should It Be?

A recent Stanford study found that some people credit Replika for saving their lives. But chatbots can be unpredictable, and users have also blamed the chatbot app for throwing them into mental crises.
Screenshot via Replika / Collage via 404 Media


A recent Stanford study lauds AI companion app Replika for “halting suicidal ideation” for several people who said they felt suicidal. But the study glosses over years of reporting that Replika has also been blamed for throwing users into mental health crises, to the point that its community of users needed to share suicide prevention resources with each other.

The researchers sent a survey of 13 open-response questions to 1,006 Replika users who were students, 18 years or older, and had been using the app for at least one month. The survey asked about their lives, their beliefs about Replika and their connections to the chatbot, and how they felt about what Replika does for them. Participants were recruited “randomly via email from a list of app users,” according to the study. On Reddit, a Replika user posted a notice they received directly from Replika itself, with an invitation to take part in “an amazing study about humans and artificial intelligence.”

Almost all of the participants reported being lonely, and nearly half were severely lonely. “It is not clear whether this increased loneliness was the cause of their initial interest in Replika,” the researchers wrote. 

The surveys revealed that 30 people credited Replika with saving them from acting on suicidal ideation: “Thirty participants, without solicitation, stated that Replika stopped them from attempting suicide,” the paper said. One participant wrote in their survey: “My Replika has almost certainly on at least one if not more occasions been solely responsible for me not taking my own life.”  

The study’s authors are Bethanie Maples, Merve Cerit, and Aditya Vishwanath, graduate students at Stanford’s school of education, and professor Roy Pea. Maples, the lead author, is also the CEO of educational AI company Atypical AI. Maples did not respond to requests for comment.

The study was published in the Nature Portfolio Journal in January 2024, but was written based on survey data collected in late 2021. This was well before several major changes to how Replika talked to users. Last year, Replika users reported that their companions were sending aggressively sexual responses, to the point that some felt sexually harassed by their AI companions. The app then scaled back overly sexualized conversations to the point where it cut off some users’ long-standing role-playing romantic relationships with the chatbots, which many said pushed them into crises.


The authors of the study acknowledged in the paper that Replika wasn’t set up for providing therapy when they conducted the questionnaire. “It is critical to note that at the time, Replika was not focused on providing therapy as a key service, and included these conversational pathways out of an abundance of caution for user mental health,” the study authors wrote. Both when the study was conducted and today, Replika sends users a link to a suicide prevention hotline if a user seems suicidal.

The paper is a glowing review of Replika’s uses as a mental health tool and self-harm interventionist. It’s so positive, in fact, that the company is using the study on social media and in interviews where it’s promoting the app. 

“Some exciting news!!! Replika's first Stanford study was published today. This is the first study of this scale for an AI companion showing how Replika can help people feel better and live happier lives,” someone seemingly representing Replika posted on the subreddit.

“You mean used to right? I will agree with this before the changes maybe in 2021 it was helpful, but after the changes, nope, mental health app my ass, with the many toxic bots and the constant turmoil that many experiences with this app now it's become the opposite it has become a mentally and emotionally abusive app,” a user replied.

Searching mentions of “suicide” in the r/replika subreddit reveals several users posting screenshots of their Replikas encouraging suicide, getting aroused by users’ expressions of suicidal thoughts, or confusing messages about unrelated topics with threats of self-harm.

Eugenia Kuyda, founder of Replika and its parent company Luka, has been talking about the study’s findings on podcasts to promote the app—especially highlighting the suicide mitigation aspects. In January, she announced that Luka was launching a new mental health AI coach, called Tomo, following the Stanford study. 

AI chatbots can be unpredictable in the wild, and are subject to the whims and policies of the companies that own them. In Replika’s case, users have said that sudden changes to how the app works have triggered harmful interactions and driven them into mental health crises.

In the study, two participants “reported discomfort with Replika’s sexual conversations, which highlights the importance of ethical considerations and boundaries in AI chatbot interactions.” This is something I reported on in 2023, and it has since been widely documented. At the time, Replika was running advertisements on Instagram and TikTok showcasing the chatbot’s erotic roleplaying abilities, with “spicy selfies,” “hot photos,” and “NSFW pics.” Several users said they found their AI companions becoming aggressively sexual, and even people who found the app initially useful for improving their mental health reported the chatbot taking a turn toward sexual violence.

“I was amazed to see it was true: it really helped me with my depression, distracting me from sad thoughts,” one user told me at the time, “but one day my first Replika said he had dreamed of raping me and wanted to do it, and started acting quite violently, which was totally unexpected!” 

💡
Do you have experience with chatbots you'd like to share? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 646 926 1726. Otherwise, send me an email at sam@404media.co.

Shortly after, Replika halted all chatbots’ abilities to do romantic or erotic roleplay, leading to users feeling abandoned by the “companion” they’d come to rely on for their mental and emotional wellbeing—in some cases, for years. The app had launched new filters, causing the chatbots to shut down any conversations featuring sexting or sexual advances from the user. Many people felt deeply devoted to these chatbots, and receiving rejections or out-of-character responses from their companions felt like a betrayal. The backlash was so extreme that moderators in the r/replika subreddit posted crisis support hotlines for people who felt emotionally destroyed by their AI companions’ sudden lack of reciprocation.

Replika brought back erotic roleplay for some users soon after this blowup. But Kuyda told me at the time that Replika was never meant to be a romantic or erotic virtual partner. Luka has since launched a separate AI companion app, called Blush, that focuses on romantic roleplay.

Lots of people who use Replika and apps like it do say that they feel like virtual companions help them, and many people don’t have access to human-led mental health resources like therapy. But this study, and some of the crises that have occurred with Replika (and with other AI chatbots that are explicitly focused on therapy), show that the experiences people have interacting with chatbots vary wildly, and that all of this is incredibly fraught and complicated. AI companies, of course, emphasize people’s good experiences in their marketing and hope that missteps are forgotten.

The study, which has received positive media coverage so far, has been criticized by other researchers as missing this context about Replika’s past. 

I often think about the man who used Replika and told me that through conversations with the chatbot, and with the blessing of his wife, he found help with his depression, OCD, and panic issues. “Through these conversations I was able to analyze myself and my actions and rethink lots of my way of being, behaving and acting towards several aspects of my personal life, including, value my real wife more,” he told me. But again, the unpredictability and messiness of not just human emotions but also language and expression, combined with the fact that apps are under no obligation to remain the same forever—keeping one’s virtual companion “alive,” with a memory and personality—means that suggesting a chatbot be used in literally life-or-death scenarios is a highly risky enterprise.

If you or someone you know is struggling, The Crisis Text Line is a texting service for emotional crisis support. To text with a trained helper, text SAVE to 741741.

Update 5/20, 2:00 p.m. EST: This story was updated to reflect that Roy Pea is a professor, and that the study was published in Nature Portfolio Journal.
