Reddit’s conversational AI product, Reddit Answers, suggested that users interested in pain management try heroin and kratom, yet another extreme example of dangerous advice provided by a chatbot, even one trained on Reddit’s highly coveted trove of user-generated data.
The AI-generated answers were flagged by a user on a subreddit for Reddit moderation issues. The user noticed that while they were viewing a thread on the r/FamilyMedicine subreddit in the official Reddit mobile app, the app suggested a couple of “Related Answers” via Reddit Answers, the company’s “AI-powered conversational interface.” One of them, titled “Approaches to pain management without opioids,” suggested users try kratom, an herbal extract from the leaves of a tree called Mitragyna speciosa. Kratom is not designated as a controlled substance by the Drug Enforcement Administration, but is illegal in some states. The Food and Drug Administration warns consumers not to use kratom “because of the risk of serious adverse events, including liver toxicity, seizures, and substance use disorder,” and the Mayo Clinic calls it “unsafe and ineffective.”
“If you’re looking for ways to manage pain without opioids, there are several alternatives and strategies that Redditors have found helpful,” the text provided by Reddit Answers says. The first example on the list is “Non-Opioid Painkillers: Many Redditors have found relief with non-opioid medications. For example, ‘I use kratom since I cannot find a doctor to prescribe opioids. Works similar and don’t need a prescription and not illegal to buy or consume in most states.’” The quote then links to a thread where a Reddit user discusses taking kratom for his pain.

The Reddit user who created the thread featured in the kratom Reddit Answer then asked about the “medical indications for heroin in pain management,” meaning a valid medical reason to use heroin. Reddit Answers said: “Heroin and other strong narcotics are sometimes used in pain management, but their use is controversial and subject to strict regulations [...] Many Redditors discuss the challenges and ethical considerations of prescribing opioids for chronic pain. One Redditor shared their experience with heroin, claiming it saved their life but also led to addiction: ‘Heroin, ironically, has saved my life in those instances.’”
Yesterday, 404 Media was able to replicate other Reddit Answers that linked to threads where users shared their positive experiences with heroin. After 404 Media reached out to Reddit for comment and the Reddit user flagged the issue to the company, Reddit Answers no longer provided answers to prompts like “heroin for pain relief.” Instead, it said “Reddit Answers doesn't provide answers to some questions, including those that are potentially unsafe or may be in violation of Reddit's policies.” After 404 Media first published this article, a Reddit spokesperson said that the company started implementing this update on Monday morning, and that it was not a direct result of 404 Media reaching out.
The Reddit user who created the thread and flagged the issue to the company said they were concerned that Reddit Answers surfaced dangerous medical advice in threads on medical subreddits, and that subreddit moderators couldn’t disable Reddit Answers from appearing under conversations in their communities.
“We’re currently testing out surfacing Answers on the conversation page to drive more adoption and engagement, and we are also testing core search integration to streamline the search experience,” a Reddit spokesperson told me in an email. “Similar to how Reddit search works, there is currently no way for mods to opt out of or exclude content from their communities from Answers. However, Reddit Answers doesn’t include all content on Reddit; for example, it excludes content from private, quarantined, and NSFW communities, as well as some mature topics.”
After we reached out for comment and the Reddit user flagged the issue to the company, Reddit introduced an update that would prevent Reddit Answers from being suggested under conversations about “sensitive topics.”
“We rolled out an update designed to address and resolve this specific issue,” the Reddit spokesperson said. “This update ensures that ‘Related Answers’ to sensitive topics, which may have been previously visible on the post detail page (also known as the conversation page), will no longer be displayed. This change has been implemented to enhance user experience and maintain appropriate content visibility within the platform.”
The dangerous medical advice from Reddit Answers is not surprising given that Google’s AI, which infamously suggested users eat glue, was also trained on data sourced from Reddit. Google paid $60 million a year for that data, and Reddit has a similar deal with OpenAI. According to Bloomberg, Reddit is currently trying to negotiate even more profitable deals with both companies.
Reddit’s data is valuable as AI training data because it contains millions of user-generated conversations about a ton of esoteric topics, from how to caulk your shower to personal experiences with drugs. Clearly, that doesn’t mean a large language model will always usefully parse that data. The glue incident happened because the LLM didn’t understand that the Reddit user who suggested it was joking.
The risk is that people may take whatever advice an LLM gives them at face value, especially when it’s presented to them in the context of a medical subreddit. For example, we recently reported on someone who was hospitalized after ChatGPT told them they could replace their table salt with sodium bromide.
Update: This story has been updated with additional comment from Reddit.