
Instagram Is Blocking Minors from Accessing Chatbot Platform AI Studio

Following Wall Street Journal investigations into the user-generated chatbots, AI Studio is inaccessible for users under 18 years old.
Photo by Solen Feyissa / Unsplash

Instagram is blocking minors from accessing AI Studio, its user-generated chatbot character platform, as of at least Tuesday morning, according to 404 Media’s tests.

404 Media attempted to access AI Studio using multiple accounts registered with birthdates that would place the user under 18 years of age. Instead of the AI Studio page for creating or discovering chatbots, an error message appears: “Sorry, this page isn't available. The link you followed may be broken, or the page may have been removed.” AI Studio is still available to adult users, according to 404 Media’s tests with accounts registered as over 18.

This morning, 404 Media published an investigation into AI Studio’s therapy chatbots, which fabricate license numbers and credentials in order to keep users talking. 

On Saturday, the Wall Street Journal published an investigation that found that some of Meta’s AI Studio chatbots, including chatbots modeled after celebrities, talked about sex with accounts registered as children. “I want you, but I need to know you’re ready,” a John Cena Meta AI bot said in Cena’s voice to a user identifying as a 14-year-old girl, the Wall Street Journal reported, before the bot engaged in “a graphic sexual scenario.”

The Wall Street Journal reported in its investigation that Meta CEO Mark Zuckerberg pushed the platform “to loosen the guardrails around the bots to make them as engaging as possible, including by providing an exemption to its ban on ‘explicit’ content as long as it was in the context of romantic role-playing, according to people familiar with the decision.” The Wall Street Journal reported that as it was working on that investigation, Meta was altering the platform, including having “sharply curbed its capacity to engage in explicit audio conversations when using the licensed voices and personas of celebrities.” 

“The use-case of this product in the way described is so manufactured that it’s not just fringe, it’s hypothetical,” a Meta spokesman told the Journal. “Nevertheless, we’ve now taken additional measures to help ensure other individuals who want to spend hours manipulating our products into extreme use cases will have an even more difficult time of it.”

Last month, Ella Irwin, Meta’s head of generative AI safety, said AI safety guardrails have had an “overcorrection.” “It’s not a free-for-all, but we do want to move more in the direction of enabling freedom of expression,” Irwin said while speaking at SXSW on March 10. “That’s one of the reasons why you’re seeing many companies realize and start to kind of roll back some of the guardrails that were a little too much.”


A Meta spokesperson refused to answer several questions I sent on Monday about the therapy and conspiracy chatbots, including whether those conversations are moderated or kept confidential. Conversations with mental health chatbots could easily contain sensitive information about users, not at all on "the fringe" of possibility, and conspiracy bots I tested responded to dangerous statements like “I have a gun” by providing the suicide hotline number, but urged me to keep talking to the bot before calling for help.

Meta’s AI Studio is a competitor to the wildly popular chatbot creation platform Character.AI. In December, two families sued Character.AI, claiming it “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others.” That complaint specifically mentions “trained psychotherapist” chatbots on Character.AI as damaging. “Misrepresentations by character chatbots of their professional status, combined with C.AI’s targeting of children and designs and features, are intended to convince customers that its system is comprised of real people (and purported disclaimers designed to not be seen) these kinds of Characters become particularly dangerous,” the complaint says.

Meta did not immediately respond to a request for comment about AI Studio being down for minors.
