
No One Knows How to Deal With 'Student-on-Student' AI CSAM

A new report from Stanford finds that schools, parents, police, and our legal system are not prepared to deal with the growing problem of minors using AI to generate CSAM of other minors.
Photo by Redd Francisco / Unsplash

Schools, parents, police, and existing laws are not prepared to deal with the growing problem of students and minors using generative AI tools to create child sexual abuse material of their peers, according to a new report from researchers at the Stanford Cyber Policy Center.

The report, which is based on public records and interviews with NGOs, internet platform staff, law enforcement, government employees, legislators, victims, parents, and groups that offer online training to schools, found that despite the harm this nonconsensual content causes, the practice of creating it has been normalized by mainstream online platforms and certain online communities.

“Respondents told us there is a sense of normalization or legitimacy among those who create and share AI CSAM,” the report said. “This perception is fueled by open discussions in clear web forums, a sense of community through the sharing of tips, the accessibility of nudify apps, and the presence of community members in countries where AI CSAM is legal.”

The report says that while children may recognize that AI-generating nonconsensual content is wrong, they can assume “it’s legal, believing that if it were truly illegal, there wouldn’t be an app for it.” The report, which cites several 404 Media stories about this issue, notes that this normalization is in part a result of many “nudify” apps being available on the Google and Apple app stores, and that their ability to AI-generate nonconsensual nudity is openly advertised to students on Google and on social media platforms like Instagram and TikTok. One NGO employee told the authors of the report that “there are hundreds of nudify apps” that lack basic built-in safety features to prevent the creation of CSAM, and that even as an expert in the field he regularly encounters AI tools he’s never heard of, but that on certain social media platforms “everyone is talking about them.”

The report notes that while 38 U.S. states now have laws about AI CSAM and the newly signed federal Take It Down Act will further penalize AI CSAM, states “failed to anticipate that student-on-student cases would be a common fact pattern. As a result, that wave of legislation did not account for child offenders. Only now are legislators beginning to respond, with measures such as bills defining student-on-student use of nudify apps as a form of cyberbullying.”

One law enforcement officer told the researchers how accessible these apps are. “You can download an app in one minute, take a picture in 30 seconds, and that child will be impacted for the rest of their life,” they said.

One student victim interviewed for the report said that she struggled to believe that someone actually AI-generated nude images of her when she first learned about them. She knew other students used AI for writing papers, but was not aware people could use AI to create nude images. “People will start rumors about anything for no reason,” she said. “It took a few days to believe that this actually happened.”

Another victim and her mother interviewed for the report described the shock of seeing the images for the first time. “Remember Photoshop?” the mother asked. “I thought it would be like that. But it’s not. It looks just like her. You could see that someone might believe that was really her naked.”

One victim, whose original photo was taken from a non-social media site, said that someone took it and “ruined it by making it creepy [...] he turned it into a curvy boob monster, you feel so out of control.”

In an email to school staff, one victim wrote: “I was unable to concentrate or feel safe at school. I felt very vulnerable and deeply troubled. The investigation, media coverage, meetings with administrators, no-contact order [against the perpetrator], and the gossip swirl distracted me from school and class work. This is a terrible way to start high school.”

The mother of one victim the researchers interviewed for the report feared that the images could resurface in the future, potentially affecting her daughter’s college applications, job opportunities, or relationships. “She also expressed a loss of trust in teachers, worrying that they might be unwilling to write a positive college recommendation letter for her daughter due to how events unfolded after the images were revealed,” the report said.

💡
Has AI-generated content been a problem in your school? I would love to hear from you. Using a non-work device, you can message me securely on Signal at emanuel.404. Otherwise, send me an email at emanuel@404media.co.

In 2024, Jason and I wrote a story about how one school in Washington state struggled to deal with its students using a nudify app on other students. The story showed how teachers and school administrators weren’t familiar with the technology, and initially failed to report the incident to the police even though it legally qualified as “sexual abuse” and school administrators are “mandatory reporters.”

According to the Stanford report, many teachers lack training on how to respond to a nudify incident at their school. A Center for Democracy and Technology report found that 62 percent of teachers say their school has not provided guidance on policies for handling incidents involving authentic or AI-generated nonconsensual intimate imagery. A 2024 survey of teachers and principals found that 56 percent did not get any training on “AI deepfakes.” One provider told the authors of the report that while many schools have crisis management plans for “active shooter situations, they had never heard of a school having a crisis management plan for a nudify incident, or even for a real nude image of a student being circulated.”

The report makes several recommendations to schools, like providing victims with third-party counseling services and academic accommodations, drafting language to communicate with the school community when an incident occurs, ensuring that students are not discouraged or punished for reporting incidents, and contacting the school’s legal counsel to assess the school’s legal obligations, including its responsibility as a “mandatory reporter.” 

The authors also emphasized the importance of anonymous tip lines that allow students to report incidents safely. The report cites two incidents that were initially discovered this way: one in Pennsylvania, where a student used the state’s Safe2Say Something tipline to report that students were AI-generating nude images of their peers, and another in Washington, where a school first learned about a nudify incident through a submission to its online harassment, intimidation, and bullying tipline.

One provider of training to schools emphasized the importance of such reporting tools because many students lack a trusted adult they can turn to. “Anonymous reporting tools are one of the most important things we can have in our school systems,” they said.

Notably, the report does not take a position on whether schools should educate students about nudify apps because “there are legitimate concerns that this instruction could inadvertently educate students about the existence of these apps.”
