In the first week of January, Kylie Brewer started getting strange messages.
“Someone has a only fans page set up in your name with this same profile,” one direct message from a stranger on TikTok said. “Do you have 2 accounts or is someone pretending to be you,” another said. And from a friend: “Hey girl I hate to tell you this, but I think there’s some picture of you going around. Maybe AI or deep fake but they don’t look real. Uncanny valley kind of but either way I’m sorry.”
The messages arrived during the frenzy of people using xAI’s chatbot and image generator Grok to create images of women and children, partially or fully nude, in sexually explicit scenarios. Between the last week of 2025 and the first week of 2026, Grok generated about three million sexualized images, including 23,000 that appear to depict children, according to researchers at the Center for Countering Digital Hate. The UK’s Ofcom and several attorneys general have since launched or demanded investigations into X and Grok. Earlier this month, police raided X’s offices in France as part of the government’s investigation into child sexual abuse material on the platform.
Messages from strangers and acquaintances are often the first way targets of abuse imagery learn that images of them are spreading online. Not only is the material itself disturbing; everyone, it seems, has already seen it. Someone was making sexually explicit images of Brewer and then, according to followers who sent her screenshots and links, uploading them to an OnlyFans account and charging a subscription fee for them.
“It was the most dejected that I've ever felt,” Brewer told me in a phone call. “I was like, let's say I tracked this person down. Someone else could just go into X and use Grok and do the exact same thing with different pictures, right?”
[Embedded TikTok from @kylie.brewer: “Please help me raise awareness and warn other women. We NEED to regulate AI… it’s getting too dangerous” #leftist #humanrights #lgbtq #ai #saawareness]
Brewer is a content creator whose work focuses on feminism, history, and education about those topics. She’s no stranger to online harassment: as an outspoken woman who approaches these and other issues through a leftist lens, she has faced the brunt of large-scale harassment campaigns for years, primarily from the “manosphere,” including “red-pilled” incels and right-wing influencers with podcasts. But when people messaged her in early January about finding an OnlyFans page in her name, featuring her likeness, it felt like an escalation.
One of the AI-generated images was based on a photo of her in a swimsuit from her Instagram, she said; someone had used AI to remove her clothing from the original photo. “My eyes look weird, and my hands are covering my face so it kind of looks like my face got distorted, and they very clearly tried to give me larger breasts, where it does not look like anything realistic at all,” Brewer said. Another image showed her in a seductive pose, kneeling or crawling, but wasn’t based on anything she’s ever posted online. Unlike the “nudified” swimsuit photo, which relied on Grok, it seemed to be an entirely new image made with a prompt or a combination of images.
Many of the people messaging her about the fake OnlyFans account were men trying to get access to it. By the time she clicked a link to the account that one of them sent, it was already gone. OnlyFans prohibits deepfakes and impersonation accounts; the platform did not respond to a request for comment. But OnlyFans isn’t the only platform where this can happen: makers of non-consensual deepfakes use platforms like Patreon to monetize abusive imagery of real people.
“I think that people assume, because the pictures aren't real, that it's not as damaging,” Brewer told me. “But if anything, this was worse because it just fills you with such a sense of lack of control and fear that they could do this to anyone. Children, women, literally anyone, someone could take a picture of you at the store, going grocery shopping, and ask AI or whatever to do this.”
A lack of control is something many targets of synthetic abuse imagery say they feel — and it can be especially intense for people who’ve experienced sexual abuse in real life. In 2023, after becoming the target of deepfake abuse imagery, popular Twitch streamer QTCinderella told me seeing sexual deepfakes of herself resurfaced past trauma. “You feel so violated…I was sexually assaulted as a child, and it was the same feeling,” she said at the time. “Like, where you feel guilty, you feel dirty, you feel like, ‘what just happened?’ And it’s bizarre that it makes that resurface. I genuinely didn’t realize it would.”
Other targets of deepfake harassment describe the same feeling: that this could happen anytime, anywhere, whether you’re at the grocery store or posting photos of your body online. For some, it makes it harder to get jobs or have a social life; the fear that anyone could be your harasser is constant. “It's made me incredibly wary of men, which I know isn't fair, but [my harasser] could literally be anyone,” Joanne Chew, another woman who dealt with severe deepfake harassment for months, told me last year. “And there are a lot of men out there who don't see the issue. They wonder why we aren't flattered for the attention.”

Brewer’s income depends on being visible online as a content creator. Logging off isn’t an option. And even for people who don’t rely on TikTok or Instagram for their income, removing oneself from online life is a painful and isolating tradeoff that they shouldn’t have to make to avoid being harassed. Often, minimizing one’s presence and accomplishments doesn’t even stop the harassment.
Since AI-generated face-swapping algorithms became accessible at the consumer level in late 2017, the technology has only gotten better and more realistic, and its effects on targets harder to combat. It was always used for this purpose: to shame and humiliate women online. Over the years, various laws have attempted to protect victims or hold platforms accountable for non-consensual deepfakes, but most have either fallen short or introduced new risks of censorship, marginalizing legal, consensual sexual speech and content online.

The TAKE IT DOWN Act, championed by Ted Cruz and Melania Trump, was signed into law in May 2025 as the first federal legislation to address deepfakes; it imposes a strict 48-hour turnaround requirement on platforms to remove reported content. President Donald Trump said that he would use the law himself, because “nobody gets treated worse online” than him. And in January, the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act passed the Senate and is headed to the House. The act would allow targets of deepfake harassment to sue the people making the content. But taking someone to court has always been a major barrier for everyday people experiencing harassment online; it’s expensive and time-consuming, even if they can pinpoint their abuser. In many cases, including Brewer’s, that’s impossible: it could be an army of people set on making her life miserable.
“It feels like any remote sense of privacy and protection that you could have as a woman is completely gone and that no one cares,” Brewer said. “It’s genuinely such a dehumanizing and horrible experience that I wouldn't wish on anyone... I’m hoping also, as there's more visibility that comes with this, maybe there’s more support, because it definitely is a very lonely and terrible place to be — on the internet as a woman right now.”