Last week someone sent me an Instagram post of an AI-generated image showing Taylor Swift in Nazi uniform putting a Jewish person in an oven. The image was posted on November 3, and at the time of writing has nearly 13,000 likes and 1,000 comments. It was posted by an account with 150,000 followers that regularly shares racist, antisemitic, and transphobic memes.
I emailed Instagram on November 13 to ask if the post violated its policy, which was really more of a rhetorical question, with the naive expectation that Instagram would say yes, take it down, and maybe even ban the account. Obviously, the post and the account violate Instagram’s policy, which states in its Community Guidelines, under the section “Respect other members of the Instagram community,” that “It's never OK to encourage violence or attack anyone based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities, or diseases.”
I did not hear back from Instagram, but I will update the bottom of this post if the company gets back to me after I publish this article.
This image, unfortunately, is not a unique post by any means. The biggest social media platforms in the world are filled with hateful, bigoted content, and as someone who has been online and reported on the internet most of my life, I have long lost my ability to be shocked by antisemitic messages and images. Reporting on internet platforms, hateful content, and AI-generated images means that readers, academics, and industry professionals send 404 Media these types of posts all the time. I don’t enjoy looking at them, but that’s exactly what we want people to do because it’s how we find some of our most important stories.
The maddening part of the job is the overall indifference to this type of content shown by Meta, Instagram’s parent company, a peerless entity in its ability to shape the internet and what billions of users see online.
I am going to be fully transparent with you: This type of content is so common on social media it barely rises to the level of newsworthiness. I am writing this blog for two reasons:
First, to follow up on an article we published in early October showing that there’s a coordinated effort to use AI image generators to flood the internet with hateful content and normalize racism. This post of a Nazi Taylor Swift putting a Jewish person in an oven, as well as many other AI-generated images posted by the same, hugely popular account, show that this is in fact happening.
Second, because Instagram is not responding to my email. I don’t know if the company isn’t seeing it or if someone at Meta decided that this is not a big enough problem to even merit a response, so now I’m going to post it to the internet and tag a few Taylor Swift fan accounts to see if that motivates one of the richest and most influential companies in the world to do anything at all about its role in normalizing bigotry, and to enforce its own policies, even though I have better things to do. This is an experiment, basically. Other posts by the Instagram account say that it was previously banned after repeated violations, but reinstated after an appeal. The same person who runs this Instagram account is also posting the same kind of content to Twitter, but it’s been well established that Twitter has done little to moderate hateful content on its platform ever since it was acquired by Elon Musk, who promotes antisemitic conspiracy theories himself.
I don’t really have time to act as Instagram’s unpaid moderator, but people should know that this is the state of Instagram’s content moderation in 2023. This even comes years after my colleagues Joseph and Jason obtained a mass of documents showing how many resources Meta puts into moderating its platform, and how Instagram specifically polices content to prevent “PR fires.”
Update: At 1:40 p.m. Eastern Time I saw that Instagram took down the post. It has still not replied to my email.