It’s now illegal in Michigan to make AI-generated sexual imagery of someone without their written consent. Michigan joins 47 other states in the U.S. that have enacted their own deepfake laws.
Michigan Governor Gretchen Whitmer signed the bipartisan-sponsored House Bill 4047 and its companion bill 4048 on August 26. In a press release, Whitmer specifically called out the sexual uses of deepfakes. “These videos can ruin someone’s reputation, career, and personal life. As such, these bills prohibit the creation of deep fakes that depict individuals in sexual situations and creates sentencing guidelines for the crime,” the press release states. That’s something we’ve seen time and time again from victims of deepfake harassment. Over the six years since consumer-level deepfakes first hit the internet, they’ve told us that sexual harassment, driven by carelessness and vindictiveness toward the women its users target, has always been the technology’s most popular use.
Making a deepfake of someone is now a misdemeanor in Michigan, punishable by up to one year of imprisonment and a fine of up to $3,000, if the creator “knew or reasonably should have known that the creation, distribution, dissemination, or reproduction of the deep fake would cause physical, emotional, reputational, or economic harm to an individual falsely depicted,” and if the deepfake depicts the target engaging in a sexual act and the target is identifiable “by a reasonable individual viewing or listening to the deep fake,” the law states.

This is all before the deepfake’s creator posts it online. The offense escalates to a felony if the person depicted suffers financial loss; if the person making the deepfake intended to profit from it; if they maintain a website or app for the purpose of creating deepfakes, or posted the deepfake to any website at all; if they intended to “harass, extort, threaten, or cause physical, emotional, reputational, or economic harm to the depicted individual”; or if they have a previous conviction.
The law specifically says it isn’t to be construed to make platforms liable; liability rests with the person making the deepfakes. But federal law already holds platforms accountable: the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, or TAKE IT DOWN Act, introduced by Ted Cruz in June 2024 and signed into law in May this year, makes platforms liable for failing to moderate deepfakes and imposes extremely short timelines for acting on users’ reports of AI-generated abuse imagery. That law has drawn a lot of criticism from civil liberties and online speech activists for being overbroad. As the Verge pointed out before it became law, because the Trump administration’s FTC is in charge of enforcing it, it could easily become a weapon against all sorts of speech, including constitutionally protected speech.
“Platforms that feel confident that they are unlikely to be targeted by the FTC (for example, platforms that are closely aligned with the current administration) may feel emboldened to simply ignore reports of NCII,” the Cyber Civil Rights Initiative told the Verge in April. “Platforms attempting to identify authentic complaints may encounter a sea of false reports that could overwhelm their efforts and jeopardize their ability to operate at all.”

“If you do not have perfect technology to identify whatever it is we're calling a deepfake, you are going to get a lot of guessing being done by the social media companies, and you're going to get disproportionate amounts of censorship,” especially for marginalized groups, Kate Ruane, an attorney and director of the Center for Democracy and Technology’s Free Expression Project, told me in June 2024. “For a social media company, it is not rational for them to open themselves up to that risk, right? It's simply not. And so my concern is that any video with any amount of editing, which is like every single TikTok video, is then banned for distribution on those social media sites.”
On top of the TAKE IT DOWN Act, at the state level, deepfake laws are either pending or enacted in every state except New Mexico and Missouri. In some states, like Wisconsin, the law only protects minors from deepfakes by expanding child sexual abuse imagery laws.
Even as deepfake legislation finally seems to catch up to the notion that AI-generated sexual abuse imagery is abusive, reporting this kind of harassment to authorities or pursuing civil action against one’s own abuser is still difficult, expensive, and re-traumatizing in most cases.