
Sora 2 Watermark Removers Flood the Web

Bypassing Sora 2's rudimentary safety features is easy, and experts worry it'll lead to a new era of scams and disinformation.
Photo by Mariia Shalabaieva / Unsplash

Sora 2, OpenAI’s new AI video generator, puts a visual watermark on every video it generates. But the little cartoon-eyed cloud logo meant to help people distinguish between reality and AI-generated bullshit is easy to remove, and half a dozen websites will help anyone do it in a few minutes.

A simple search for “sora watermark” on any social media site will return links to places where a user can upload a Sora 2 video and remove the watermark. 404 Media tested three of these websites, and they all seamlessly removed the watermark from the video in a matter of seconds.

Hany Farid, a UC Berkeley professor and an expert on digitally manipulated images, said he’s not shocked at how fast people were able to remove watermarks from Sora 2 videos. “It was predictable,” he said. “Sora isn’t the first AI model to add visible watermarks and this isn’t the first time that within hours of these models being released, someone released code or a service to remove these watermarks.”

Hours after its release on September 30, Sora 2 emerged as a copyright violation machine full of Nazi SpongeBobs and criminal Pikachus. OpenAI has tamped down on that kind of content after the initial thrill of seeing Rick and Morty shill for crypto sent people scrambling to download the app. Now that the novelty is wearing off, we’re grappling with the unpleasant fact that OpenAI’s new tool is very good at making realistic videos that are hard to distinguish from reality.

To keep us all from going mad, OpenAI has offered watermarks. “At launch, all outputs carry a visible watermark,” OpenAI said in a blog post. “All Sora videos also embed C2PA metadata—an industry-standard signature—and we maintain internal reverse-image and audio search tools that can trace videos back to Sora with high accuracy, building on successful systems from ChatGPT image generation and Sora 1.”

But experts say that those safeguards fall short. “A watermark (visual label) is not enough to prevent persistent nefarious users attempting to trick folks with AI generated content from Sora,” Rachel Tobac, CEO of SocialProof Security, told 404 Media.

Tobac also said she’s seen tools that dismantle AI-generated metadata by altering the content’s hue and brightness. “Unfortunately we are seeing these Watermark and Metadata Removal tools easily break that standard,” Tobac said of the C2PA metadata. “This standard will still work for less persistent AI slop generators, but will not stop dedicated bad actors from tricking people.”

As an example of how much trouble we’re in, Tobac pointed to an AI-generated video, which she called “stranger husband train,” that went viral on TikTok over the weekend. In the video, a woman riding the subway cutely proposes marriage to a complete stranger sitting next to her. He accepts. One instance of the video has been liked almost 5 million times on TikTok. It didn’t have a watermark.

“We're already seeing relatively harmless AI Sora slop confusing even the savviest of Gen Z and Millennial users,” Tobac said. “With many typically-savvy commenters naming how ‘cooked’ we are because they believed it was real. This type of viral AI slop account will attempt to make as much money from the creator fund as possible before social media companies learn they need to invest in detecting and limiting AI slop, before their platform succumbs to the Slop Fest.”

But it’s not just the slop. It’s also the scams. “At its most innocuous, AI generated content without watermarking and metadata accelerates the enshittification of the internet and tricks people with inflammatory content,” Tobac said. “At its most malignant, AI generated content without watermarking and metadata could lead to every day people losing their savings in scams, becoming even more disenfranchised during election season, could tank a stock price within a few hours, could increase the tension between differing groups of people, and could inspire violence, terrorism, stampede or panic amongst everyday folks.”

Tobac showed 404 Media a few horrifying videos to illustrate her point. In one, a child pleads with their parents for bail money. In another, a woman tells the local news she’s going home after trying to vote because her polling place was shut down. In a third, Sam Altman tells a room that he can no longer keep OpenAI afloat because the copyright cases have become too much to handle. All of the videos looked real. None of them had a watermark.

“All of these examples have one thing in common,” Tobac said. “They’re attempting to generate AI content for use off Sora 2’s platform on other social media to create mass or targeted confusion, harm, scams, dangerous action, or fear for everyday folk who don’t understand how believable AI can look now in 2025.”

Farid told 404 Media that Sora 2 wasn’t uniquely dangerous. It’s just one among many. “It is part of a continuum of AI models being able to create images and video that are passing through the uncanny valley,” he said. “Having said that, both Veo 3 and Sora 2 are big steps in our ability to create highly visual compelling videos. And, it seems likely that the same types of abuses we’ve seen in the past will be supercharged by these new powerful tools.”

According to Farid, OpenAI is decent at employing strategies like watermarks, content credentials, and semantic guardrails to manage malicious use. But it doesn’t matter. “It is just a matter of time before someone else releases a model without these safeguards,” he said.

Both Tobac and Farid said that the ease with which people can remove watermarks from AI-generated content wasn’t a reason to stop using watermarks. “Using a watermark is the bare minimum for an organization attempting to minimize the harm that their AI video and audio tools create,” Tobac said, but she thinks the companies need to go further. “We will need to see a broad partnership between AI and Social Media companies to build in detection for scams/harmful content and AI labeling not only on the AI generation side, but also on the upload side for social media platforms. Social Media companies will also need to build large teams to manage the likely influx of AI generated social media video and audio content to detect and limit the reach for scammy and harmful content.”

Tech companies have, historically, been bad at that kind of moderation at scale.

“I’d like to know what OpenAI is doing to respond to how people are finding ways around their safeguards,” Farid said. “We are seeing, for example, Sora not allowing videos that reference Hitler in the prompt, but then users are finding workarounds by simply describing what Hitler looks like (e.g., black hair, black military outfit and a Charlie Chaplin mustache.) Will they adapt and strengthen their guardrails? Will they ban users from their platforms? If they are not aggressive here, then this is going to end badly for us all.”

OpenAI did not respond to 404 Media’s request for comment.
