
Taiwan Claims Deepfake Audio Is Defaming a Presidential Candidate

Investigators in Taiwan allegedly used deepfake detection software from the U.S. and found that the recording is likely fake.
Taiwan People’s Party chairman and presidential nominee Ko Wen-je. Image: Ko Wen-je/YouTube

One of the nominees in Taiwan’s 2024 presidential election claims he is being defamed by a short audio clip that deepfakes his voice to disparage his opponent.

In the clip, a voice that sounds like Ko Wen-je, the official nominee of the Taiwan People’s Party (TPP) and former mayor of Taipei, talks about his opponent, Lai Ching-te, the current vice president and nominee of the Democratic Progressive Party (DPP).

Several Chinese media outlets reported that the 57-second audio recording, which purported to capture Ko speaking, was emailed to reporters.

In the clip, the voice that sounds like Ko’s criticizes Lai’s recent visit to the United States. Ko’s allegedly deepfaked voice calls Lai pompous and says that Taiwanese people attending Lai’s events were each paid $800 to support him.

A spokesperson for the TPP said in a press conference that the party has been communicating with Taiwan’s Ministry of Justice Investigation Bureau, which is investigating the audio clip. Focus Taiwan reported that in a press conference, Ko “described the attempt to put words in his mouth as ‘crude and egregious.’”

The spokesperson also showed the email that allegedly shared the audio clip.

Last week, Taiwan’s Central News Agency reported that the Investigation Bureau, which uses deepfake detection software from the U.S. company Reality Defender, found that the recording is likely fake.

“We cannot confirm specific clients or usage of Reality Defender,” Ben Colman, co-founder and CEO of Reality Defender, told 404 Media in a statement. “We work with the largest governments and enterprises across a myriad of use cases, and what media they scan (and why they scan it) remains wholly inaccessible to our team due to incredibly strict security and privacy protocols put in place.”
 
We don’t know whether the clip making headlines in Taiwan was deepfaked, but if it was, it’s a lot more convincing than last year’s deepfake video of Ukrainian president Volodymyr Zelenskyy telling Ukrainians to surrender to Russia’s invasion. Copying someone’s voice with AI is also trivially easy. Services like ElevenLabs allow anyone to “clone” a voice from a clean sample recording only a minute long. Once a voice is cloned, it can be made to say anything you type. We’ve previously reported that such tools can be used to fool voice ID systems and break into bank accounts, but as is often the case with deepfake and generative AI tools, they are more commonly used to imitate celebrities and make them say stupid or horrible things.
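To make the mechanics concrete, here is a minimal sketch of the clone-then-synthesize workflow such services expose, written in Python against ElevenLabs’ public REST API. The endpoint paths, field names, and model ID reflect the company’s published documentation at the time of writing and may change; the API key, file names, and voice name are placeholders, and this is an illustration rather than a definitive integration.

```python
# Minimal sketch of instant voice cloning followed by text-to-speech.
# Endpoints and fields are based on ElevenLabs' public API docs and may change.
import requests

API_KEY = "your-elevenlabs-api-key"  # placeholder
BASE = "https://api.elevenlabs.io/v1"
HEADERS = {"xi-api-key": API_KEY}

# Step 1: "clone" a voice from a short, clean sample recording.
with open("sample.mp3", "rb") as f:
    resp = requests.post(
        f"{BASE}/voices/add",
        headers=HEADERS,
        data={"name": "cloned-voice"},  # placeholder voice name
        files={"files": ("sample.mp3", f, "audio/mpeg")},
    )
resp.raise_for_status()
voice_id = resp.json()["voice_id"]

# Step 2: make the cloned voice say anything you type.
resp = requests.post(
    f"{BASE}/text-to-speech/{voice_id}",
    headers=HEADERS,
    json={
        "text": "Any text typed here comes out in the cloned voice.",
        "model_id": "eleven_multilingual_v2",
    },
)
resp.raise_for_status()

# The response body is the synthesized audio.
with open("output.mp3", "wb") as out:
    out.write(resp.content)
```

The point of the sketch is how little is required: one short sample upload and one text request, with no step that verifies the speaker consented to being cloned.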
