4chan Users Embrace AI Voice Cloning Tool to Generate Content

The ease of use of the AI voice cloning tool has made it a popular means of targeting celebrities: 4chan users have used it to generate hate speech in famous people's voices. While the tool has legitimate uses, such as creating more realistic voices for game characters, its potential for misuse is clear. As artificial intelligence technology continues to develop, it is important to consider how it will be regulated to prevent abuse.

Internet trolls have quickly embraced a new AI startup whose tool allows anyone to clone a target's voice in a matter of seconds.

4chan users are using ElevenLabs' voice synthesis platform to copy celebrity voices, then making those voices read all sorts of material, including memes, erotica, offensive content, and misinformation.

ElevenLabs' newly released software, which offers fast, high-quality AI voice deepfaking to the general public, has caused alarm since its release over the weekend. Despite great advances in deepfake technology in recent years, this release stands out because it ships with essentially no safeguards.

Some posters on 4chan have been sharing AI-generated voice clips that sound like famous individuals such as Emma Watson and Joe Rogan, abuse of ElevenLabs' software that was first reported by Motherboard.

As part of The Verge's investigation, we used the ElevenLabs platform to imitate target voices quickly, generating audio clips that contained threats of violence, racism, and transphobia. Each clip took only a few seconds to produce.

In one test, we created a voice clone of President Joe Biden. We generated audio that sounded like the president announcing an invasion of Russia, and another in which he admitted the "Pizzagate" conspiracy theory is real, illustrating how the technology could be used to spread misinformation.

ElevenLabs touts its software as a fast way to create dubbed audio for all kinds of media, including film, television, and YouTube content.

The startup's high-quality voices require little editing, enabling applications such as real-time dubbing into foreign languages and the rapid generation of audiobooks, among other potential uses. It is not the only company working in this space, but it stands out within the field.

The Verge observed various posts on 4chan offering guides on how to use ElevenLabs' technology, what audio snippets are needed to train the machine learning model, and how to get around the platform's "credit" limits on generating audio samples.

The content created on 4chan varies widely in tone and purpose, ranging from memes and copypasta to virulent hate speech and erotic stories.

Voice clones of video game and anime characters, YouTubers, and VTubers are especially sought-after, due in part to the ready availability of sample audio for training the cloning software.

On Monday, ElevenLabs acknowledged an increase in voice cloning misuse and said it would investigate ways to counteract the problem, emphasizing its commitment to finding a solution in a Twitter thread.

The company says its technology allows generated audio to be traced back to the user who created it, and that it is working on safeguards such as identity verification and manual review of every voice cloning request.

For now, the company's software remains free to use, with no restrictions on the content that can be created. The Verge has yet to receive comment from the company and will update this post should any come through.

Source: theverge.com