How A New AI Voice Tool Is Being Abused To Deepfake Audio

The world has been abuzz lately over AI-driven voice synthesis. One area where the technology may already have gone too far, however, is the creation of realistic deepfakes of celebrity audio. AI voice cloning can be genuinely useful, but its misuse needs to be addressed, particularly where public figures are concerned.

Read on to learn more about how deepfake audio created with AI voices is being misused and what potential solutions exist for tackling malicious applications of this technology.

ElevenLabs released the beta version of its platform just a few days ago, allowing users to clone any voice they like or generate entirely synthetic ones for text-to-speech audio. Unsurprisingly, parts of the internet wasted no time putting the tool to malicious use.
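For readers unfamiliar with how such platforms are typically used, the sketch below shows what a text-to-speech request against a generic voice-cloning API might look like. The endpoint, header name, voice ID, and payload fields are illustrative assumptions, not ElevenLabs' documented API.

# Minimal sketch of a text-to-speech call to a voice-cloning service.
# The endpoint, header, and payload fields below are illustrative
# placeholders, not any vendor's documented API.
import requests

API_KEY = "your-api-key"           # hypothetical credential
VOICE_ID = "cloned-voice-123"      # hypothetical ID of a previously cloned voice
ENDPOINT = f"https://api.example-tts.com/v1/text-to-speech/{VOICE_ID}"

payload = {
    "text": "Hello, this is a synthetic voice.",
    "model": "multilingual-v1",    # assumed model name
}

resp = requests.post(
    ENDPOINT,
    headers={"x-api-key": API_KEY},
    json=payload,
    timeout=30,
)
resp.raise_for_status()

# The service is assumed to return raw audio bytes (e.g. MP3).
with open("output.mp3", "wb") as f:
    f.write(resp.content)

The point of the sketch is how low the barrier is: a short text prompt and a voice ID are enough to produce audio in someone else's voice, which is exactly what makes abuse so easy.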

ElevenLabs disclosed on Twitter that it had noticed a growing number of cases in which its voice cloning technology was being abused, and said it plans to introduce additional safeguards to curb inappropriate use.

While ElevenLabs stayed vague about how its technology was being misused, Motherboard dug into 4chan posts to find examples. There, it found clips of generated voices that sound like celebrities saying objectionable things.

One clip featured a voice resembling Emma Watson's supposedly reading passages from Mein Kampf; other recordings contained transphobic, homophobic, and racist slurs as well as threatening statements.

It is uncertain whether every clip was made with the same tool, but a large collection of voice files posted on 4chan included a link to ElevenLabs' platform, which suggests its technology was likely involved.

The emergence of deepfake audio clips shouldn't come as a surprise; we saw a similar trend a few years earlier.

Deepfake videos, driven by advances in AI and machine learning, have already caused alarm, particularly in pornography, where existing footage is edited to feature the faces of celebrities, in some cases Emma Watson's among them.

ElevenLabs is now seeking feedback on how to combat abuse of its technology. As one measure, it is considering stricter account verification for voice cloning, such as collecting payment information or an ID, on top of existing safeguards.

The company is also considering verifying that customers hold the rights to a voice they want to clone, which would require them to submit a sample of that voice reading prompted text.
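One way such a prompted-text check could work in practice is sketched below: the service issues a random phrase, the customer records the target voice reading it, and a speech-to-text pass confirms the transcript matches. The phrase list, nonce scheme, and transcribe_audio() stub are assumptions for illustration; a real implementation would plug in an actual speech-to-text backend and likely add speaker-similarity checks as well.

# Sketch of a prompted-text ownership check. transcribe_audio() is a
# placeholder for whatever speech-to-text service the platform uses.
import secrets
import string

PHRASES = [
    "the quick brown fox jumps over the lazy dog",
    "voice cloning requires explicit consent",
    "please read this sentence in your natural voice",
]

def issue_challenge() -> str:
    """Pick a random phrase and append a nonce so old recordings can't be reused."""
    nonce = "".join(secrets.choice(string.digits) for _ in range(4))
    return f"{secrets.choice(PHRASES)} {nonce}"

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace before comparing transcripts."""
    return " ".join(text.lower().split())

def transcribe_audio(recording: bytes) -> str:
    """Placeholder: call a real speech-to-text backend here."""
    raise NotImplementedError("plug in an STT service")

def verify_ownership(challenge: str, recording: bytes) -> bool:
    """True if the uploaded recording reads back the issued challenge phrase."""
    transcript = transcribe_audio(recording)
    return normalize(transcript) == normalize(challenge)

A check like this raises the bar for abuse because the attacker would need a fresh recording of the target voice reading an arbitrary phrase, not just clips scraped from interviews or films.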

The company is even contemplating dropping the Voice Lab tool altogether and requiring users to submit voice cloning requests that would be reviewed manually.

While the technology is still in its early stages, it can already be put to a variety of malicious uses, as the deepfake celebrity clips show, and it is likely only a matter of time before other abuses surface. We need to watch these developments closely and be prepared to defend against them.

Source: engadget.com
