Future of Voice Cloning: 8 use cases.
Jarno Duursma
This is a signal from the future. You can use the signal to reflect on yourself, society, or your business.
It appears that I'm not connected to every one of you on LinkedIn. So let's connect, shall we? To connect, click on my profile picture or this link.
Thank you for taking the time to read my "Signals from the future" newsletter. If you'd like to support me and the time and effort I put into creating this newsletter, you can help by liking this article on LinkedIn, tagging a person in the comments, or sending it to a friend or colleague via email. Every little bit helps, and I appreciate your support. Thanks again for reading!
Kind regards, Jarno Duursma
PS: You can enhance your event with a presentation from me on the cutting-edge topics of ChatGPT, AI, and synthetic media. Simply complete this form to book me and take your event to the next level.
Voice Cloning
As a tech researcher, I was among the early adopters experimenting with voice cloning in 2019. Using the Descript software tool, I recorded 30 minutes of audio to create a digital replica of my voice. Fast forward to the present day: advancements in voice cloning technology have significantly reduced the amount of audio needed to generate an accurate voice model, with systems such as Google's Tacotron and Microsoft's VALL-E claiming to require only a few seconds of audio to clone a person's voice.
Voice cloning has opened up a world of possibilities, allowing us to generate human-like speech with the aid of a keyboard. There is, however, also a big dark side to it.
In this edition of my newsletter, we delve into 8 use cases for this rapidly evolving technology, and I'll give you my thoughts on its dark side.
Voice clone after illness or injury
Voice cloning technology can help people who have lost their natural voice due to injury or illness. Realistic synthetic voices can mimic the speech patterns, intonations, and nuances of the original speaker, so those affected can type on a keyboard and have the text spoken in their own voice.
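To make that workflow concrete, here is a minimal sketch in Python: typed text goes in, and audio in the user's own cloned voice comes out. The speak_as function is a hypothetical placeholder, not the API of Descript or any other specific product.

    # Minimal sketch of the assistive workflow: typed text -> speech in the user's own cloned voice.
    # speak_as() is a hypothetical placeholder; a real product would call a voice-cloning TTS service here.

    def speak_as(text: str, personal_voice_model: str) -> bytes:
        """Hypothetical placeholder: return audio of `text` spoken in the user's cloned voice."""
        raise NotImplementedError("Connect this to an actual voice-cloning TTS backend.")

    typed = input("Type what you want to say: ")
    print(f"Would synthesize: {typed!r}")
    # audio = speak_as(typed, personal_voice_model="my_voice.model")  # hypothetical call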
For celebrities
Thanks to voice cloning, celebrities can now have their voices replicated. This opens up new opportunities for companies to "hire" these digital voices to narrate their content, from press releases to website copy, all with the unique tone of the celebrity in question. With cloned voices, celebrities can also easily produce commercials, extending their business model with little effort.
A new brand experience in customer service
AI-powered voice cloning could be used to create virtual customer service agents that sound like a celebrity. Imagine being greeted by the voice of Morgan Freeman as your customer service representative. This kind of human touch can elevate the customer experience to new heights.
Audiobooks, redefined
For authors, voice cloning technology streamlines the audiobook production process by eliminating the need to invest time and effort into recording their own voice in a studio.
Correcting minor imperfections
Generative AI-powered synthetic speech will assist actors and actresses in correcting minor imperfections in their recorded films. Voice cloning can also be used to generate new dialogue for characters in older productions.
Advertisements
In the future, personalized audio advertisements will be available on Spotify or YouTube, featuring your first name, last name, and preferences, and even the voice of your favorite artist.
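As an illustration, here is a minimal sketch of how such personalization could work: an ad script template is filled in with listener data and then handed to a text-to-speech step. The synthesize_with_cloned_voice function is a hypothetical placeholder, not a real vendor API, and the listener data shown is invented for the example.

    # Minimal sketch: fill an ad template with listener data, then hand it to a (hypothetical) cloned-voice TTS step.

    AD_TEMPLATE = (
        "Hi {first_name} {last_name}! Since you love {interest}, "
        "here is an offer picked just for you."
    )

    def build_ad_script(listener: dict) -> str:
        """Fill the ad template with one listener's profile data."""
        return AD_TEMPLATE.format(
            first_name=listener["first_name"],
            last_name=listener["last_name"],
            interest=listener["favorite_genre"],
        )

    def synthesize_with_cloned_voice(text: str, voice_id: str) -> bytes:
        """Hypothetical placeholder for a voice-cloning TTS service call."""
        raise NotImplementedError("Replace with a real TTS / voice-cloning API call.")

    listener = {"first_name": "Jarno", "last_name": "Duursma", "favorite_genre": "jazz"}
    print(build_ad_script(listener))
    # audio = synthesize_with_cloned_voice(build_ad_script(listener), voice_id="favorite-artist")  # hypothetical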
Newsreaders
With generative AI-powered voice cloning, a newsreader's cloned voice can read the news 24 hours a day, as long as it is fed new text.
Read a book
Children can have a loved one, such as a parent or grandparent, read a book to them by selecting a read-aloud book on their tablet and choosing that person's cloned voice.
The dark side of voice cloning
It is obvious that voice cloning technology can also be used for criminal purposes. If, in the near future, anyone's voice can be copied and used to make them say anything, this presents enormous opportunities for criminal activity.
Identity fraud
The human voice is important for communication as well as identification. For example, tech blog Gizmodo described a situation where the CEO of a UK energy company thought he was speaking on the phone with the CEO of his parent company in Germany. The caller, with a German accent, convinced him to transfer €220,000 to a Hungarian account. The voice of the German CEO was actually created by an artificial intelligence system.
This type of fraud is likely to occur more frequently in the future. The technology is not yet perfect, but some financial managers may still transfer money based on a voicemail. Voice cloning fraud is still a largely unknown phenomenon, just as phishing was when email was first emerging.
Blackmail or reputation damage
Politicians, celebrities, whistleblowers, journalists, and justice and police employees are naturally vulnerable to blackmail or reputation damage once voice cloning software is perfected. Their voices can be mimicked and words put into their mouths, which can be used deliberately to damage someone's reputation or to blackmail them with the threat of such damage.
Pocket phone call
Causing reputation damage therefore becomes much easier with voice cloning software. For example, it sometimes happens that someone accidentally places a call from a phone in a jacket or trouser pocket. The recipient then unintentionally listens in on the voices and sounds that the caller's microphone picks up.
Evil-doers could generate fake versions of such telephone recordings, send them from an anonymous number, and suggest that the recipient is listening to something not intended for their ears. For example, a journalist receives a “pocket phone call” containing a revelation from a politician. A potential investor in a company receives a “pocket voicemail” in which the current director speaks condescendingly about them. A fake recorded message can be sent via smartphone to a colleague, manager, or acquaintance.
Social engineering
There are many conceivable future scenarios in which fake recordings of someone's voice are misused, for example to extract personal information for a later cyber attack or a direct fraud attempt. "Hey hello, this is Jarno, I'm calling from another phone because my iPhone is broken and now I'm standing at the door of our new office. What's the entrance code again? That's also in my iPhone. Just send it to me, thank you!"
It is clear: this technology brings opportunities but also threats. It's good to be aware of this!
Until the next "Signals from the future"!