Computer-generated voices are getting more realistic every single day. In this article, I’ll explore how the technology evolved, where it’s used today, and how it could be misused in the future.
You might not realise it, but computer-generated voices are everywhere, from voice assistants like Siri to the text-to-speech features in apps like TikTok. With AI advancing so quickly, computer-generated voices are now remarkably realistic, but are they becoming too realistic?
Today, computer-generated voices are mostly used for accessibility. Text-to-speech in iOS’s VoiceOver feature lets blind users browse the Internet and use social media like Facebook and Twitter. The telecommunications industry relies on it too: every time you call a helpdesk and a robot answers, that’s the same technology at work. The most impressive use so far is definitely Google Duplex, which can call and book appointments for you with a computer voice that sounds eerily natural.
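In fact, basic text-to-speech is so commoditised that you can try it in a few lines of code. Here’s a minimal sketch in Python using the open-source pyttsx3 library; to be clear, this is just an illustration of my own, not the engine behind VoiceOver, helpdesk systems, or Duplex.

```python
# Minimal text-to-speech sketch using the open-source pyttsx3 library.
# It drives whatever speech engine your operating system provides;
# the values here are illustrative, not from any specific product.
import pyttsx3

engine = pyttsx3.init()          # pick the platform's default speech engine
engine.setProperty("rate", 170)  # speaking rate, in words per minute
engine.say("Hello! This sentence is spoken by a computer.")
engine.runAndWait()              # block until playback finishes
```

Run that and you’ll hear exactly the kind of flat, serviceable voice we’ll come back to in a moment.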
To understand how we got here, let’s go back a couple of centuries to see how the technology developed.
The first case of synthesised speech can be traced all the way back to 1779, when the scientist Christian Gottlieb Kratzenstein built a model of the human vocal tract that could produce the five long vowel sounds.
Fast-forward to the electronic age and we get speech-synthesiser chips, like the one inside the Speak & Spell toy, that could actually say words, even if they sounded quite robotic. Then we reach the digital age, with computer-generated pop stars like Hatsune Miku and startlingly convincing celebrity deepfakes.
You might ask, “How did these voices get so good?” The answer lies in emotion. As humans, we don’t speak perfectly; there are plenty of imperfections. We pause to take breaths, or say “uhm” and “ah”. So the key to making computer speech sound realistic is adding those imperfections back in.
Sonantic is a company that specialises in expressive AI voices. Siri and TikTok’s text-to-speech take only the words as input, so the results sound pretty flat. Sonantic, on the other hand, lets you specify the emotion, pacing, and even the pitch contour of each line. Add in breaths and background music, and it’s almost impossible to tell the computer from a human.
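To give you a concrete idea of this kind of control, most commercial engines expose pacing, pitch, and pauses through SSML (Speech Synthesis Markup Language), a W3C standard. The sketch below shows what marking up a line can look like; the exact tags supported vary from engine to engine, the values are illustrative, and Sonantic’s own tooling may well work differently.

```python
# A sketch of SSML markup for shaping delivery. SSML is a W3C standard
# supported (with variations) by most cloud text-to-speech services;
# nothing here is taken from any specific product.
ssml = """
<speak>
  <prosody rate="90%" pitch="-2st">I wasn't sure you'd come.</prosody>
  <break time="400ms"/>  <!-- a breath-length pause -->
  <prosody rate="slow" volume="soft">Uhm... but I'm glad you did.</prosody>
</speak>
"""
# This string would be sent to a TTS engine's synthesis endpoint in place
# of plain text, so the imperfections get rendered, not just the words.
print(ssml)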
At this point, I think it’s clear that this technology is not something to be taken lightly. Even though AI voices can be used for good, they can be quite dangerous in the wrong hands.
Two years ago, scammers used AI to mimic a CEO’s voice and tricked his company into wiring them about a million ringgit. More recently, people have been mimicking celebrities without their permission, too. In a documentary about the late chef Anthony Bourdain, the filmmaker used AI to make Bourdain say words he had written but never spoken aloud, which sparked plenty of discussion about the ethics of it all.
We can’t stop the technology from progressing, but what we can do is learn to adapt and develop policies to prevent the abuse of AI voices.
So, what do you think? Are you optimistic about the future of AI voices, or scared of what it might bring? Let us know in the comments section!