Meet VALL-E, Microsoft’s AI that can mimic anyone’s voice in just 3 seconds

First there was the sentient (albeit fictional) trash compactor robot WALL-E, whose name went on to inspire the AI image generator DALL-E. Now there's a new AI in town: VALL-E, Microsoft's AI-powered neural codec language model that is scarily good at synthesising human voices.

Revealed earlier this week by Microsoft researchers, VALL-E builds on EnCodec, a neural audio codec previously introduced by Meta. It works quite differently from regular text-to-speech tools, though: while most of today's text-to-speech systems synthesise speech by manipulating waveforms directly, VALL-E generates discrete audio codec codes from both text and acoustic prompts. In practice, you let VALL-E listen to a sample of a person talking (as short as three seconds), and it analyses the way their voice sounds and breaks it down into what the researchers call 'acoustic tokens'.
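The "acoustic tokens" come from the codec's residual vector quantisation step, which turns each short frame of audio into a small stack of integers. Below is a minimal toy sketch of that idea in Python with NumPy; the codebook sizes and frame dimension are illustrative stand-ins, not EnCodec's actual implementation.

```python
import numpy as np

def residual_vector_quantize(frame, codebooks):
    """Quantise one embedded audio frame into a stack of discrete codes,
    one per codebook, by encoding successive residuals."""
    codes = []
    residual = frame
    for cb in codebooks:
        # pick the codebook entry nearest to the current residual
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        codes.append(idx)
        # the next codebook quantises whatever error is left over
        residual = residual - cb[idx]
    return codes

rng = np.random.default_rng(0)
# illustrative sizes: 8 codebooks of 1024 entries, 8-dim frame embeddings
codebooks = [rng.normal(size=(1024, 8)) for _ in range(8)]
frame = rng.normal(size=8)  # one embedded audio frame
tokens = residual_vector_quantize(frame, codebooks)
print(tokens)  # eight integer "acoustic tokens" for this single frame
```

A three-second clip becomes a sequence of many such token stacks, which is what a language model like VALL-E can then learn to predict.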

Given these acoustic tokens, you can then feed VALL-E a text prompt, and it will generate an audio clip that speaks the prompt while preserving the speaker's vocal patterns. It can also closely imitate the acoustic environment of the sample audio, and even produce variations of the sample voice by tweaking the prompts used when generating the result.
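Conceptually, the generation step is an autoregressive loop: the model conditions on the prompt clip's acoustic tokens plus the text, then emits new codec tokens that a decoder turns back into audio. The sketch below is a hypothetical illustration of that flow only; the function name, the `ord`-based text encoding, and the random next-token stand-in are all made up, not Microsoft's actual model.

```python
import random

def synthesize(text, prompt_tokens, vocab_size=1024, steps=20, seed=0):
    """Toy autoregressive loop: condition on the short prompt's acoustic
    tokens plus the encoded text, then emit new codec tokens one by one.
    A real system would replace the random draw with a trained model."""
    rng = random.Random(seed)
    # context = speaker's acoustic tokens + a crude encoding of the text
    context = list(prompt_tokens) + [ord(c) % vocab_size for c in text]
    out = []
    for _ in range(steps):
        nxt = rng.randrange(vocab_size)  # stand-in for model inference
        context.append(nxt)  # generated tokens feed back into the context
        out.append(nxt)
    return out  # discrete codes a codec decoder would turn into audio

codes = synthesize("hello world", prompt_tokens=[3, 512, 77])
print(len(codes))  # 20
```

Because the speaker's own tokens sit in the context, the generated tokens inherit the voice and recording conditions of the prompt clip.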


According to the researchers, VALL-E could one day power text-to-speech applications far better than the ones available today. It could also be used for audio content creation when paired with other AI models such as GPT-3, or for speech editing, tweaking recordings of a person's speech or conversation. Thankfully, Microsoft is seemingly not opening it up to the public for now, which is probably a good thing given how easily VALL-E could be abused for harmful purposes.

The researchers also added that they're exploring building a detection model that can tell whether an audio clip is real or a VALL-E generation:

“Since VALL-E could synthesize speech that maintains speaker identity, it may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker. To mitigate such risks, it is possible to build a detection model to discriminate whether an audio clip was synthesized by VALL-E. We will also put Microsoft AI Principles into practice when further developing the models,” – Microsoft VALL-E researchers

If you're interested in learning more about VALL-E, you can check out its demo page on GitHub. The researchers have provided more details about VALL-E there, along with a bunch of different VALL-E samples that you can play and listen to yourself.
