Surrealistic Joe Rogan AI Voice

Until now, most synthetic voices have sounded a little robotic and noticeably fake. But this AI-generated voice of MMA-commentator-turned-podcaster Joe Rogan might be the most realistic AI voice clone you’ve ever heard.

Joe’s voice was created by the artificial intelligence company Dessa. The machine learning specialists behind Dessa’s RealTalk system trained their models on recordings of Joe’s podcast (more than 1,300 episodes are already online) to learn his voice patterns, prosody, and style.

“Right now, technical expertise, ingenuity, computing power and data are required to make models like RealTalk perform well,” says the company. “But in the next few years (or even sooner), we’ll see the technology advance to the point where only a few seconds of audio is needed to create a life-like replica of anyone’s voice on the planet.”
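Dessa never released RealTalk’s code or models, but the “few seconds of audio” scenario the company describes already exists in open-source form. The sketch below is a minimal illustration using the Coqui TTS library and its XTTS v2 voice-cloning model (our choice of tooling, not Dessa’s); the file names are placeholders.

```python
# A minimal voice-cloning sketch using the open-source Coqui TTS library
# and its XTTS v2 model -- an illustrative stand-in, not Dessa's RealTalk,
# whose code and models were never released. File paths are placeholders.
# Install with: pip install TTS
from TTS.api import TTS

# Download and load a multilingual model that supports zero-shot voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the voice from a short reference clip (a few seconds of clean speech)
# and synthesize an entirely new sentence in that voice.
tts.tts_to_file(
    text="This sentence was never spoken by the person you are hearing.",
    speaker_wav="reference_clip.wav",  # placeholder: your reference recording
    language="en",
    file_path="cloned_output.wav",
)
```

Note that a zero-shot model like this trades quality for convenience: a system trained on hundreds of hours of one speaker, as RealTalk reportedly was, will sound far more convincing than a clone built from a few seconds of audio.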

Technologies like this allow brands to create even more realistic AI assistants. But this only adds value to the customer experience if you extensively research how your assistant should speak and sound, because that shapes how your brand is perceived. If you manage to translate your brand values and personality into audible associations, it will drastically enhance recall, build trust, and create preference.

This technology also holds a beautiful opportunity for people with speech disorders who use a computerized device to communicate. Until recently, they had to choose between only a few voice options. That’s why Stephen Hawking had an American accent, and why many people end up with the same voice, often to incongruous effect. Once the technology can build a life-like replica of a voice from just a few seconds of audio, these users will be able to create a voice that is uniquely theirs.

But let’s not forget that it’s kind of creepy too. Dessa has already given some examples of what might happen if the technology got into the wrong hands:

  • Spam callers impersonating your mother or spouse to obtain personal information
  • Impersonating someone for the purposes of bullying or harassment
  • Gaining entrance to high-security clearance areas by impersonating a government official
  • An ‘audio deep fake’ of a politician being used to manipulate election results or cause a social uprising
