NVIDIA Omniverse Audio2Face

Omniverse Audio2Face is an AI-powered application that generates expressive facial animation from just an audio source.

Easy Facial Animation through AI

Audio2Face simplifies animation of a 3D character to match any voice-over track, whether you’re animating characters for a game, film, real-time digital assistants, or just for fun. You can use the app for interactive real-time applications or as a traditional facial animation authoring tool. Run the results live or bake them out, it’s up to you.

How It Works

Omniverse Audio2Face App is based on an original NVIDIA Research paper. Audio2Face comes preloaded with “Digital Mark”— a 3D character model that can be animated with your audio track, so getting started is simple: just select your audio and upload it into the app. The technology feeds the audio input into a pre-trained deep neural network, and the network’s output drives the 3D vertices of your character mesh in real time to create the facial animation. You can also adjust various post-processing parameters to refine the character’s performance. The results you see on this page are mostly raw output from Audio2Face, with few or no post-processing parameters edited.
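The data flow described above can be sketched in a few lines. This is a hypothetical illustration only — the real network architecture, feature extraction, and mesh sizes are not public here, so the feature extractor, the linear “network” stand-in, and all sizes below are assumptions:

```python
import numpy as np

SAMPLE_RATE = 16_000       # assumed input audio rate
WINDOW = SAMPLE_RATE // 2  # assumed half-second audio window per frame
N_VERTS = 1_000            # toy mesh size, not the real character
N_FEATURES = 32            # toy feature-vector size

rng = np.random.default_rng(0)

# Stand-in for the pre-trained deep neural network: a fixed linear map
# from an audio feature vector to per-vertex 3D offsets.
W = rng.normal(scale=0.01, size=(N_FEATURES, N_VERTS * 3))

def audio_features(window: np.ndarray) -> np.ndarray:
    """Toy feature extractor: mean energy in N_FEATURES sub-bands."""
    bands = np.array_split(window ** 2, N_FEATURES)
    return np.array([b.mean() for b in bands])

def animate_frame(neutral: np.ndarray, window: np.ndarray) -> np.ndarray:
    """One animation frame: the network output drives the mesh vertices."""
    offsets = audio_features(window) @ W
    return neutral + offsets.reshape(N_VERTS, 3)

neutral_mesh = rng.normal(size=(N_VERTS, 3))
audio = rng.normal(size=WINDOW)  # stand-in for one recorded audio window
frame = animate_frame(neutral_mesh, audio)
print(frame.shape)  # (1000, 3)
```

The point is only the pipeline shape: each audio window produces one set of vertex offsets, applied on top of the neutral mesh every frame.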

(Note:  Direct blendshape support will be released at a later date)

Omniverse Audio2Face App
 
Audio2Face generates facial animation through audio input.

AUDIO INPUT

Use a Recording, or Animate Live

Simply record a voice audio track, input it into the app, and watch your 3D face come alive. You can even generate facial animation live through a microphone.

Audio2Face is built to process any language, and we’re continually adding and testing more. Check out these tests in English, French, Italian, and Russian.
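Whether the audio comes from a recording or a live microphone, it can be consumed the same way: as a stream of fixed-size windows, one per animation frame. A minimal sketch, assuming a 16 kHz sample rate and 30 animation frames per second (both values are assumptions, not the app’s actual settings):

```python
import numpy as np

SAMPLE_RATE = 16_000
FPS = 30
HOP = SAMPLE_RATE // FPS  # samples consumed per animation frame

def audio_windows(samples: np.ndarray, hop: int = HOP):
    """Yield successive hop-sized windows, the way a live stream
    would deliver audio one animation frame at a time."""
    for start in range(0, len(samples) - hop + 1, hop):
        yield samples[start:start + hop]

one_second = np.zeros(SAMPLE_RATE, dtype=np.float32)
windows = list(audio_windows(one_second))
print(len(windows))  # 30
```

A recorded track just runs the same loop over a file’s samples instead of a microphone buffer.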

CHARACTER TRANSFER

Make Any Face Come to Life

Audio2Face lets you retarget to any 3D human or human-esque face, whether realistic or stylized. Watch this test as we retarget from Digital Mark to Rain.
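One common way to think about retargeting is delta transfer: express the animation as per-vertex offsets from the source character’s neutral pose, then apply those offsets to the target character through a vertex-correspondence map. The actual Audio2Face transfer method is not detailed here, so treat this as an illustrative sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
N_SRC, N_TGT = 500, 400  # toy mesh sizes

src_neutral = rng.normal(size=(N_SRC, 3))
src_frame = src_neutral + rng.normal(scale=0.05, size=(N_SRC, 3))
tgt_neutral = rng.normal(size=(N_TGT, 3))

# correspondence[i] = source vertex matched to target vertex i
# (a random map here; a real one would come from mesh fitting)
correspondence = rng.integers(0, N_SRC, size=N_TGT)

def retarget(src_neutral, src_frame, tgt_neutral, corr):
    deltas = src_frame - src_neutral  # what the source face is doing
    return tgt_neutral + deltas[corr]  # apply it to the target mesh

tgt_frame = retarget(src_neutral, src_frame, tgt_neutral, correspondence)
print(tgt_frame.shape)  # (400, 3)
```

A useful sanity check on this scheme: when the source is at its neutral pose, the target stays at its own neutral pose, so each character keeps its identity.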

 
 
Use multiple instances to generate facial animation for more than one character.

MULTIPLE INSTANCES

Solo Act or a Choir

It’s easy to run multiple instances of Audio2Face with as many characters in a scene as you like - all animated from the same or different audio tracks. Breathe life and sound into a dialogue between a duo, a sing-off between a trio, an in-sync quartet - and beyond.

EMOTION CONTROL

Bring the Drama

Audio2Face gives you the ability to choose and animate your character’s emotions - the network automatically manipulates the face, eyes, mouth, tongue, and head motion to match your selected emotional range.

Feature coming soon.

Add emotions to your animation.
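Since the feature is described as upcoming, the mechanics are not public; one plausible reading is that each emotion contributes a per-vertex offset pose, blended into the audio-driven frame by user-chosen weights. The names and the additive blending scheme below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N_VERTS = 300  # toy mesh size

audio_frame = rng.normal(size=(N_VERTS, 3))  # audio-driven animation frame
emotion_poses = {
    # hypothetical per-vertex offset poses, one per emotion
    "joy": rng.normal(scale=0.02, size=(N_VERTS, 3)),
    "anger": rng.normal(scale=0.02, size=(N_VERTS, 3)),
}

def apply_emotions(frame, poses, weights):
    """Add weighted emotion offsets on top of the audio-driven frame."""
    out = frame.copy()
    for name, w in weights.items():
        out += w * poses[name]
    return out

blended = apply_emotions(audio_frame, emotion_poses, {"joy": 0.7, "anger": 0.1})
print(blended.shape)  # (300, 3)
```

With all weights at zero, the output is exactly the audio-driven frame, so emotion control layers on top of the base animation rather than replacing it.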

See Audio2Face in Action

Creatures and Aliens

Drive facial animation of fantastical creatures and aliens. Here we have Digital Mark driving the performance of the Alien.

Misty the Animated Chat Bot

Presented at GTC Spring 2020, Misty is an interactive weather bot driven by Audio2Face at runtime. We demonstrated retargeting from a realistic human mesh to a stylized character mesh for use as an interactive service agent.

Omniverse Machinima

Unveiled during the GeForce 30 Series launch, Audio2Face is featured in the Omniverse Machinima demo. Facial animation is notoriously complex and cost-prohibitive; Audio2Face automates detailed facial animation to democratize 3D content creation.

GET STARTED WITH OMNIVERSE AUDIO2FACE

Omniverse Audio2Face works on any NVIDIA RTX device, from the laptop to the data center. Download Omniverse to begin.

1. Download NVIDIA Omniverse and run the installation.

2. Once installed, open the Omniverse Launcher.

3. Find Audio2Face in the Apps section and click Install, then Launch.

Get a Deeper Look at Audio2Face

Watch this technical overview presented at GTC 2020.

Dive into the Latest Advancements in Virtual Collaboration and Simulation

See how Omniverse can transform your 3D character animation workflow in our top GTC sessions of 2021, now available on-demand.

System Requirements

Element                    | Minimum Specifications
OS Supported               | Windows 10 64-bit, version 1909 and higher
CPU                        | Intel i7 or AMD Ryzen, 2.5 GHz or greater
CPU Cores                  | 4 or higher
RAM                        | 16 GB or higher
Storage                    | 500 GB SSD or higher
GPU                        | Any NVIDIA RTX GPU
VRAM                       | 6 GB or higher
Min. Video Driver Version  | 455.28 (Linux), 456.71 (Windows)
Note: Omniverse is built to run on any RTX-powered machine. For ideal performance, we recommend a GeForce RTX 2080, Quadro RTX 5000, or higher. For the latest drivers, visit the NVIDIA driver downloads page.

Support

Enter the Omniverse

Deep dive into the platform through a series of GTC webinars hosted by Omniverse experts in various industries.

More Omniverse Apps

Omniverse Apps are purpose-built to enhance specific workflows. See them in action.

View

Collaboratively review design projects with this powerful, physically-accurate and photorealistic visualization tool.

Create

Accelerate advanced world building and interactively assemble, simulate, and render scenes in Pixar USD in real-time.

Kaolin

Simplify and accelerate 3D deep learning research using NVIDIA’s Kaolin PyTorch library with this powerful visualization tool.

Machinima

Remix, recreate, and redefine animated video game storytelling with an AI-powered toolkit for gamers.

Isaac Sim

Import, build, and test robots in a photorealistic and high-fidelity physics 3D environment.

DRIVE Sim

Leverage a simulation experience for autonomous vehicle development that is virtually indistinguishable from reality.