NVIDIA Omniverse Audio2Face


Quickly and easily generate expressive facial animation from just an audio source with NVIDIA’s Deep Learning AI technology.

Facial Animation Made Easy

Omniverse Audio2Face beta simplifies animating a 3D character to match any voice-over track, whether you’re animating characters for a game, a film, real-time digital assistants, or just for fun. You can use the app for interactive real-time applications or as a traditional facial animation authoring tool. Run the results live or bake them out; it’s up to you.

How It Works

The Omniverse Audio2Face App is based on an original NVIDIA Research paper. Audio2Face comes preloaded with “Digital Mark,” a 3D character model that can be animated with your audio track, so getting started is simple: just select an audio file and upload it into the app. The audio input is fed into a pre-trained deep neural network, and the network’s output drives the facial animation of your character in real time by moving the 3D vertices of the character mesh. You can adjust various post-processing parameters to fine-tune the character’s performance. The results you see on this page are mostly raw Audio2Face output, with few or no post-processing parameters changed.

(Note:  Direct blendshape support will be released at a later date)
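The data flow described above can be sketched in a few lines of Python. This is a hypothetical illustration only; the function names, the feature extraction, and the "network" below are stand-ins, not the real Audio2Face API or model.

```python
# Illustrative sketch of the Audio2Face pipeline: audio window -> features
# -> pre-trained network -> per-vertex offsets that deform the mesh.
# All names here are hypothetical stand-ins.

def extract_features(audio_window):
    # Stand-in for audio feature extraction: mean amplitude of the window.
    return sum(audio_window) / len(audio_window)

def pretrained_network(feature, num_vertices):
    # Stand-in for the pre-trained deep neural network: maps an audio
    # feature to a displacement for each vertex (along one axis here).
    return [feature * 0.01 * (i + 1) for i in range(num_vertices)]

def animate_frame(base_mesh, audio_window):
    # The network output drives the 3D vertices of the character mesh.
    feature = extract_features(audio_window)
    offsets = pretrained_network(feature, len(base_mesh))
    return [(x, y, z + dz) for (x, y, z), dz in zip(base_mesh, offsets)]

base_mesh = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
frame = animate_frame(base_mesh, audio_window=[0.2, 0.4, 0.6])
```

In the real app the network is far more sophisticated and drives every vertex of a dense facial mesh at interactive rates, but the shape of the loop is the same: per audio window, infer offsets, deform the mesh.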

Omniverse Audio2Face App
Audio2Face generates facial animation through audio input.


Use a Recording, or Animate Live

Simply record a voice-over track, feed it into the app, and watch your 3D face come alive. You can even generate facial animation live through a microphone.

Audio2Face is designed to process any language, and we’re continually adding support for more.


Face-swap in an Instant

Audio2Face lets you retarget to any 3D human or human-esque face, whether realistic or stylized. Watch this test as we retarget from Digital Mark to a Rhino!

Animate any character face with Audio2Face
Use multiple instances to generate facial animation for more than one character


Solo Act or a Choir

It’s easy to run multiple instances of Audio2Face with as many characters in a scene as you like, all animated from the same or different audio tracks. Breathe life and sound into a dialogue between a duo, a sing-off between a trio, an in-sync quartet, and beyond.


Connect and Convert

The latest update to Omniverse Audio2Face adds blendshape conversion and blendweight export options. The app also supports export and import with Epic Games’ Unreal Engine 4, so you can generate motion on MetaHuman characters using the Omniverse Unreal Engine 4 Connector.
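To make the blendshape terminology concrete, here is a minimal sketch of the math a blendshape rig evaluates: the deformed mesh is the neutral mesh plus a weighted sum of per-shape vertex deltas. The names (`apply_blendshapes`, `jawOpen`) are illustrative, not the Audio2Face or Unreal Engine API.

```python
# Blendshape evaluation: deformed = neutral + sum_i w_i * (target_i - neutral).
# Names and data are hypothetical examples.

def apply_blendshapes(neutral, shapes, weights):
    # neutral: list of (x, y, z) vertices
    # shapes: dict of shape name -> list of (x, y, z) target vertices
    # weights: dict of shape name -> float weight, typically in [0, 1]
    result = []
    for i, (x, y, z) in enumerate(neutral):
        dx = dy = dz = 0.0
        for name, target in shapes.items():
            w = weights.get(name, 0.0)
            tx, ty, tz = target[i]
            dx += w * (tx - x)
            dy += w * (ty - y)
            dz += w * (tz - z)
        result.append((x + dx, y + dy, z + dz))
    return result

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
jaw_open = [(0.0, -0.5, 0.0), (1.0, 0.0, 0.0)]  # vertex 0 moves down
mesh = apply_blendshapes(neutral, {"jawOpen": jaw_open}, {"jawOpen": 0.5})
```

A blendweight export writes per-frame weights like these, which the target application then applies to its own matching blendshape targets.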

Add emotions to your animation.


Bring the Drama

Audio2Face gives you the ability to choose and animate your character’s emotions in the wink of an eye. The network automatically manipulates the face, eyes, mouth, tongue, and head motion to match your selected emotional range and customized level of intensity.

Feature coming soon.

See Audio2Face in Action

Creatures and Aliens

Drive facial animation of fantastical creatures and aliens. Here we have Digital Mark driving the performance of the Alien.

Misty the Animated Chatbot

Presented at GTC Spring 2020, Misty is an interactive weather bot driven by Audio2Face at runtime. We demonstrated retargeting from a realistic human mesh to a stylized character mesh for use as an interactive service agent.

Omniverse Machinima

Unveiled during the GeForce 30 Series launch, Audio2Face is featured in the Omniverse Machinima demo. Facial animation is notoriously complex and cost-prohibitive; Audio2Face automates detailed facial animation to help democratize 3D content creation.

Omniverse for Creators

Artists, animators, and designers can experience faster, frictionless workflows by using multiple 3D applications simultaneously within Omniverse, rendering scenes in real-time and at full fidelity, without ever having to import or export between applications.


Omniverse Audio2Face works on any NVIDIA RTX device, from laptop to data center. Download Omniverse to begin.

Download NVIDIA Omniverse


Download NVIDIA Omniverse and run the installation.

Open the Omniverse Launcher


Once installed, open the Omniverse launcher.

Find Audio2Face in the Apps section and click Install


Find Audio2Face in the Apps section and click Install, then Launch.

Get a Deeper Look at Audio2Face

Watch this technical overview presented at GTC 2020.

System Requirements
Element                     Minimum Specifications
OS Supported                Windows 10 64-bit, version 1909 and higher
CPU                         Intel i7 or AMD Ryzen, 2.5 GHz or greater
CPU Cores                   4 or higher
RAM                         16 GB or higher
Storage                     500 GB SSD or higher
VRAM                        6 GB or higher
Min. Video Driver Version   455.28 (Linux), 456.71 (Windows)
Note: Omniverse is built to run on any RTX-powered machine. For ideal performance, we recommend the GeForce RTX 2080, Quadro RTX 5000, or higher. For the latest drivers, visit the NVIDIA driver downloads page.


Enter the Omniverse

Deep dive into the platform through a series of GTC webinars hosted by Omniverse experts in various industries.

More Omniverse Apps

Omniverse Apps are purpose-built to enhance specific workflows. See them in action.



Omniverse View

Collaboratively review design projects with this powerful, physically accurate, photorealistic visualization tool.



Omniverse Create

Accelerate advanced world building and interactively assemble, simulate, and render scenes in Pixar USD in real time.



Omniverse Kaolin

Simplify and accelerate 3D deep learning research using NVIDIA’s Kaolin PyTorch library with this powerful visualization tool.



Omniverse Machinima

Remix, recreate, and redefine animated video game storytelling with an AI-powered toolkit for gamers.

Isaac Sim

Import, build, and test robots in a photorealistic and high-fidelity physics 3D environment.

Drive Sim


Leverage a simulation experience for autonomous vehicle development that is virtually indistinguishable from reality.