NVIDIA Omniverse Audio2Face

Instantly create expressive facial animation from just an audio source using generative AI.

Audio-to-Animation Made Easy With Generative AI

NVIDIA Audio2Face beta is a foundation application for animating a 3D character's facial features to match any voice-over track, whether for a game, film, real-time digital assistant, or just for fun. You can use the Universal Scene Description (OpenUSD)-based app for interactive real-time applications or as a traditional facial animation authoring tool. Run the results live or bake them out; it's up to you.

How It Works

Audio2Face is preloaded with "Digital Mark", a 3D character model that can be animated with your audio track, so getting started is simple: just select your audio and upload it. The audio input is fed into a pretrained deep neural network, and the output drives the 3D vertices of your character mesh to create the facial animation in real time. You also have the option to adjust various post-processing parameters to refine the performance of your character. The results you see on this page are mostly raw outputs from Audio2Face with little to no post-processing applied.
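To make that inference loop concrete, here is a minimal conceptual sketch in Python of the window-by-window flow described above. It is not the actual Audio2Face implementation or API; `model.predict`, the window size, and the sample rate are illustrative assumptions.

    import numpy as np

    # Conceptual sketch only; Audio2Face's real network and API differ.
    WINDOW = 8000  # assumption: 0.5 s of 16 kHz mono audio per inference window

    def animate(audio, neutral_vertices, model):
        """Slide a half-overlapping window over the track; each window yields
        per-vertex offsets that deform the neutral face mesh for one frame."""
        frames = []
        for start in range(0, len(audio) - WINDOW, WINDOW // 2):
            offsets = model.predict(audio[start:start + WINDOW])  # (V, 3) deltas
            frames.append(neutral_vertices + offsets)
        return frames  # one deformed mesh per animation frame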

Omniverse Audio2Face App

Audio2Face generates facial animation from audio input.

Audio Input

Use a Recording, or Animate Live

Simply record a voice audio track, feed it into the app, and watch your 3D face come alive. You can even generate facial animations live using a microphone.

Audio2Face is designed to process any language with ease, and language support continues to expand with updates.
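As a sketch of the live-microphone path, the snippet below captures audio in Python with the third-party sounddevice library and forwards it in small chunks. `push_audio_chunk` is a hypothetical stand-in for whatever transport feeds your Audio2Face instance; the sample rate and chunk size are assumptions.

    import sounddevice as sd  # third-party: pip install sounddevice

    SAMPLE_RATE = 16000   # assumption: 16 kHz mono input
    CHUNK = 1600          # 100 ms of audio per chunk

    def push_audio_chunk(chunk):
        """Hypothetical stand-in for the transport that feeds Audio2Face's
        streaming audio input (e.g. a local-socket or gRPC client)."""

    def stream_microphone(seconds):
        def callback(indata, frames, time_info, status):
            if status:
                print(status)
            push_audio_chunk(indata[:, 0].copy())  # forward the mono channel

        with sd.InputStream(samplerate=SAMPLE_RATE, channels=1,
                            blocksize=CHUNK, callback=callback):
            sd.sleep(int(seconds * 1000))  # stream for the requested duration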

Character Transfer

Face-Swap in an Instant

Audio2Face lets you retarget to any 3D human or human-esque face, whether realistic or stylized. This makes swapping characters on the fly—whether human or animal—take just a few clicks.
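Conceptually, retargeting amounts to mapping one face's deformation onto another mesh. The toy sketch below illustrates that idea with a bare vertex-correspondence map; the actual character-transfer tool fits the target mesh interactively and is considerably more sophisticated.

    import numpy as np

    def retarget(source_offsets, correspondence, scale=1.0):
        """Toy retargeting: copy each animation delta from the source face to
        the target face. correspondence[i] holds the source-vertex index that
        drives target vertex i (an illustrative assumption, not the real
        algorithm, which fits the target mesh rather than copying deltas)."""
        return scale * source_offsets[correspondence]

    # Usage: target_frame = target_neutral + retarget(source_deltas, corr)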

 
Animate any character face with Audio2Face.

Use multiple instances to generate facial animation for more than one character.

Scale Output

Express Yourself—or Everyone at Once

It’s easy to run multiple instances of Audio2Face with as many characters in a scene as you like, all animated from the same or different audio tracks. Breathe life and sound into dialogue between a duo, a sing-off between a trio, an in-sync quartet, and beyond. Plus, you can dial the level of facial expression up or down on each face and batch-output multiple animation files from multiple audio sources.
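A batch workflow like this can be driven by a simple loop. In the sketch below, `bake_take` is a hypothetical placeholder for however your pipeline invokes Audio2Face (interactively or as a service); the pairing logic is the point.

    from pathlib import Path

    def bake_take(character_usd, audio_path, out_dir):
        """Hypothetical placeholder: load the character, attach the audio
        track, run Audio2Face, and export the baked animation."""

    def batch(characters, audio_dir, out_dir):
        # Pair every character with every track, mirroring the
        # multi-instance, multi-track workflow described above.
        out_dir = Path(out_dir)
        out_dir.mkdir(parents=True, exist_ok=True)
        for character in characters:
            for audio_path in sorted(Path(audio_dir).glob("*.wav")):
                bake_take(character, audio_path, out_dir)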

Emotion Control

Bring the Drama

Audio2Face lets you choose and animate your character’s emotions in the blink of an eye. The AI network automatically manipulates the face, eyes, mouth, tongue, and head motion to match your selected emotional range and customized level of intensity, or it can infer emotion directly from the audio clip.
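One plausible way to think about these controls is as a normalized emotion vector scaled by a global intensity, which conditions the network's output. The sketch below is illustrative only; the emotion names and parameters are assumptions, not Audio2Face's actual control set.

    import numpy as np

    EMOTIONS = ["neutral", "joy", "anger", "sadness", "fear"]  # illustrative set

    def emotion_vector(weights, intensity):
        """Normalize per-emotion weights, then scale by a 0-1 intensity to get
        the conditioning vector an animation network could consume."""
        vec = np.array([weights.get(name, 0.0) for name in EMOTIONS])
        if vec.sum() > 0:
            vec /= vec.sum()
        return np.clip(intensity, 0.0, 1.0) * vec

    # e.g. emotion_vector({"joy": 0.7, "anger": 0.3}, intensity=0.8)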

 
Add emotions to your animation.

Data Conversion

Connect and Convert

The latest update to Audio2Face adds blendshape conversion and blendweight export options. The app also supports blendshape export and import with Blender and Epic Games’ Unreal Engine, generating motion for characters through their respective Omniverse Connectors.
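For reference, applying exported blendweights is the standard blendshape sum: the neutral mesh plus a weighted combination of per-shape deltas. Here is a minimal NumPy sketch of that math; engines such as Blender and Unreal apply the same formula to imported weights.

    import numpy as np

    def apply_blendshapes(neutral, shapes, weights):
        """One animation frame from blendweights: neutral vertices (V, 3)
        plus the weighted sum of blendshape deltas from shapes (S, V, 3)
        and weights (S,)."""
        deltas = shapes - neutral                        # per-shape offsets
        return neutral + np.tensordot(weights, deltas, axes=1)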

Dive into Step-by-Step Tutorials

Announcements

NVIDIA Drives Next Wave of Digital Humans With New Generative AI Microservices

NVIDIA ACE is now generally available in the cloud and in early access for RTX AI PCs, and it is in use by companies across customer service, gaming, and healthcare.

Download NVIDIA Audio2Face Authoring Application

Frequently Asked Questions

Become Part of Our Community

Access Tutorials

Take advantage of hundreds of free tutorials, sessions, and our beginner’s training to get started with USD.

Become an Omnivore

Join our community! Attend our weekly live streams on Twitch and connect with us on Discord and our forums.

Get Technical Support

Having trouble? Post your questions in the forums for quick guidance from Omniverse experts, or refer to the platform documentation.

Showcase Your Work

Created an Omniverse masterpiece? Submit it to the Omniverse Gallery, where you can get inspired and inspire others.

The Design and Simulation Conference for the Era of AI and the Metaverse

Connect your creative worlds to a universe of possibility with NVIDIA Omniverse.

  • An Artist's Omniverse: How to Build Large-Scale, Photoreal Virtual Worlds

    • Gabriele Leone, Senior Art Director, NVIDIA

    Hear from NVIDIA's expert environment artists and see how 30 artists built an iconic multi-world demo in three months. Dive into a workflow featuring Adobe Substance 3D Painter, Photoshop, Autodesk 3ds Max, Maya, Blender, Modo, Maxon ZBrush, SideFX Houdini, and NVIDIA Omniverse Create, and see how the artists delivered a massive scene that showcases the latest in NVIDIA RTX, AI, and physics technologies.


  • Next Evolution of Universal Scene Description (USD) for Building Virtual Worlds

    • Aaron Luk, Senior Engineering Manager, Omniverse, NVIDIA

    Universal Scene Description is more than just a file format. This open, powerful, easily extensible world-composition framework provides APIs for creating, editing, querying, rendering, simulating, and collaborating within virtual worlds. NVIDIA continues to invest in helping evolve USD for workflows beyond Media & Entertainment to enable the industrial metaverse and the next wave of AI. Join this session to see why we are "all in" on USD, hear about our USD development roadmap, and learn about recent projects and initiatives at NVIDIA and across our ecosystem of partners.


  • Foundations of the Metaverse: The HTML for 3D Virtual Worlds

    • Michael Kass, Senior Distinguished Engineer, NVIDIA
    • Rev Lebaredian, VP Simulation Technology and Omniverse Engineering, NVIDIA
    • Guido Quaroni, Senior Director of Engineering of 3D & Immersive, Adobe
    • Steve May, Vice President, CTO, Pixar
    • Mason Sheffield, Director of Creative Technology, Lowe’s Innovation Labs, Lowe's
    • Natalya Tatarchuk, Distinguished Technical Fellow and Chief Architect, Professional Artistry & Graphics Innovation, Unity
    • Matt Sivertson, Vice President and Chief Architect, Media & Entertainment, Autodesk
    • Mattias Wikenmalm, Senior Expert, Volvo Cars

    Join this session to hear from a panel of distinguished technical leaders as they talk about Universal Scene Description (USD) as a standard for the 3D evolution of the internet—the metaverse. These luminaries will discuss why they are investing in or adopting USD, and what technological advancements need to come next to see its true potential unlocked.


  • How to Build Simulation-Ready USD 3D Assets

    • Renato Gasoto, Robotics & AI Engineer, NVIDIA 
    • Beau Perschall, Director, Omniverse Sim Data Ops, NVIDIA

    The next wave of industries and AI requires physically accurate virtual worlds that are indistinguishable from reality. Creating virtual worlds is hard, and today's universe of 3D assets is inadequate, capturing only an object's visual appearance. Whether you're building digital twins or virtual worlds for training and testing autonomous vehicles or robots, 3D assets need many more technical properties, which demands novel processes, techniques, and tools. NVIDIA is introducing a new class of 3D assets called "SimReady" assets: the building blocks of virtual worlds. SimReady assets are more than 3D objects; they encompass accurate physical properties, behavior, and connected data streams, built on Universal Scene Description (USD). We'll show you how to get started with SimReady USD assets and present the tools and techniques required to develop and test them (see the USD sketch after this session list).


  • How Spatial Computing is Going Hyperscale

    • Omer Shapira, Senior Engineer, Omniverse, NVIDIA

    Recent advances in compute pipelines have enabled leaps in body-centered technology such as fully ray-traced virtual reality (VR). At the same time, network bottlenecks have shrunk to the point that streaming pixels directly from data centers to HMDs is a reality. Join this talk to explore the potential of body-centered computing at data-center scale, and the applications, experiences, and new science it enables.

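As a taste of the OpenUSD authoring these sessions discuss, here is a minimal Python sketch that goes beyond visuals by attaching physics properties to an asset with the UsdPhysics schemas, in the spirit of the SimReady assets described above. It assumes a USD build that includes UsdPhysics; the asset and prim names are illustrative.

    from pxr import Usd, UsdGeom, UsdPhysics

    # Minimal OpenUSD sketch: author an asset that carries physical
    # properties, not just a visual representation.
    stage = Usd.Stage.CreateNew("crate.usda")      # illustrative file name
    xform = UsdGeom.Xform.Define(stage, "/Crate")
    cube = UsdGeom.Cube.Define(stage, "/Crate/Geom")

    # Mark the prim as a rigid body with collision geometry and a mass.
    UsdPhysics.RigidBodyAPI.Apply(xform.GetPrim())
    UsdPhysics.CollisionAPI.Apply(cube.GetPrim())
    mass = UsdPhysics.MassAPI.Apply(xform.GetPrim())
    mass.CreateMassAttr(12.5)  # kilograms (illustrative value)

    stage.GetRootLayer().Save()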


Connect With Us

Stay up-to-date on the latest NVIDIA Omniverse news.