NVIDIA Omniverse Audio2Face (Beta)

Instantly create expressive facial animation from just an audio source using generative AI.

Audio-to-Animation Made Easy With Generative AI

Omniverse Audio2Face beta is a reference application that simplifies animating a 3D character to match any voice-over track, whether you’re animating characters for games, films, real-time digital assistants, or just for fun. You can use the app for interactive real-time applications or as a traditional facial animation authoring tool. Run the results live or bake them out; it’s up to you.

How It Works

Audio2Face is preloaded with “Digital Mark,” a 3D character model that can be animated with your audio track, so getting started is simple: just select and upload your audio. The audio input is fed into a pre-trained deep neural network, and the output drives the 3D vertices of your character mesh to create facial animation in real time. You can also adjust various post-processing parameters to refine your character’s performance. The results you see on this page are mostly raw output from Audio2Face, with little to no post-processing applied.
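
As a rough mental model only (this is not the actual Audio2Face interface), the pipeline can be sketched in a few lines of Python, assuming a hypothetical pre-trained network a2f_net that maps a short window of audio samples to per-vertex offsets for the neutral mesh:

```python
import numpy as np

# Conceptual sketch, not the real Audio2Face API. `a2f_net` stands in for the
# pre-trained deep neural network; it is assumed to map a short window of audio
# samples to per-vertex offsets with shape (num_vertices, 3).
SAMPLE_RATE = 16_000      # speech audio resampled to a fixed rate
WINDOW_SECONDS = 0.5      # the network sees a short window around each frame
FPS = 30                  # animation frames per second

def animate(audio: np.ndarray, neutral_vertices: np.ndarray, a2f_net):
    """Yield one deformed copy of the mesh per animation frame."""
    window = int(SAMPLE_RATE * WINDOW_SECONDS)
    step = SAMPLE_RATE // FPS
    for start in range(0, max(len(audio) - window, 0), step):
        chunk = audio[start:start + window]
        offsets = a2f_net(chunk)              # hypothetical inference call
        yield neutral_vertices + offsets      # drive the 3D vertices directly
```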

Audio2Face generates facial animation through audio input.

Audio Input

Use a Recording, or Animate Live

Simply record a voice audio track, feed it into the app, and see your 3D face come alive. You can even generate facial animations live using a microphone.

Audio2Face is designed to process any language with ease, and we’re continually adding support for more languages.
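
For the live path, any standard audio-capture library can supply the stream. Purely as an illustration, the snippet below grabs microphone audio in Python with the sounddevice library; forward_to_audio2face() is a hypothetical placeholder for whatever streaming input mechanism your setup uses (see the Audio2Face documentation for the supported one):

```python
import sounddevice as sd

SAMPLE_RATE = 16_000   # mono 16 kHz is a common choice for speech

def forward_to_audio2face(samples):
    """Hypothetical placeholder: push a chunk of float32 samples into the
    live animation pipeline (replace with the app's streaming input)."""
    pass

def callback(indata, frames, time, status):
    if status:
        print(status)
    forward_to_audio2face(indata[:, 0].copy())   # first (and only) channel

# Stream from the default microphone until the user presses Enter.
with sd.InputStream(samplerate=SAMPLE_RATE, channels=1, callback=callback):
    input("Streaming microphone audio; press Enter to stop.")
```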

Character Transfer

Face-Swap in an Instant

Audio2Face lets you retarget to any 3D human or human-esque face, whether realistic or stylized. This makes swapping characters on the fly—whether human or animal—take just a few clicks.

 
Animate any character face with Audio2Face

Use multiple instances to generate facial animation for more than one character

Scale Output

Express Yourself—or Everyone at Once

It’s easy to run multiple instances of Audio2Face with as many characters in a scene as you like, all animated from the same audio track or from different ones. Breathe life and sound into dialogue between a duo, a sing-off between a trio, an in-sync quartet, and beyond. Plus, you can dial the level of facial expression on each face up or down and batch-output multiple animation files from multiple audio sources.
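
Conceptually, the batch workflow is just a loop over audio clips. Here is a minimal sketch, where process_clip() is a hypothetical placeholder for whatever export mechanism you use (for example, the headless workflow described in the Audio2Face documentation), and the folder and file names are assumptions:

```python
from pathlib import Path

AUDIO_DIR = Path("voice_lines")    # assumed folder of input .wav clips
OUTPUT_DIR = Path("anim_caches")   # assumed folder for exported animation files

def process_clip(audio_path: Path, output_path: Path,
                 expression_strength: float = 1.0) -> None:
    """Hypothetical placeholder: run one clip through Audio2Face and export
    the resulting animation. Replace with your actual export mechanism."""
    raise NotImplementedError

OUTPUT_DIR.mkdir(exist_ok=True)
for clip in sorted(AUDIO_DIR.glob("*.wav")):
    # Dial facial expression up or down per clip if desired.
    process_clip(clip, OUTPUT_DIR / f"{clip.stem}.usd", expression_strength=0.8)
```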

Emotion Control

Bring the Drama

Audio2Face gives you the ability to choose and animate your character’s emotions in the wink of an eye. The AI network automatically adjusts the face, eyes, mouth, tongue, and head motion to match your selected emotional range and customized level of intensity, or it can infer emotion directly from the audio clip.
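
You can think of the intensity controls as weights that blend the output toward different emotional states. The sketch below is a conceptual illustration only, not how Audio2Face actually implements emotion control:

```python
import numpy as np

# Conceptual illustration: each emotion contributes a set of per-vertex offsets,
# scaled by a user-chosen (or audio-inferred) intensity between 0 and 1.
def blend_emotions(neutral_vertices: np.ndarray,
                   emotion_offsets: dict,     # e.g. {"joy": offsets_joy, ...}
                   intensities: dict) -> np.ndarray:
    result = neutral_vertices.astype(float).copy()
    for emotion, offsets in emotion_offsets.items():
        weight = intensities.get(emotion, 0.0)   # e.g. {"joy": 0.7, "anger": 0.1}
        result = result + weight * offsets
    return result
```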

 
Add emotions to your animation.

Data Conversion

Connect and Convert

The latest update to Omniverse Audio2Face adds blendshape conversion and blendshape weight export options. The app also supports blendshape export and import with Blender and Epic Games Unreal Engine, so you can generate motion for characters using their respective Omniverse Connectors.
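
As an example of what this enables on the Blender side, the snippet below applies a per-frame blendshape weight table to a character’s shape keys using Blender’s Python API. The JSON keys ("facsNames", "weightMat"), the file path, and the object name are assumptions for illustration; adjust them to match what your version of Audio2Face actually exports:

```python
import json
import bpy  # run inside Blender's scripting environment

# Assumed object and file names; adjust to your scene and export location.
obj = bpy.data.objects["CharacterHead"]
with open("/tmp/a2f_blendshape_weights.json") as f:
    data = json.load(f)

shape_names = data["facsNames"]        # assumed: one entry per blendshape
weights_per_frame = data["weightMat"]  # assumed: frames x blendshapes

for frame, weights in enumerate(weights_per_frame, start=1):
    for name, value in zip(shape_names, weights):
        key = obj.data.shape_keys.key_blocks.get(name)
        if key is None:
            continue                   # skip blendshapes the mesh doesn't have
        key.value = value
        key.keyframe_insert(data_path="value", frame=frame)
```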

Announcing NVIDIA Omniverse Avatar Cloud Engine Early Access

Join the program and experience how Omniverse Avatar Cloud Engine (ACE) eases avatar development, delivering all the AI building blocks necessary to create, customize and deploy interactive avatars.

See Audio2Face in Action

Creatures and Aliens

Drive facial animation of fantastical creatures and aliens. Here we have Digital Mark driving the performance of the Alien.

Misty the Animated Chatbot

Presented at GTC Spring 2020, Misty is an interactive weather bot driven by Audio2Face at runtime. We demonstrated retargeting from a realistic human mesh to a stylized character mesh for use as an interactive service agent.

Omniverse Machinima

Unveiled during the GeForce 30 series launch, Audio2Face is seen in the Omniverse Machinima demo. Facial animation is notoriously complex and cost-prohibitive. Audio2Face automates detailed facial animation to democratize the 3D content creation process.   

NVIDIA Omniverse is helping me achieve more natural results for my digital humans and speeding up my workflow so that I can spend more time on the creative process.

— Anderson Rohr, 3D Artist

Experience NVIDIA Omniverse Today

Find the right license to fit your 3D workflow and start exploring Omniverse right away.

Creators

Free to Download and Create

Omniverse acts as a central hub to seamlessly connect and enhance 3D creative applications, unifying assets, libraries, and tools for a truly uninterrupted workflow, letting artists achieve new heights of creative freedom.

Developers

Free to Develop and Distribute

Omniverse is built with developers in mind and gives them the ability to customize their 3D workflows at every layer to easily build new Omniverse Connectors, extensions, applications, and microservices.

Enterprise

Free Trial | Annual License

Omniverse Enterprise transforms complex design workflows for organizations of any scale. It enables real-time collaboration across multiple users, locations, and applications, all working from centralized project data.

How to Install Omniverse Audio2Face

Step 1

Download NVIDIA Omniverse and run the installation.

Step 2

Once installed, open the Omniverse Launcher.

Step 3

Find Omniverse Audio2Face in the Apps section and click Install, then Launch.

System Requirements
Element | Minimum Specifications | Recommended
OS Supported | Windows 10 (Version 1903 and above) | Windows 10 (Version 1903 and above)
CPU | Intel Core i5 10th Series or AMD Ryzen 5 5th Series | Intel Core i7 13th Series or AMD Ryzen 7 7th Series
RAM | 16 GB | 32 GB
Storage | 250 GB SSD | 500 GB SSD
GPU | Any RTX GPU with 8 GB | GeForce RTX 4070 Ti, NVIDIA RTX A4500, or higher
Min. Video Driver Version | See the latest drivers here | See the latest drivers here

Dive into Step-by-Step Tutorials

Discover More Omniverse Apps

Discover the full suite of NVIDIA Omniverse Apps and Connectors, or try out our recommendations below.

Create

Accelerate advanced world-building with Pixar USD and interactively assemble, simulate, and render scenes in real time.

Code

An integrated development environment for developers and power users to easily build Omniverse extensions, apps, or microservices.

Machinima

Remix, recreate, and redefine animated video game storytelling with an AI-powered toolkit for creators.

View

Collaboratively review design projects with this powerful, physically accurate, and photorealistic visualization tool.

Become Part of Our Community

Access Tutorials

Take advantage of hundreds of free tutorials and sessions, or try our beginner’s training to get started with USD.

Become an Omnivore

Join our community! Attend our weekly live streams on Twitch and connect with us on Discord and our forums.

Get Technical Support

Having trouble? Post your questions in the forums for quick guidance from Omniverse experts, or refer to the platform documentation.

Showcase Your Work

Created an Omniverse masterpiece? Submit it to the Omniverse Gallery, where you can get inspired and inspire others.

The Developer Conference for the Era of AI and the Metaverse

Catch all the inspiring GTC sessions on-demand.

Connect your creative worlds to a universe of possibility with NVIDIA Omniverse.

  • An Artist's Omniverse: How to Build Large-Scale, Photoreal Virtual Worlds

    • Gabriele Leone, Senior Art Director, NVIDIA

    Hear from NVIDIA's expert environmental artists and see how 30 artists built an iconic multi-world demo in three months. Dive into a workflow featuring Adobe Substance 3D Painter, Photoshop, Autodesk 3ds Max, Maya, Blender, Modo, Maxon ZBrush, SideFX Houdini, and NVIDIA Omniverse Create, and see how the artists delivered a massive scene that showcases the latest in NVIDIA RTX, AI, and physics technologies.

    View Details >

  • Next Evolution of Universal Scene Description (USD) for Building Virtual Worlds

    • Aaron Luk, Senior Engineering Manager, Omniverse, NVIDIA

    Universal Scene Description is more than just a file format. This open, powerful, easily extensible world composition framework has APIs for creating, editing, querying, rendering, simulating, and collaborating within virtual worlds. NVIDIA continues to invest in helping evolve USD for workflows beyond Media & Entertainment, to enable the industrial metaverse and the next wave of AIs. Join this session to see why we're "all in" on USD, hear about our USD development roadmap, and learn about our recent projects and initiatives at NVIDIA and with our ecosystem of partners.

    View Details >

  • Foundations of the Metaverse: The HTML for 3D Virtual Worlds

    • Michael Kass, Senior Distinguished Engineer, NVIDIA
    • Rev Lebaredian, VP Simulation Technology and Omniverse Engineering, NVIDIA
    • Guido Quaroni, Senior Director of Engineering of 3D & Immersive, Adobe
    • Steve May, Vice President, CTO, Pixar
    • Mason Sheffield, Director of Creative Technology, Lowe’s Innovation Labs, Lowe's
    • Natalya Tatarchuk, Distinguished Technical Fellow and Chief Architect, Professional Artistry & Graphics Innovation, Unity
    • Matt Sivertson, Vice President and Chief Architect, Media & Entertainment, Autodesk
    • Mattias Wikenmalm, Senior Expert, Volvo Cars

    Join this session to hear from a panel of distinguished technical leaders as they talk about Universal Scene Description (USD) as a standard for the 3D evolution of the internet—the metaverse. These luminaries will discuss why they are investing in or adopting USD, and what technological advancements need to come next to see its true potential unlocked.

    View Details >

  • How to Build Simulation-Ready USD 3D Assets

    • Renato Gasoto, Robotics & AI Engineer, NVIDIA 
    • Beau Perschall, Director, Omniverse Sim Data Ops, NVIDIA

    The next wave of industries and AI requires us to build physically accurate virtual worlds indistinguishable from reality. Creating virtual worlds is hard, and today's existing universe of 3D assets is inadequate, capturing only the visual appearance of an object. Whether you're building digital twins or virtual worlds for training and testing autonomous vehicles or robots, 3D assets need many more technical properties, which calls for novel processes, techniques, and tools. NVIDIA is introducing a new class of 3D assets called "SimReady" assets: the building blocks of virtual worlds. SimReady assets are more than just 3D objects; they encompass accurate physical properties, behavior, and connected data streams built on Universal Scene Description (USD). We'll show you how to get started with SimReady USD assets and present the tools and techniques required to develop and test them.

    View Details >

  • How Spatial Computing is Going Hyperscale

    • Omer Shapira, Senior Engineer, Omniverse, NVIDIA

    Recent advances in compute pipelines have enabled leaps in body-centered technology such as fully ray-traced virtual reality (VR). Simultaneously, network bottlenecks have decreased to the point that streaming pixels directly from datacenters to HMDs is a reality. Join this talk to explore the potential of body-centered computing at data center-scale—and what applications, experiences, and new science it enables.

    View Details >


Connect With Us

Stay up-to-date on the latest NVIDIA Omniverse news.