NVIDIA at CVPR 2021

Connect with NVIDIA researchers at this year’s Computer Vision and Pattern Recognition (CVPR) online conference to learn more about the amazing work we're doing. Come see how NVIDIA Research collaborates with CVPR members to deliver AI breakthroughs across the community.

It all happens June 19–25, 2021.

Congratulations to the Contest Winners!

Rodolfo V.
Sophia A.
Andrew S.
Rahul D.

Niveditha K.
Andrew J.
Ishani V.
Grace L.

Thank you to all 706 participants who entered the drawing.

Mission AI Possible: NVIDIA Researchers Stealing the Show

Roll out of bed, fire up the laptop, turn on the webcam — and look picture-perfect in every video call, with the help of AI developed by NVIDIA researchers.

NVIDIA Vid2Vid Cameo, one of the deep learning models behind the NVIDIA Maxine SDK for video conferencing, uses generative adversarial networks (GANs) to synthesize realistic talking-head videos from a single 2D image of a person.

Trail Blazing our Future with AI and NVIDIA Research

In this chapter of I AM AI, we explore how NVIDIA researchers around the globe are bettering our world through their life’s work in AI research. Get a glimpse of the passion, dedication, and curiosity that drive these brilliant minds, and see the breadth of ways AI is advancing humanity.

Schedule at a Glance

  • Presentations
  • Workshops and Tutorials

NVIDIA’s 28 accepted papers at this year’s online CVPR feature a range of groundbreaking research in the field of computer vision. From simulating dynamic gaming environments to powering coarse-to-fine neural architecture search for medical imaging, explore the work NVIDIA is bringing to the CVPR community.

Monday 6/21 Tuesday 6/22 Wednesday 6/23 Thursday 6/24 Friday 6/25
Neural Parts: Learning Expressive Shape Abstractions with Invertible Neural Networks
Despoina Paschalidou, Angelos Katharopoulos, Andreas Geiger, Sanja Fidler 
6 - 8:30 a.m. EST | Paper
DeepTag: An Unsupervised Deep Learning Method for Motion Tracking on Cardiac Tagging Magnetic Resonance Images
Meng Ye, Mikael Kanski, Dong Yang, Qi Chang, Zhennan Yan, Qiaoying Huang, Leon Axel, Dimitris N. Metaxas  
6 - 8:30 a.m. EST | Paper
Neural Geometric Level of Detail: Real-Time Rendering with Implicit Shapes
Towaki Takikawa, Joey Litalien, Kangxue Yin, Karsten Kreis, Charles T Loop, Alec Jacobson, Derek Nowrouzezahrai, Morgan McGuire, Sanja Fidler  
6 - 8:30 a.m. EST | Paper
See Through Gradients: Image Batch Recovery via GradInversion
Hongxu Yin, Arun Mallya, Arash Vahdat, Jose M. Alvarez, Jan Kautz, Pavlo Molchanov  
6 - 8:30 a.m. EST | Paper
Minimally Invasive Surgery for Sparse Neural Networks in Contrastive Manner
Chong Yu 
6 - 8:30 a.m. EST | Paper
Self-Supervised Learning of Depth Inference for Multi-view Stereo
Jiayu Yang, Jose M. Alvarez, Miaomiao Liu
6 - 8:30 a.m. EST | Paper
Normalized Avatar Synthesis Using StyleGAN and Perceptual Refinement
Huiwen Luo, Koki Nagano, Han-Wei Kung, Qingguo Xu, Zejian Wang, Lingyu Wei, Liwen Hu, Hao Li 
6 - 8:30 a.m. EST | Paper
Self-Supervised Learning on Point Clouds by Learning Discrete Generative Models
Benjamin Eckart, Wentao Yuan, Chao Liu, Jan Kautz
6 - 8:30 a.m. EST | Paper
Optimal Quantization Using Scaled Codebook
Yerlan Idelbayev, Pavlo Molchanov, Maying Shen, Hongxu Yin, Miguel Á. Carreira-Perpiñán, Jose M. Alvarez
6 - 8:30 a.m. EST | Paper
Semantic Segmentation with Generative Models: Semi-supervised Learning and Strong Out-of-Domain Generalization
Daiqing Li, Junlin Yang, Karsten Kreis, Antonio Torralba, Sanja Fidler
6 - 8:30 a.m. EST | Paper
Over-the-Air Adversarial Flickering Attacks against Recognition Networks
Roi Pony, Itay Naeh, Shie Mannor
11 a.m. - 1:30 p.m. EST | Paper
Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets
Yuan-Hong Liao, Amlan Kar, Sanja Fidler 
11 a.m. - 1:30 p.m. EST | Paper
Learning Continuous Image Representation with Local Implicit Image Function
Yinbo Chen, Sifei Liu, Xiaolong Wang
11 a.m. - 1:30 p.m. EST | Paper
Binary TTC: A Temporal Geofence for Autonomous Navigation
Abhishek Badki, Orazio Gallo, Jan Kautz, Pradeep Sen
11 a.m. - 1:30 p.m. EST | Paper
RGB-D Local Implicit Function for Depth Completion of Transparent Objects
Luyang Zhu, Arsalan Mousavian, Yu Xiang, Hammad Mazhar, Jozef van Eenbergen, Shoubhik Debnath, Dieter Fox 
11 a.m. - 1:30 p.m. EST | Paper
Learning to Track Instances without Video Annotations
Yang Fu, Sifei Liu, Umar Iqbal, Shalini De Mello, Humphrey Shi, Jan Kautz
11 a.m. - 1:30 p.m. EST | Paper
Deep Two-View Structure-from-Motion Revisited
Jianyuan Wang, Yiran Zhong, Yuchao Dai, Stan Birchfield, Kaihao Zhang, Nikolai Smolyanskiy, Hongdong Li
11 a.m. - 1:30 p.m. EST | Paper
DexYCB: A Benchmark for Capturing Hand Grasping of Objects
Yu-Wei Chao, Wei Yang, Yu Xiang, Pavlo Molchanov, Ankur Handa, Jonathan Tremblay, Yashraj S Narang, Karl Van Wyk, Umar Iqbal, Stan Birchfield, Jan Kautz, Dieter Fox  
11 a.m. - 1:30 p.m. EST | Paper
DiNTS: Differentiable Neural Network Topology Search for Large-Scale Medical Image Segmentation
Yufan He, Dong Yang, Holger R Roth, Can Zhao, Daguang Xu 
10 p.m. - 12:30 a.m. EST | Paper
Synthesizing Long-Term 3D Human Motion and Interaction in 3D Scenes
Jiashun Wang, Huazhe Xu, Jingwei Xu, Sifei Liu, Xiaolong Wang 
11 a.m. - 1:30 p.m. EST | Paper
Weakly Supervised Learning of Rigid Scene Flow
Zan Gojcic, Or Litany, Andreas Wieser, Leonidas Guibas, Tolga Birdal 
10 p.m. - 12:30 a.m. EST | Paper
Weakly-Supervised Physically Unconstrained Gaze Estimation
Rakshit S Kothari, Shalini De Mello, Umar Iqbal, Wonmin Byeon, Seonwook Park, Jan Kautz
10 p.m. - 12:30 a.m. EST | Paper
3DIoUMatch: Leveraging IoU Prediction for Semi-Supervised 3D Object Detection
He Wang, Yezhen Cong, Or Litany, Yue Gao, Leonidas Guibas 
10 p.m. - 12:30 a.m. EST | Paper
DriveGAN: Towards a Controllable High-Quality Neural Simulation
Seung Wook Kim, Jonah Philion, Antonio Torralba, Sanja Fidler 
10 p.m. - 12:30 a.m. EST | Paper
One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing
Ting-Chun Wang, Arun Mallya, Ming-Yu Liu 
10 p.m. - 12:30 a.m. EST | Paper
Semi-Supervised Hand-Object Poses Estimation with Interactions in Time
Shaowei Liu, Hanwen Jiang, Jiarui Xu, Sifei Liu, Xiaolong Wang 
10 p.m. - 12:30 a.m. EST | Paper
View Generalization for Single Image Textured Models
Anand Bhattad, Aysegul Dundar, Guilin Liu, Andrew Tao, Bryan Catanzaro 
10 p.m. - 12:30 a.m. EST | Paper
DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort
Yuxuan Zhang, Huan Ling, Jun Gao, Kangxue Yin, Jean-Francois V Lafleche, Adela Barriuso, Antonio Torralba, Sanja Fidler
10 p.m. - 12:30 a.m. EST | Paper

Join NVIDIA at CVPR workshops and tutorials to engage with a diverse team of experts, expand your knowledge, and grow your skills with hands-on training.

Saturday 6/19 Monday 6/21 Tuesday 6/22 Wednesday 6/23 Thursday 6/24 Friday 6/25
LatinX in CV (LXCV) Research
8 a.m. - 7 p.m. EST | Workshop
5th AI City Challenge
Milind Naphade, Shuo Wang, Zheng Tang, David Anastasiu, Ming-Ching Chang, Pranamesh Chakraborty, Liang Zheng, Xiaodong Yang, Anuj Sharma, Stan Sclaroff, Rama Chellappa
8 a.m. - 2 p.m. EST | Workshop

Deep Dive

Fast Track to Production AI with NVIDIA Pre-trained Models and TLT

NVIDIA pre-trained models and the Transfer Learning Toolkit (TLT) significantly reduce the time and cost of large-scale data collection and labeling, and eliminate the burden of training AI/ML models from scratch. The latest release includes new pre-trained models for pose estimation, people detection, and many other computer vision use cases. It also supports training on AWS, GCP, and Azure, as well as out-of-the-box deployment on NVIDIA Triton, the DeepStream SDK, and Jarvis.

NVIDIA Isaac Sim on Omniverse

Discover the only robotics simulation application and synthetic data generation tool that lets you develop, test, and manage robots in photorealistic, physically accurate virtual worlds with unbeatable scalability. The next-gen NVIDIA Isaac Sim is built on NVIDIA Omniverse, accelerating robotics development with the powerful capabilities of the virtual simulation and collaboration platform.

Accelerate 3D Deep Learning Research with Kaolin and Omniverse Kaolin App

The Omniverse Kaolin app is a powerful visualization tool, built on NVIDIA’s Kaolin PyTorch library, that simplifies and accelerates 3D deep learning research. It leverages the Omniverse platform, the USD format, and RTX rendering to provide interactive tools that let you visualize the 3D outputs of any deep learning model as it trains, inspect 3D datasets to find inconsistencies and build intuition, and render large synthetic datasets from collections of 3D data, reducing the time needed to develop AI for a wide range of 3D applications.

NVIDIA Applied Research Accelerator Program

This program supports research projects that have the potential to make a real-world impact through deployment into GPU-accelerated applications adopted by commercial and government organizations. It’s designed to accelerate development and adoption by providing access to technical guidance, hardware, and funding based on project requirements, maturity, and impact.

DRIVE Developer Days now available On-Demand

The annual DRIVE Developer Days event was held during GTC 2021, featuring a series of specialized sessions on autonomous vehicle hardware and software, including perception, mapping, simulation, and more, all led by NVIDIA experts. These sessions are now available to view anytime on NVIDIA On-Demand.

NVIDIA Developer Program

Get the advanced tools and training you need to successfully build applications on all NVIDIA technology platforms.

NVIDIA RESEARCH
AI PLAYGROUND

Discover our most recent AI research and the new capabilities deep learning brings to visual and audio applications. Explore the latest innovations and see how you can bring them into your own work. We'll update this page frequently with new demos and tools.

Inception Partners

LIKE NO PLACE
YOU’VE EVER WORKED

You’ll solve some of the world’s hardest problems and discover never-before-seen ways to improve the quality of life for people everywhere, from healthcare to robotics and from self-driving cars to blockbuster movies, with a growing list of new opportunities every single day. Explore all of our open roles, including internships and new college graduate positions.

Learn more about our career opportunities by exploring current job openings as well as university jobs.

Sign up to receive the latest news from NVIDIA.