NVIDIA DLSS: Control and Beyond

By Andrew Edelsten, Technical Director of Deep Learning at NVIDIA | August 30, 2019

With the Turing architecture, we set out to change gaming with two big leaps in graphics: real-time ray tracing and AI super resolution, aka DLSS. Ray tracing provides the next generation of visual fidelity; DLSS boosts performance so you can enjoy that fidelity at higher frame rates.

Control, released this week by Remedy Entertainment and 505 Games, showcases breathtaking ray tracing effects. Delivering these effects, however, requires more computation per pixel than any previous method.

Achieving great frame rates in games like Control means new approaches to rendering. Ours is super resolution: rendering fewer, richer pixels, then using advanced reconstruction techniques to output at higher resolutions with great image quality.
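To make the pixel budget concrete, here is a small numpy sketch (illustrative resolutions only, not Control's actual pipeline). A deliberately naive nearest-neighbour upscale stands in for the far smarter reconstruction that super resolution actually performs; the point is simply how much shading work rendering at a lower resolution saves:

```python
import numpy as np

# 1080p has 2.25x the pixels of 720p, so shading a 720p frame costs
# roughly 2.25x less than shading natively at 1080p.
low_res, native = (1280, 720), (1920, 1080)
savings = (native[0] * native[1]) / (low_res[0] * low_res[1])

# Naive upscale: nearest-neighbour replication by index mapping.
# Super resolution replaces this step with a much smarter filter.
frame = np.random.rand(720, 1280, 3).astype(np.float32)  # fake 720p frame
scale = 1080 / 720  # 1.5x in each dimension
ys = (np.arange(1080) / scale).astype(int)  # source row for each output row
xs = (np.arange(1920) / scale).astype(int)  # source col for each output col
upscaled = frame[ys][:, xs]
print(savings, upscaled.shape)  # 2.25 (1080, 1920, 3)
```

The interesting engineering is entirely in what replaces that last indexing step.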

Next Generation Games Need Super Resolution

Ray tracing, physics, animation, and higher resolution displays dramatically drive up GPU computing needs. Ray tracing alone can require many times the computing horsepower of traditional rasterization techniques. Dedicated ray tracing hardware, such as Turing’s RT Cores, is key to addressing these new computing demands.

But even RT Cores aren’t always enough to handle maximum-fidelity graphics at 4K+ resolutions. An additional approach is needed: one that focuses computing horsepower on fewer, more beautiful pixels, then uses super resolution techniques to present them at native display resolutions with pristine image quality.

An Image Processing Approach to Super Resolution

One of the core challenges of super resolution is preserving details in the image while also maintaining temporal stability from frame to frame. The sharper an image, the more likely you’ll see noise, shimmering, or temporal artifacts in motion.

During our research, we found that certain temporal artifacts can be used to infer details in an image. Imagine an artifact we’d normally classify as a “bug” being used instead to fill in lost image details. With this insight, we started working on a new AI research model that used these artifacts to recreate details that would otherwise be lost from the final frame.
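The insight, that sub-pixel motion between frames carries real information, can be illustrated with a toy 1D numpy example (entirely hypothetical, not NVIDIA's model): two half-rate frames whose sample grids are offset against each other jointly contain the full-rate signal.

```python
import numpy as np

# A detailed 1D "scene" signal standing in for a high-resolution image row.
hi = np.sin(np.linspace(0, 8 * np.pi, 64))

# Each frame samples every other point, with the sample grid shifted by
# one sample on alternate frames -- the kind of sub-pixel jitter that
# normally shows up on screen as shimmering.
frame_even = hi[0::2]  # samples at offsets 0, 2, 4, ...
frame_odd = hi[1::2]   # samples at offsets 1, 3, 5, ...

# Interleaving the two jittered frames recovers the full-rate signal.
recon = np.empty_like(hi)
recon[0::2] = frame_even
recon[1::2] = frame_odd
assert np.allclose(recon, hi)
```

Real frames move in complex ways, of course, which is why recombining that scattered information well is a job for a learned model rather than a fixed interleave.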

This AI research model has made tremendous progress and produces very high image quality. However, we have work to do to optimise the model's performance before bringing it to a shipping game.

Leveraging this AI research, we developed a new image processing algorithm that approximated our AI research model and fit within our performance budget. This image processing approach to DLSS is integrated into Control, and it delivers up to 75% faster frame rates.
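To read that claim in frame-time terms (the 40 fps baseline is a made-up example, not a Control benchmark), "up to 75% faster" means each frame takes about 1/1.75 of its native-resolution cost:

```python
# Illustrative only: what "up to 75% faster frame rates" means for
# per-frame budgets. The baseline figure is a hypothetical example.
base_fps = 40.0
dlss_fps = base_fps * 1.75       # 75% more frames per second
base_ms = 1000.0 / base_fps      # 25.0 ms per frame
dlss_ms = 1000.0 / dlss_fps      # ~14.3 ms per frame
print(f"{base_fps:.0f} fps -> {dlss_fps:.0f} fps")
print(f"{base_ms:.1f} ms -> {dlss_ms:.1f} ms per frame")
```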

Let’s look at the example video below. The left side uses Control’s in-engine scaling; the right side shows DLSS. Both sides render at 720p and output at 1080p. Notice how DLSS brings out more detail and improves temporal stability, reducing flickering and shimmering.


Video of In-engine Scaling (Left) vs. DLSS (Right)

While the image processing algorithm is a good solution for Control, the approximation falls short in handling certain types of motion. Let’s look at an example of native 1080p vs. 1080p DLSS in Control. Notice how the flames on the right are not as well defined as in native resolution.


Video of 1080p Native (Left) vs. DLSS (Right)

Clearly, there’s opportunity for further advancement.

Deep Learning: The Future of Super Resolution

Deep learning holds incredible promise for the next chapter of super resolution. Hand-engineered algorithms are fast and can do a fair job of approximating AI, but they’re brittle and prone to failure in corner cases.

Take self-driving cars, for instance. For years, engineers tried to hand-code every potential scenario an automobile could find itself in. But this approach failed given the massive number of variables. Even locating cars reliably in a video is a tough task, since they can look different depending on distance, lighting, colour, and shape. Self-driving cars needed AI to progress.

While self-driving cars operate in a complex physical world, games run in large virtual worlds. Accurately anticipating the colour and motion of every pixel across many games is too difficult to code by hand. Rather than solve the problem via infinite conditional statements, deep learning learns the algorithm from the data.

Deep learning-based super resolution learns from tens of thousands of beautifully rendered image sequences, rendered offline on a supercomputer at very low frame rates with 64 samples per pixel. Deep neural networks are trained to recognise what these beautiful images look like, and to reconstruct them from lower-resolution, lower-sample-count inputs. The networks integrate incomplete information across lower-resolution frames to create smooth, sharp video without ringing or temporal artifacts such as twinkling and ghosting.
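The data side of such a training setup can be sketched roughly as follows (the downsampling function and array shapes are illustrative assumptions, not NVIDIA's pipeline): pair each high-quality frame, standing in for a 64-samples-per-pixel offline render, with a degraded version the network must learn to reconstruct from.

```python
import numpy as np

def downsample_2x(frame: np.ndarray) -> np.ndarray:
    """Average 2x2 pixel blocks -- a crude stand-in for the
    lower-resolution, lower-sample-count frames seen at inference."""
    h, w, c = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

rng = np.random.default_rng(0)
# Eight fake 128x128 RGB "ground truth" frames in place of real renders.
ground_truth = rng.random((8, 128, 128, 3), dtype=np.float32)

# (network input, training target) pairs.
pairs = [(downsample_2x(f), f) for f in ground_truth]
lo, hi = pairs[0]
print(lo.shape, hi.shape)  # (64, 64, 3) (128, 128, 3)
```

A network trained on enough such pairs learns the reconstruction mapping directly from data, rather than from hand-written rules.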

There are many other examples of how deep learning is used to create super resolution images and video, generate new frames of video, or transfer an artist’s style from one image to the next. Before Turing, none of this was possible in real time. With Turing’s Tensor Cores, 110 teraflops of dedicated horsepower can be applied to real-time deep learning.

Let’s look at an example of our image processing algorithm vs. our AI research model. The video below shows a cropped Unreal Engine 4 scene of a forest fire with moving flames and embers. Notice how the image processing algorithm blurs the movement of flickering flames and discards most flying embers. In contrast, you’ll notice that our AI research model captures the fine details of these moving objects.


Video of Image Processing Algorithm (Left) vs. AI Research Model (Right)

With further optimisation, we believe AI will clean up the remaining artifacts in the image processing algorithm while keeping FPS high.

More Innovation To Come

The increasing computing demands of next-generation, ray-traced content require clever approaches, such as super resolution, to deliver great frame rates. The new DLSS techniques available in Control are our best yet. We’re also continuing to invest heavily in AI super resolution to deliver the next level of image quality.

Our next step is optimising our AI research model to run at higher FPS. Turing’s 110 Tensor teraflops are ready and waiting for this next round of innovation. When it arrives, we’ll deploy the latest enhancements to gamers via our Game Ready Drivers.

Thanks for your feedback to date as we work to break new ground in computer graphics with DLSS and AI super resolution. Try out Control, and let us know what you think on our feedback page. The game looks amazing and we believe you’ll love playing it.