Topaz Labs Simplifies AI Deployment With NVIDIA TensorRT


Topaz Labs Simplifies AI Deployment and Accelerates Video Processing With NVIDIA

Objective

Topaz Labs, a leader in AI-powered photo and video enhancement tools, empowers creators to upscale, sharpen, and denoise media with precision. However, deploying high-performance AI inference across millions of diverse PC configurations can create significant operational overhead and bloated application install sizes.

By leveraging NVIDIA® TensorRT™ for RTX™, Topaz Labs shifted to on-device engine generation, reducing installer size and delivering faster processing speeds for users.

Customer

Topaz Labs

Topic

Content Creation / Rendering

Key Takeaways

Faster Video Processing

  • Up to a 20% reduction in processing time to upscale 1,620 frames of video from 1080p to 4K on an NVIDIA GeForce RTX™ 5090 GPU.

Streamlined AI Deployment

  • Deployed a single package to millions of PCs with diverse hardware configurations using on-device engine generation. Previously, this required storing and shipping hundreds of precompiled static engines.

Reduced Application Footprint

  • By eliminating the need to store static inference engines, Topaz Labs reduced engine storage size by 3–4x while delivering optimized performance.

When Static Engines Became a Bottleneck

Topaz Labs faced a complex hurdle in delivering consistent, high-performance AI inference to a user base operating millions of PCs with vastly different hardware specifications. With NVIDIA TensorRT, the company’s traditional inference solution, the team had to pre-generate and ship large, static inference engines for multiple GPU types to ensure its AI models ran effectively across architectures.

This approach created significant operational overhead. Managing thousands of model files across diverse hardware configurations required weeks of precompilation work whenever the team needed to release updates or support new GPU generations. The process increased the complexity of deployment and bloated the software’s install size, making downloads and updates cumbersome for users.

Topaz Labs needed a way to maintain optimal performance across the diverse landscape of GeForce RTX 20, 30, 40, and 50 series GPUs without the development and deployment burden of managing dozens of static engines.

Accelerating Workflows and Reducing Complexity

The implementation of NVIDIA TensorRT for RTX delivered immediate improvements in both application efficiency and user productivity:

  • Significant Speedups: Users experienced a 15%–20% reduction in model run times. In benchmark tests on an NVIDIA GeForce RTX 5090 GPU with an AMD Ryzen 9 9950X CPU, the time to upscale 1,620 frames from 1080p to 4K dropped from 4 hours and 9 minutes to 3 hours and 30 minutes.
  • Operational Efficiency: Topaz Labs successfully eliminated the need to ship dozens of static engines. This reduction in install size makes the software lighter and easier to distribute while simplifying the development pipeline.

  • Scalable Performance: The solution enables Topaz Labs to ship more advanced AI features to its massive install base without inflating the application size, ensuring consistent performance improvements for millions of RTX users.
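Conceptually, on-device engine generation replaces a shipped matrix of precompiled engines with a build-and-cache step at first launch: the application ships one portable model, then compiles and caches a GPU-specific engine locally. The sketch below illustrates only that caching pattern; the `detect_gpu` and `build_engine` helpers are hypothetical stand-ins, not the actual TensorRT for RTX API.

```python
import hashlib
from pathlib import Path

def detect_gpu() -> str:
    """Hypothetical helper: return an identifier for the local GPU."""
    return "GeForce RTX 5090"  # stand-in; a real app would query the driver

def build_engine(model_bytes: bytes, gpu: str) -> bytes:
    """Hypothetical stand-in for on-device engine compilation."""
    # Placeholder payload; real compilation happens in the inference library.
    return hashlib.sha256(model_bytes + gpu.encode()).digest()

def load_or_build_engine(model_path: Path, cache_dir: Path) -> bytes:
    """Ship one portable model; build a GPU-specific engine once, then reuse it."""
    model_bytes = model_path.read_bytes()
    gpu = detect_gpu()
    # The cache key ties the engine to both the model contents and the GPU,
    # so a model update or a GPU swap triggers a fresh build.
    key = hashlib.sha256(model_bytes + gpu.encode()).hexdigest()[:16]
    cached = cache_dir / f"{model_path.stem}-{key}.engine"
    if cached.exists():                      # warm start: reuse the local engine
        return cached.read_bytes()
    engine = build_engine(model_bytes, gpu)  # cold start: compile for this GPU
    cache_dir.mkdir(parents=True, exist_ok=True)
    cached.write_bytes(engine)
    return engine
```

The design point is that only the portable model travels in the installer; every GPU-specific artifact is produced and stored on the user's machine, which is what removes the need to ship hundreds of precompiled engines.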

Building the Next Wave of Creative Tools

With TensorRT for RTX now integrated into Topaz Video, the development team can ship new AI-powered features faster and more efficiently than ever before. The simplified deployment process—eliminating the need to precompile thousands of static engines—allows Topaz Labs to bring more creative tools to photographers and video editors without increasing download sizes or slowing release cycles.

As NVIDIA continues to release new RTX GPU generations, Topaz Labs is positioned to support the latest hardware on day one, ensuring that creators across RTX 20, 30, 40, and 50 series GPUs can take advantage of performance improvements immediately. The collaboration also opens the door for Topaz Labs to explore more advanced AI models, from enhanced diffusion-based restoration to real-time video enhancements, with the confidence that TensorRT for RTX can scale to meet performance and deployment requirements.

Both Topaz Labs and NVIDIA are committed to further optimizing their offerings, bringing faster load times, better performance, and support for newer models in upcoming releases for creators leveraging the power of RTX GPUs and advanced AI inference.

"Our team’s successful integration of TensorRT for RTX demonstrates the library’s effectiveness in providing substantial inference acceleration with minimal integration efforts. This collaboration not only enhances the current capabilities of Topaz Video but also paves the way for exciting future developments. We sincerely appreciate NVIDIA’s efforts in this direction of delivering performance with portable inference with libraries like TensorRT for RTX."

Dr. Suraj Raghuraman
Head of AI Engine, Topaz Labs

NVIDIA TensorRT for RTX empowers developers to deliver high-performance AI on Windows PCs with ease.

Related Customer Stories