Topaz Labs, a leader in AI-powered photo and video enhancement tools, empowers creators to upscale, sharpen, and denoise media with precision. However, deploying high-performance AI inference across millions of diverse PC configurations can create significant operational overhead and bloated application install sizes.
By leveraging NVIDIA® TensorRT™ for RTX™, Topaz Labs shifted to on-device engine generation, reducing installer size and delivering faster processing speeds for users.
Topaz Labs
Content Creation / Rendering
Faster Video Processing
Streamlined AI Deployment
Reduced Application Footprint
Topaz Labs faced a complex hurdle in delivering consistent, high-performance AI inference to a user base spanning millions of PCs with vastly different hardware specifications. Using NVIDIA TensorRT, the company’s traditional inference solution, the team had to pre-generate and ship large, static inference engines for multiple GPU types to ensure its AI models ran effectively across architectures.
This approach created significant operational overhead. Managing thousands of model files across diverse hardware configurations required weeks of precompilation work whenever the team needed to release updates or support new GPU generations. The process increased the complexity of deployment and bloated the software’s install size, making downloads and updates cumbersome for users.
Topaz Labs needed a way to maintain optimal performance across the diverse landscape of GeForce RTX 20, 30, 40, and 50 series GPUs without the development and deployment burden of managing dozens of static engines.
The implementation of NVIDIA TensorRT for RTX delivered immediate improvements in both application efficiency and user productivity:
Scalable Performance: The solution enables Topaz Labs to ship more advanced AI features to its massive install base without inflating the application size, ensuring consistent performance improvements for millions of RTX users.
With TensorRT for RTX now integrated into Topaz Video, the development team can ship new AI-powered features faster and more efficiently than ever before. The simplified deployment process—eliminating the need to precompile thousands of static engines—allows Topaz Labs to bring more creative tools to photographers and video editors without increasing download sizes or slowing release cycles.
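The deployment shift described above can be sketched as a build-once, cache-per-device pattern: instead of shipping a pre-built engine for every GPU generation, the application compiles an engine on the user's machine the first time a model runs, then reuses it. The sketch below is illustrative only; it is not Topaz Labs' code or the TensorRT for RTX API, and `build_fn` is a hypothetical stand-in for the library's on-device compile step.

```python
import hashlib
from pathlib import Path

def engine_cache_key(model_path: str, gpu_name: str, driver_version: str) -> str:
    """Derive a cache key tying a compiled engine to the model and the local GPU."""
    digest = hashlib.sha256()
    digest.update(Path(model_path).read_bytes())   # model weights/graph
    digest.update(gpu_name.encode())               # e.g. detected GPU name
    digest.update(driver_version.encode())         # invalidate on driver change
    return digest.hexdigest()[:16]

def load_or_build_engine(model_path, gpu_name, driver_version, cache_dir, build_fn):
    """Return an inference engine for this GPU, building it on first run.

    build_fn stands in for the on-device compile step (in practice, a
    just-in-time build by the inference library); here it is any callable
    that takes a model path and returns serialized engine bytes.
    """
    cache_dir = Path(cache_dir)
    cache_dir.mkdir(parents=True, exist_ok=True)
    key = engine_cache_key(model_path, gpu_name, driver_version)
    engine_file = cache_dir / f"{key}.engine"
    if engine_file.exists():
        return engine_file.read_bytes()            # fast path: reuse cached engine
    engine = build_fn(model_path)                  # slow path: compile once on-device
    engine_file.write_bytes(engine)
    return engine
```

Because the cache key includes the GPU identity, the same installer serves every RTX generation: each machine pays a one-time compile cost instead of the vendor shipping every engine variant to every user.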
As NVIDIA continues to release new RTX GPU generations, Topaz Labs is positioned to support the latest hardware on day one, ensuring that creators across RTX 20, 30, 40, and 50 series GPUs can take advantage of performance improvements immediately. The collaboration also opens the door for Topaz Labs to explore more advanced AI models, from enhanced diffusion-based restoration to real-time video enhancements, with the confidence that TensorRT for RTX can scale to meet performance and deployment requirements.
Both Topaz Labs and NVIDIA are committed to further optimizing their offerings, with upcoming releases bringing faster load times, better performance, and support for newer models to creators leveraging the power of RTX GPUs and advanced AI inference.
"Our team’s successful integration of TensorRT for RTX demonstrates the library’s effectiveness in providing substantial inference acceleration with minimal integration efforts. This collaboration not only enhances the current capabilities of Topaz Video but also paves the way for exciting future developments. We sincerely appreciate NVIDIA’s efforts in this direction of delivering performance with portable inference with libraries like TensorRT for RTX."
Dr. Suraj Raghuraman
Head of AI Engine, Topaz Labs
NVIDIA TensorRT for RTX empowers developers to deliver high-performance AI on Windows PCs with ease.