Together, NVIDIA and Google Cloud are helping organizations solve data challenges faster—without massive expenditures or complex infrastructure management. Leverage NVIDIA GPUs on Google Cloud to accelerate deep learning, analytics, scientific simulation, and other high-performance computing (HPC) workloads, and NVIDIA® RTX Virtual Workstations to accelerate rendering, simulation, and graphics-intensive workloads from anywhere.
NVIDIA AI Enterprise is a secure, end-to-end, cloud-native suite of AI software that enables organizations to solve new challenges while increasing operational efficiency. It accelerates the data science pipeline and streamlines the development, deployment, and management of predictive AI models to automate essential processes and gain rapid insights from data. It includes an extensive library of full-stack software: AI solution workflows, frameworks, pretrained models, and infrastructure optimization tools. Global enterprise support and regular security reviews ensure business continuity and keep AI projects successful and on track. The NVIDIA AI Enterprise marketplace offer on Google Cloud includes a virtual machine image (VMI) that provides a standard, optimized runtime for easy access to the NVIDIA AI Enterprise software and ensures development compatibility between clouds and on-premises infrastructure. Develop once, run anywhere.
Google Cloud Anthos is an application modernization platform powered by Kubernetes. For customers looking for a hybrid architecture and dealing with high on-prem demand, Anthos is designed to combine the ease of getting started in the cloud with the security of an on-premises solution. It’s available as a hybrid platform for NVIDIA GPU workloads in the cloud, on premises, and at the edge.
Anthos is now available for both bare-metal and vSphere virtualized deployments. It supports NVIDIA DGX™ systems as well as servers equipped with NVIDIA T4, V100, or A100 Tensor Core GPUs. Depending on your application needs and server infrastructure, you can choose the best configuration for optimal deployment.
View User Guide for NVIDIA GPUs with Google Cloud Anthos
NVIDIA DGX A100 is the world’s leading AI system purpose-built for the unique demands of enterprise. Now organizations can build a hybrid AI cloud that delivers easy access to computing power that spans their existing DGX on-prem infrastructure in combination with NVIDIA GPUs within Google Cloud. Google Cloud Anthos on NVIDIA DGX A100 lets organizations complement the deterministic, unparalleled performance of their dedicated DGX system infrastructure with the simplicity and elasticity of cloud AI compute.
Read Blog: How to Avoid Speed Bumps and Stay in the AI Fast Lane with Hybrid Cloud Infrastructure (November 30, 2020)
NVIDIA® A100 delivers unprecedented acceleration at every scale for AI, data analytics, and high-performance computing (HPC) to tackle the world’s toughest computing challenges. As the engine of the NVIDIA data center platform, A100 can efficiently scale to thousands of GPUs or, with NVIDIA Multi-Instance GPU (MIG) technology, be partitioned into as many as seven isolated GPU instances to accelerate workloads of all sizes. And third-generation Tensor Cores accelerate every precision for diverse workloads, speeding time to insight and time to market.
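As a sketch of how MIG partitioning works in practice, the commands below assume an A100 with a MIG-capable driver; the profile ID shown corresponds to the 1g.5gb profile on a 40 GB A100 and may differ on other configurations:

```shell
# Enable MIG mode on GPU 0 (may require a GPU reset; a sketch, not a verified recipe)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles available on this GPU
sudo nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances (profile ID 19 on a 40 GB A100)
# and their default compute instances with -C
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C
```

Each resulting MIG instance appears to CUDA applications as an independent GPU with its own memory and compute slice, so several smaller workloads can share one A100 without interfering with each other.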
A100 Performance on Altair’s ultraFluidX™ (PDF 503 KB)
Listen to the Kubernetes Podcast from Google on Accelerators and GPUs (31:00 Minutes)
NVIDIA A100
Listen to the Google Podcast on A100 with NVIDIA’s Bryan Catanzaro (42:46 Minutes)
NVIDIA T4
Listen to the Google Podcast on T4 with NVIDIA’s Ian Buck and Kari Briski (35:56 Minutes)
NVIDIA V100
NGC provides simple access to pre-integrated and GPU-optimized containers for deep learning software, HPC applications, and HPC visualization tools that take full advantage of NVIDIA A100, V100, P100, and T4 GPUs on Google Cloud Platform. It also offers pretrained models and scripts to build optimized models for common use cases like classification, detection, text-to-speech, and more. Now, you can deploy production-quality, GPU-accelerated software in just minutes.
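For example, pulling and running an NGC container on a GPU-equipped Google Cloud VM might look like the following (the container tag is illustrative, so check the NGC catalog for current versions; the VM needs Docker and the NVIDIA Container Toolkit installed):

```shell
# Pull a GPU-optimized TensorFlow container from the NGC registry
# (tag is an example; browse ngc.nvidia.com for the latest)
docker pull nvcr.io/nvidia/tensorflow:23.03-tf2-py3

# Run it interactively with access to all GPUs on the VM
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:23.03-tf2-py3
```

The same container runs unchanged on-premises or in other clouds, which is what makes the "deploy in minutes" claim practical.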
NVIDIA TensorRT™ is a high-performance deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications. Optimize neural network models, calibrate for lower precision with high accuracy, and deploy models to Google Cloud. And because it’s tightly integrated with TensorFlow, you get TensorFlow’s flexibility with TensorRT’s powerful optimizations.
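As one concrete example of the TensorRT optimization step, the `trtexec` command-line tool that ships with TensorRT can build an optimized inference engine from an ONNX model (file names here are placeholders; this is a sketch of one common workflow, not the only path—TF-TRT integrates the same optimizations directly into TensorFlow):

```shell
# Build a TensorRT engine from an ONNX model, enabling FP16 precision
# for lower latency (model.onnx / model.plan are placeholder names)
trtexec --onnx=model.onnx --saveEngine=model.plan --fp16
```

The saved engine can then be loaded by the TensorRT runtime on a matching GPU for low-latency serving.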
NVIDIA GPUs in Google Kubernetes Engine turbocharge compute-intensive applications like machine learning, image processing, and financial modeling by scaling to hundreds of GPU-accelerated instances. Package your GPU-accelerated applications into containers and benefit from the massive processing power of Google Kubernetes Engine and NVIDIA A100, V100, T4, P100 or P4 GPUs whenever you need them, without having to manage hardware or virtual machines (VMs).
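A minimal sketch of requesting a GPU in Google Kubernetes Engine, assuming a cluster with a T4 node pool and NVIDIA drivers installed (the pod name and image tag are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod                    # illustrative name
spec:
  containers:
  - name: cuda-container
    image: nvcr.io/nvidia/cuda:12.0.0-base-ubuntu22.04   # illustrative tag
    resources:
      limits:
        nvidia.com/gpu: 1          # request one GPU for this container
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-t4    # GKE accelerator label
```

Kubernetes schedules the pod onto a GPU node and exposes the device to the container, so the application code needs no VM- or hardware-specific changes.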
NVIDIA Quadro Virtual Workstations for GPU-accelerated graphics enable creative and technical professionals to maximize their productivity from anywhere by accessing the most demanding professional design and engineering applications from the cloud. Designers and engineers now have the flexibility to run virtual workstations on NVIDIA T4, V100, P100, and P4 GPUs directly from Google Cloud or from the Google Cloud Platform marketplace, which includes support for Windows Server 2016, Windows Server 2019, and Ubuntu 18.04.
AI is the most important technology development of our time, with the greatest potential to help society. As the world’s leading cloud providers deploy the world’s best AI platform with NVIDIA GPUs and software, we’ll see amazing breakthroughs in medicine, autonomous transportation, precision manufacturing, and much more.
– Jensen Huang, Founder and CEO, NVIDIA
NVIDIA is a strategic partner for Google Cloud and we are excited for them to innovate on behalf of customers.
– Tim Hockin, Principal Software Engineer, Google Cloud
[GPUs] with Kubernetes provide a powerful, cost-effective, and flexible environment for enterprise-grade machine learning. Ocado chose Kubernetes for its scalability, portability, strong ecosystem, and huge community support... It also has great ease-of-use and the ability to attach GPUs to provide a huge boost over traditional CPUs.
– Martin Nikolov, Research Software Engineer, Ocado