NVIDIA Confidential Computing

Secure data and AI models in use.

Protect Confidentiality and Integrity of AI Workloads in Use

Data, AI models, and applications in use are vulnerable to external attacks and internal threats, whether deployed on-premises, in the cloud, or at the edge. NVIDIA Confidential Computing, a groundbreaking security feature introduced in the NVIDIA Hopper architecture, mitigates these threats while letting users access the unprecedented acceleration of NVIDIA H100 Tensor Core GPUs for AI workloads. Protect sensitive data and proprietary AI models from unauthorized access with strong, hardware-based security.

The Benefits of NVIDIA Confidential Computing

Hardware-Based Security and Isolation

Achieve full isolation of virtual machines (VMs) on-premises, in the cloud, or at the edge. Data transfers between the CPU and the H100 GPU are encrypted and decrypted at PCIe line rate. A physically isolated trusted execution environment (TEE) with built-in hardware firewalls secures the entire workload on the H100 GPU.

Protection from Unauthorized Access

Protect the confidentiality and integrity of both data and AI workloads in use. Unauthorized entities, including the hypervisor, host OS, cloud provider, and anyone with physical access to the infrastructure, can’t view or modify the AI application and data during execution, protecting sensitive customer data and intellectual property.

Verifiability with Device Attestation

Ensure that only authorized end users can place data and code for execution within the H100’s TEE. In addition, device attestation verifies that the user is talking to an authentic NVIDIA H100 GPU, that firmware hasn’t been tampered with, and that the GPU firmware was updated as expected.
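The attestation flow described above can be sketched as a simplified, self-contained simulation: the verifier supplies a fresh nonce, the device returns a measurement report bound to that nonce, and the verifier checks both freshness and the expected firmware measurement. This is an illustrative stand-in only; real H100 attestation uses hardware-signed reports checked against NVIDIA's attestation services, and the symmetric key and function names below are hypothetical, not NVIDIA APIs.

```python
import hashlib
import hmac
import secrets

# Hypothetical stand-in for a per-device secret. Real devices use an
# asymmetric key rooted in hardware; HMAC keeps this sketch self-contained.
DEVICE_KEY = secrets.token_bytes(32)

def device_attest(nonce: bytes, firmware_hash: bytes) -> bytes:
    """Device side: produce a report over (nonce || firmware measurement)."""
    return hmac.new(DEVICE_KEY, nonce + firmware_hash, hashlib.sha256).digest()

def verifier_check(nonce: bytes, firmware_hash: bytes,
                   report: bytes, expected_fw: bytes) -> bool:
    """Verifier side: recompute the expected report, check freshness
    (the nonce) and that the reported firmware matches what we expect."""
    expected = hmac.new(DEVICE_KEY, nonce + expected_fw, hashlib.sha256).digest()
    return hmac.compare_digest(report, expected) and firmware_hash == expected_fw

nonce = secrets.token_bytes(16)              # fresh nonce prevents replay
fw = hashlib.sha256(b"firmware-v1").digest() # expected firmware measurement
report = device_attest(nonce, fw)
print(verifier_check(nonce, fw, report, fw)) # prints True
```

A report produced for a different firmware image, or replayed with a stale nonce, fails the check, which is the property that lets the user refuse to release data or code to an unverified device.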

Security for Right-Sized GPUs

Protect your entire AI workload running on a single H100 GPU or multiple H100 GPUs within a node. You can also physically isolate and secure AI workloads running on individual MIG instances, enabling confidential computing for multiple tenants on a single H100 GPU and optimizing utilization of infrastructure.

No Application Code Change

In most cases, leverage all the benefits of confidential computing with no code changes to your GPU-accelerated workloads. Use NVIDIA's GPU-optimized software to accelerate end-to-end AI workloads on H100 GPUs while maintaining security, privacy, and regulatory compliance.

Unlock New Possibilities for AI Security

Protect AI Intellectual Property

NVIDIA Confidential Computing preserves the confidentiality and integrity of AI models and algorithms that are deployed on H100 GPUs. Independent software vendors (ISVs) can now distribute and deploy their proprietary AI models at scale on shared or remote infrastructure, including third-party or colocation data centers, edge infrastructure, and public cloud. This enables ISVs in industries like retail and manufacturing to make their AI solutions widely accessible while protecting their intellectual property (IP) from unauthorized access or modification, even from someone with physical access to the deployment infrastructure.

Security for AI Training and Inference

Training AI models to convergence is a computationally intensive, complex, and iterative process that requires operating on massive volumes of data. Once trained, these AI models are integrated within enterprise applications to infer or make predictions about new data that they’re presented with. There are a growing number of industries like finance, healthcare, and public sector where the data used for both AI model training and inference is sensitive and/or regulated, such as personally identifiable information (PII). With NVIDIA Confidential Computing, enterprises can ensure confidentiality of data during AI training and inference—whether on-premises, in the cloud, or at the edge.


Secure Multi-Party Collaboration

Building and improving AI models for use cases like fraud detection, medical imaging, and drug development requires diverse, carefully labeled datasets for training neural networks. This demands collaboration between multiple parties without compromising the confidentiality and integrity of the data sources. NVIDIA Confidential Computing unlocks secure multi-party computation, letting organizations work together to train or evaluate AI models while ensuring that both the data and the AI models are protected from unauthorized access, external attacks, and insider threats at each participating site.

Preliminary specifications; subject to change.

Take a Deep Dive into the NVIDIA Hopper Architecture