
MATLAB Acceleration on NVIDIA Tesla and Quadro GPUs

MATLAB® is a high-level language and interactive environment that enables you to use NVIDIA® GPUs to accelerate AI, deep learning, and other computationally intensive analytics without having to be a CUDA® programmer. Using MATLAB and Parallel Computing Toolbox™, you can run your MATLAB code on NVIDIA GPUs with minimal code changes.
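
As a minimal sketch of what this looks like in practice (assuming Parallel Computing Toolbox and a supported NVIDIA GPU are installed; the matrix size is illustrative), existing code can run on the GPU simply by moving data into GPU memory with gpuArray:

% Move data to the GPU, call a built-in function, and bring the result back
A = gpuArray(rand(4096));   % matrix stored in GPU memory
B = fft(A);                 % fft dispatches to its GPU implementation
C = gather(B);              % copy the result back to host memory

Many built-in MATLAB functions, such as fft above, are overloaded to accept gpuArray inputs, so the surrounding code needs no other changes.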

Introduction to GPU Computing with MATLAB

Develop Deep Learning and Other Computationally Intensive Analytics with GPUs

MATLAB is an end-to-end workflow platform for AI and deep learning development. It provides tools and apps for importing training datasets, visualization and debugging, scaling up CNN training, and deployment.
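
As an illustration of that workflow (the folder name, image size, and network layers below are placeholders, and Deep Learning Toolbox plus Parallel Computing Toolbox are assumed), a small CNN can be trained on a labeled image datastore using a single GPU:

% Import a labeled image dataset from subfolders (path is hypothetical)
imds = imageDatastore('trainingImages', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');

% A small example network for 28x28 grayscale images and 10 classes
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3, 16, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

% Train on a single GPU with training-progress visualization
options = trainingOptions('sgdm', ...
    'MaxEpochs', 4, ...
    'ExecutionEnvironment', 'gpu', ...
    'Plots', 'training-progress');

net = trainNetwork(imds, layers, options);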

Scale up to additional compute and GPU resources on desktops, clouds, and clusters with a single line of code.
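
For example, continuing the training sketch above, moving the same job from one GPU to all local GPUs, or to the workers of a parallel pool running on a cluster or in the cloud, is a one-option change in trainingOptions:

% Use every supported GPU on the local machine ...
options = trainingOptions('sgdm', 'ExecutionEnvironment', 'multi-gpu');

% ... or train on the current parallel pool (cluster or cloud workers)
options = trainingOptions('sgdm', 'ExecutionEnvironment', 'parallel');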

Use GPU Coder to generate optimized CUDA code from MATLAB code for deep learning, embedded vision, and autonomous systems. The generated code automatically calls optimized NVIDIA CUDA libraries, including TensorRT, cuDNN, and cuBLAS, to run on NVIDIA GPUs with low latency and high throughput. Integrate the generated code into your project as source code, static libraries, or dynamic libraries, and deploy it to run on NVIDIA GPUs and platforms such as NVIDIA Volta®, NVIDIA Tesla®, NVIDIA Jetson®, and NVIDIA DRIVE®.
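
A rough sketch of a typical invocation follows; the entry-point function myPredict, its input size, and the MEX build target are assumptions for illustration. GPU Coder is driven by a configuration object, and the deep learning target library (TensorRT here) is selected on that configuration:

% Configure a CUDA MEX build that uses TensorRT for deep learning layers
cfg = coder.gpuConfig('mex');
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');

% Generate CUDA code for a hypothetical MATLAB entry-point function
codegen -config cfg myPredict -args {ones(224,224,3,'single')}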


Deploy Generated CUDA Code from MATLAB for Inference Deployment with TensorRT
Featured Tesla Partners and Resellers

A complete list of Tesla Preferred Providers is available on the NVIDIA website.