Enterprises handle language-related tasks daily, such as text classification, content generation, sentiment analysis, and customer chat support, and they seek to do so as cost-effectively as possible. Large language models (LLMs) can automate these tasks, and efficient LLM customization techniques can extend a model's capabilities while reducing the size of the models required for enterprise applications.
In this course, you'll go beyond prompt engineering and learn a variety of techniques to efficiently customize pretrained LLMs for your specific use cases—without engaging in the computationally intensive and expensive process of pretraining your own model or fine-tuning a model's internal weights. Using the NVIDIA NeMo™ service, you'll learn various parameter-efficient fine-tuning methods to customize LLM behavior for your organization.
Learning Objectives
By participating in this workshop, you’ll learn how to:
- Apply parameter-efficient fine-tuning techniques with limited data to accomplish tasks specific to your use cases
- Use LLMs to create synthetic data in the service of fine-tuning smaller LLMs to perform a desired task
- Leverage the NVIDIA NeMo service to customize models like GPT and LLaMA-2 with ease
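To give a sense of why parameter-efficient fine-tuning is attractive, the sketch below illustrates the parameter-count arithmetic behind low-rank adaptation (LoRA), one common parameter-efficient technique. The hidden size and rank values are illustrative assumptions, not figures from the course or the NeMo service.

```python
# Illustrative sketch of low-rank adaptation (LoRA), a common
# parameter-efficient fine-tuning technique. Instead of updating a
# frozen d x d weight matrix, LoRA trains two small matrices
# A (d x r) and B (r x d) whose product is added to the frozen weights.
d, r = 1024, 8  # assumed hidden size and low-rank dimension

full_params = d * d          # parameters updated by full fine-tuning
lora_params = d * r + r * d  # trainable parameters under LoRA

print(f"Full fine-tuning updates {full_params:,} parameters per matrix")
print(f"LoRA updates {lora_params:,} parameters per matrix")
print(f"That is {full_params // lora_params}x fewer trainable parameters")
```

Because only the small low-rank matrices are trained, customization needs far less data and compute than full fine-tuning, which is what makes these methods practical with limited enterprise datasets.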