You’ll learn how to use Transformer-based natural language processing models for text classification tasks, such as categorizing documents. You’ll also learn how to leverage Transformer-based models for named-entity recognition (NER) and how to analyze model features, constraints, and characteristics so you can determine which model is best suited for a particular use case based on metrics, domain specificity, and available resources.
By participating in this workshop, you’ll be able to:
- Understand how text embedding techniques used in NLP have rapidly evolved, from Word2Vec to recurrent neural network (RNN)-based embeddings to Transformers
- See how Transformer architecture features, especially self-attention, are used to create language models without RNNs
- Use self-supervision to improve on the base Transformer architecture in BERT, Megatron, and other variants for superior NLP results
- Leverage pre-trained, modern NLP models to solve multiple tasks such as text classification, NER, and question answering (see the sketch after this list)
- Manage inference challenges and deploy refined models for live applications
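For a concrete sense of what leveraging pre-trained models looks like in practice, here is a minimal sketch using the Hugging Face `transformers` library. This is one common toolkit for these tasks, not necessarily the one used in the workshop labs, and the default checkpoints and example text below are illustrative assumptions:

```python
# Minimal sketch: pre-trained Transformer pipelines for three NLP tasks.
# Assumes the Hugging Face `transformers` library is installed; the default
# models each pipeline downloads are illustrative, not workshop-specific.
from transformers import pipeline

# Text classification (sentiment here, as a simple stand-in for document categorization)
classifier = pipeline("text-classification")
print(classifier("The quarterly report exceeded expectations."))

# Named-entity recognition; aggregation groups word pieces back into whole entities
ner = pipeline("ner", aggregation_strategy="simple")
print(ner("NVIDIA is headquartered in Santa Clara, California."))

# Extractive question answering over a short context passage
qa = pipeline("question-answering")
print(qa(question="Where is NVIDIA headquartered?",
         context="NVIDIA is headquartered in Santa Clara, California."))
```

Each pipeline loads a default pre-trained checkpoint; for a real application you would typically fine-tune a task-specific model and then address the inference and deployment concerns called out in the final objective above.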