Machine Learning and Deep Learning Series: Transformer-Based Models
Event description
- Academic events
- Free
- Professional and career development
- Science
Transformer-Based Models, the final lab of the Machine Learning and Deep Learning Series, focuses on recent developments in artificial intelligence, specifically transformer-based models. Participants will learn the fundamental components of transformers and how they differ from traditional neural networks. The session will cover pretrained models such as BERT, fine-tuning, and transfer learning, providing practical insights into their application. Participants will also explore the role of Large Language Models (LLMs) in recent AI advances and the resurgence of RNNs, and discuss the future of AI. This lab equips attendees with the knowledge to understand and work with the state-of-the-art models shaping the AI landscape.
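As a rough preview of the fine-tuning and transfer-learning topics mentioned above, the sketch below loads a pretrained BERT model and runs a single training step on a toy example. The Hugging Face Transformers library, the bert-base-uncased checkpoint, the two-label task, and the hyperparameters are illustrative assumptions, not materials from the lab itself.

```python
# Minimal sketch: fine-tuning a pretrained BERT model for text classification.
# Assumes the Hugging Face Transformers library and PyTorch are installed;
# the checkpoint, labels, and learning rate are placeholder choices.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # pretrained encoder + fresh task head
)

# Transfer learning: the pretrained encoder weights are reused, and only a
# small amount of task-specific training adapts the model to the new task.
inputs = tokenizer(
    "Transformers replace recurrence with self-attention.",
    return_tensors="pt",
)
labels = torch.tensor([1])  # toy label for the single example

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()
```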
During the Machine Learning and Deep Learning Open Lab Series, Namig Abbasov offers seven open labs introducing participants to core concepts and techniques in Machine Learning (ML) and Deep Learning. The open labs prioritize an intuitive understanding of machine learning algorithms and deep learning approaches, and are intended to complement the machine learning and deep learning courses taught at ASU by focusing on intuitive explanations of difficult concepts and examples illustrated with analogies.
Presented by Namig Abbasov with the ASU Library's Unit for Data Science and Analytics team.