Foundations of Deep Learning III: Advanced Topics
The third part of the course series Foundations of Deep Learning
- Level: Intermediate
- Duration: 15 hours
- Certificate: Record of Achievement
- Price: Free
- License: CC BY-SA 4.0
- Language: English
Overview
The course picks up where Part II left off. It covers advanced topics in deep learning, starting with generative models such as variational autoencoders (VAEs), GANs, and diffusion models. It then discusses ways to quantify and manage uncertainty in deep learning, a crucial aspect of building robust, reliable, and trustworthy models. The final chapter covers hyperparameter optimization (HPO), a key process for improving the generalization performance and efficiency of deep learning models.
Which topics will be covered?
- Introduction to generative models like VAEs and GANs.
- Understanding how modern diffusion models work.
- Understanding why uncertainty is needed in deep learning and how to quantify it.
- Introduction to HPO methods for deep learning.
What will I achieve?
On completion of the course you will be able to:
- Explain the working mechanism of autoencoders and regularized autoencoder variants (e.g. sparse and denoising autoencoders), as well as the training mechanism of GANs.
- Explain the working mechanism of diffusion models.
- Motivate the study and use of uncertainty in deep learning, as well as describe techniques such as Variational Inference and MCMC.
- Define the hyperparameter optimization problem, and describe black-box HPO methods, speed-up techniques, and the multi-fidelity approach to HPO.
Which prerequisites do I need to fulfill?
- Good knowledge of linear algebra
- Good knowledge of probability theory
- Good knowledge of machine learning
- Concepts presented in the course: Foundations of Deep Learning I: Basics
- Concepts presented in the course: Foundations of Deep Learning II: Architectures & Methodology