Foundations of Deep Learning II: Architectures & Methodology
The second part of the course series Foundations of Deep Learning
- Level: Intermediate
- Duration: 12 hours
- Certificate: Record of Achievement
- Price: Free
- License: CC BY-SA 4.0
- Language: English
Overview
This course picks up where Part I left off. It introduces the basic types of neural networks used in deep learning: convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers. The final chapter introduces self-supervised learning and discusses practical methods for designing and implementing deep models, along with best practices and common architectures.
Which topics will be covered?
- Introduction to Convolutional Neural Networks
- Introduction to Recurrent Neural Networks
- Introduction to Attention and Transformers
- Introduction to practical deep learning methodology
What will I achieve?
On completion of the course you will be able to:
- Explain what a convolutional neural network is and how it operates, including the main variants of the convolution operation.
- Explain what a recurrent neural network is, describe its computational graph and the main idea behind backpropagation through time, explain how an LSTM differs from a vanilla RNN and what its gating mechanisms do, and outline the main ideas behind GRUs and Echo State Networks.
- Explain the attention mechanism and the transformer architecture, including key components such as self-attention layers and the encoder-decoder structure (a minimal sketch of self-attention follows this list).
- Describe different normalization techniques, transfer learning, and self-supervised learning, along with practical design considerations and debugging strategies.
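As a taste of the transformer chapter, here is a minimal NumPy sketch of scaled dot-product self-attention for a single sequence and a single head. The variable names, matrix shapes, and toy dimensions are illustrative assumptions, not the course's own code.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention for one sequence (single head).

    X: (seq_len, d_model) input embeddings
    W_q, W_k, W_v: (d_model, d_k) projection matrices (illustrative shapes)
    """
    Q = X @ W_q                      # queries
    K = X @ W_k                      # keys
    V = X @ W_v                      # values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity, scaled by sqrt(d_k)
    # softmax over the key axis: each row of weights sums to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # each output is a weighted sum of values

# Toy example: 4 tokens, model dimension 8, head dimension 4
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)  # (4, 4): one attention output per input token
```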
Which prerequisites do I need to fulfill?
- Good knowledge of linear algebra
- Good knowledge of probability theory
- Good knowledge of machine learning
- Concepts presented in the course Foundations of Deep Learning I: Basics