Overview
Machine Learning (ML) is at the core of many applications of artificial intelligence. A key goal of this course series is to teach the fundamental building blocks behind supervised ML. In this second part, we introduce a variety of machine learning algorithms, such as k-nearest neighbors, classification and regression trees, random forests, and neural networks.
Which topics will be covered?
- Theoretical understanding of different ML algorithms such as k-nearest neighbors, classification and regression trees, random forests, and neural networks
- Advantages and disadvantages of the different learners
- Application of the learned algorithms in R and Python
What will I achieve?
- Explain the idea of k-NN
- Explain the idea of classification and regression trees and how random forests improve this method
- Explain how a neural network works
- Apply the learned ML algorithms to real-world data using R and Python
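To give a flavor of what "applying the learned ML algorithms to real-world data" can look like, here is a minimal sketch in Python. It is not taken from the course materials; the use of scikit-learn, the iris dataset, and the choice of k = 5 are illustrative assumptions.

```python
# Illustrative sketch (not course material): k-nearest neighbors with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Load a small real-world dataset: 150 iris flowers, 4 numeric features, 3 species.
X, y = load_iris(return_X_y=True)

# Hold out a test set to estimate how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# k-NN predicts the majority class among the k closest training points;
# k = 5 is an arbitrary illustrative choice.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

# Evaluate the fitted model on the held-out data.
print("Test accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```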
Which prerequisites do I need to fulfill?
This course is open to all who are interested. However, we recommend that learners have:
- A strong foundation in mathematics, such as eight years of math education in secondary school
- Prior knowledge of linear algebra and analysis (required, at least at high school level)
- Prior knowledge of statistics and probability (recommended, at least at high school level)
- Basic programming skills in R or Python (e.g., gained through a short self-study course)
- Completion of the course Introduction to Machine Learning Part 1