This 29-part course consists of tutorials on ML concepts and algorithms, as well as end-to-end follow-along ML examples, quizzes, and hands-on projects.
Once done, you will have an excellent conceptual and practical understanding of machine learning and feel comfortable applying ML thinking and algorithms in your projects and work.
The primary objectives of this course are:
- Understand the core concepts in machine learning — model parameters, optimization, generalization, regularization, and so on.
- Understand some popular machine learning algorithms — this course covers 8 ML algorithms; I recommend learning at least 5 of them well.
- Implement machine learning algorithms from scratch (I recommend implementing at least 2).
- Apply machine learning algorithms to prediction tasks (I recommend doing at least 2).
- Do a more extensive machine learning project (I recommend doing at least 1).
Related course: Learn Data Science with Python
We’ll start by describing what machine learning is and introduce a simple learning algorithm: linear regression trained with gradient descent. Using this algorithm, we’ll introduce the core concepts in machine learning: model parameters, cost function, optimization method, overfitting, and regularization. This section ends with a visual review of these concepts and a tutorial on the different types of machine learning problems.
- What is Machine Learning? Why Machine Learning?
- Linear Regression
- Gradient Descent
- Hands-on Assignment: Implementing Linear Regression with Gradient Descent
- Generalization and Overfitting
- Strategies to Avoid Overfitting
- Quiz: Train vs Test Data, Overfitting, Cost functions
- A Visual Introduction (and Review) of Machine Learning
- Types of Learning Algorithms: Supervised, Unsupervised and Reinforcement Learning
- Quiz: Types of Machine Learning problems
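The core idea of this first section — fitting linear regression by gradient descent on a mean-squared-error cost — can be sketched in a few lines of plain Python. This is an illustrative toy (function names and hyperparameters are my own, not the course's), fitting a line y = w·x + b:

```python
# Minimal sketch: linear regression fit with gradient descent.
# Illustrative only — names and hyperparameters are assumptions, not from the course.

def fit_linear(xs, ys, lr=0.01, epochs=5000):
    w, b = 0.0, 0.0          # model parameters, initialized at zero
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the mean squared error cost with respect to w and b
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # Take a small step downhill on the cost surface
        w -= lr * dw
        b -= lr * db
    return w, b

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]        # generated from y = 2x + 1
w, b = fit_linear(xs, ys)    # w ≈ 2, b ≈ 1
```

The learning rate `lr` and number of `epochs` are the kind of knobs the gradient-descent lesson discusses: too large a step diverges, too small converges slowly.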
In this section, we’ll see various algorithms for supervised machine learning. Different algorithms perform better on different types of data; deciding factors include the number of dimensions in the input, whether the data is text, numerical, or a time series, whether the data is sparse, the size of the dataset, and so on. For each algorithm, we’ll explain how it works and the applications for which it produces state-of-the-art results.
The algorithms don’t depend on each other and require different amounts of mathematical maturity. If you have trouble with some of the mathematically heavy ones (such as SVMs), feel free to skip or take extra time.
- Logistic Regression
- Quiz: Logistic Regression
- K-nearest neighbors
- Hands-on Project: Digit classification with K-Nearest Neighbors and Data Augmentation
- Support Vector Machine (SVM)
- Quiz (and Assignment): Support Vector Machines
- Learning = Representation + Evaluation + Optimization
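Of the algorithms in this section, K-nearest neighbors is the simplest to sketch: classify a query point by majority vote among the k closest training points. The code below is a toy illustration (the function name and tiny dataset are mine, not the course's):

```python
# Minimal K-nearest-neighbors sketch — illustrative, not the course's implementation.
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label) pairs; query: a feature vector.
    # Sort training points by squared Euclidean distance to the query.
    nearest = sorted(train,
                     key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))
    # Majority vote among the labels of the k nearest neighbors
    labels = [label for _, label in nearest[:k]]
    return Counter(labels).most_common(1)[0][0]

train = [([0, 0], "a"), ([0, 1], "a"), ([5, 5], "b"), ([6, 5], "b")]
pred = knn_predict(train, [0.5, 0.5])   # close to the "a" cluster
```

Note that KNN has no training phase at all — the "model" is just the stored data, which is why the digit-classification project above pairs it with data augmentation rather than parameter tuning.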
In this section, we’ll learn about deep learning! We’ll also compare deep learning to the traditional machine learning algorithms we encountered in the previous section, and see what makes it different. If you’re interested in learning more, check out the comprehensive deep learning course here: Deep learning, from novice to expert, self-paced
- What is deep learning? How does it relate to machine learning?
- A First Look at Neural Networks
- Quiz: Deep Learning and Neural Networks
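To make "neural network" concrete before the lessons above, here is the forward pass of a one-hidden-layer network in plain Python — a hedged sketch with hand-picked weights, not a trained model and not code from the course:

```python
# Forward pass of a tiny neural network: one ReLU hidden layer, sigmoid output.
# Weights here are hand-picked for illustration, not learned.
import math

def relu(x):
    return max(0.0, x)

def forward(x, W1, b1, W2, b2):
    # Hidden layer: ReLU applied to a linear transform of the input
    h = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # Output layer: sigmoid of a linear combination of hidden units
    z = sum(w * hi for w, hi in zip(W2, h)) + b2
    return 1 / (1 + math.exp(-z))

out = forward([1, 0],                    # input vector
              [[1, 0], [0, 1]], [0, 0],  # hidden-layer weights and biases
              [1, 1], 0)                 # output-layer weights and bias
```

The contrast with the earlier algorithms is that here the representation itself (the hidden layer) is learned, rather than hand-engineered features being fed to a fixed model.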
In this section, we’ll cover unsupervised learning concepts such as clustering, dimensionality reduction, the curse of dimensionality, feature extraction and feature selection. We’ll also learn some algorithms such as K-means clustering and Principal Component Analysis.
Note that the K-nearest neighbors and matrix factorization algorithms covered earlier can also be viewed as unsupervised learning algorithms for modeling similarity or relatedness between items.
- K-Means Clustering
- The Curse of Dimensionality
- Dimensionality Reduction and Principal Component Analysis
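K-means, the first algorithm in this section, alternates two steps: assign each point to its nearest center, then move each center to the mean of its assigned points. A minimal sketch (my own illustrative code, using a toy 2-D dataset):

```python
# Minimal K-means sketch — illustrative, not the course's implementation.
import random

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)   # initialize centers at random points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # Assignment step: each point joins its nearest center
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - c) ** 2 for a, c in zip(p, centers[i])))
            clusters[i].append(p)
        # Update step: each center moves to the mean of its cluster
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers = kmeans(points, 2)   # one center per visible blob
```

Unlike the supervised algorithms earlier in the course, no labels appear anywhere here — the structure (two blobs) is discovered from the points alone.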
To wrap up, we give an end-to-end example of applying ML to predict whether or not a patient has diabetes. In addition to using ML algorithms, the example goes through the stages typical of the ML workflow, such as data exploration and interpreting the ML model.
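The shape of such a workflow — get data, split it into train and test sets, fit a model on the training portion, evaluate on the held-out portion — can be sketched as below. The data here is synthetic and the "model" is a deliberately crude single glucose threshold; both are my illustrative assumptions, not the course's diabetes example:

```python
# Sketch of the ML workflow: split, fit on train, evaluate on test.
# Data is synthetic and the threshold "model" is a toy — not the course's example.
import random

random.seed(0)
# Hypothetical (glucose_level, has_diabetes) pairs, made up for illustration
data = [(random.gauss(110, 15), 0) for _ in range(50)] + \
       [(random.gauss(160, 15), 1) for _ in range(50)]
random.shuffle(data)

# Split so we can measure generalization, not just training fit
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

# "Training": pick the glucose threshold that best separates the training data
best_t = max((x for x, _ in train),
             key=lambda t: sum((x >= t) == y for x, y in train))

# Evaluation on held-out data — the number that actually matters
accuracy = sum((x >= best_t) == y for x, y in test) / len(test)
```

Everything the course adds on top of this skeleton — data exploration, a real learning algorithm in place of the threshold, and model interpretation — slots into one of these stages.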
Then, an article lists 10 project ideas (including links to datasets and suggested algorithms). You should pick one of these tasks and do at least one full project on your own.
The last lesson is a summary of a brilliant paper by Professor Pedro Domingos. It discusses a number of key lessons ML practitioners have learned over the years. It’s a great way to end the course, touching on the various topics covered and the relations between them. It is also essential wisdom to take with you as you carry on your ML journey!