# Learn Machine Learning Algorithms

April 17, 2018

This course consists of 25 tutorials to learn core machine learning algorithms and their applications.

Once done, you should have a solid conceptual understanding of machine learning algorithms and be able to reason about how to improve prediction accuracy and how to assess model performance.

**Related**: If you are a beginner, I recommend starting out with the following course instead: Learn Machine Learning: From Beginner to Expert.

**Get started with the first article below.**

# Introduction to Machine Learning

We’ll start by describing what machine learning is, and introduce a simple learning algorithm: **linear regression + gradient descent**. Using this algorithm, we’ll introduce the core concepts in machine learning: *model parameters*, *cost function*, *optimization method*, and *overfitting and regularization*.

- What is Machine Learning? Why Machine Learning?
- Linear Regression
- Gradient Descent
- Generalization and Overfitting
- Strategies to Avoid Overfitting
- Types of Learning Algorithms: Supervised, Unsupervised and Reinforcement Learning
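To make the core concepts above concrete, here is a minimal sketch (illustrative only, not the course's reference code) of fitting a linear regression model with batch gradient descent on a mean-squared-error cost. The learning rate and step count are arbitrary choices for this toy data:

```python
def fit_linear(xs, ys, lr=0.05, steps=2000):
    """Fit y ≈ w*x + b by batch gradient descent on mean squared error."""
    w, b = 0.0, 0.0  # model parameters, initialized at zero
    n = len(xs)
    for _ in range(steps):
        # Gradients of the cost J(w, b) = (1/n) * sum((w*x + b - y)^2)
        grad_w = (2.0 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        grad_b = (2.0 / n) * sum(w * x + b - y for x, y in zip(xs, ys))
        w -= lr * grad_w  # step downhill on the cost surface
        b -= lr * grad_b
    return w, b

# Toy data generated from y = 2x + 1, so the fit should recover w ≈ 2, b ≈ 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
w, b = fit_linear(xs, ys)
```

The two ingredients here, a cost function and an optimization method, are exactly the concepts the tutorials in this section develop in detail.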

# Supervised Learning Algorithms (Simple)

In this section, we’ll see various algorithms for supervised machine learning. Different algorithms perform better on different types of data; deciding factors include the number of dimensions in the input data, whether the data is text, numerical, or a time series, whether the data is sparse, the size of the dataset, and so on. For each algorithm, we’ll explain how it works and the applications for which it produces state-of-the-art results.

- Logistic Regression
- K-nearest neighbors
- Perceptron algorithm
- Naive Bayes algorithm and text classification
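As a taste of how simple some of these algorithms are, here is a minimal K-nearest-neighbors classifier (a sketch for illustration, not the course's implementation): predict the majority label among the k training points closest to the query in Euclidean distance.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: feature vector."""
    # Sort training points by Euclidean distance to the query.
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))
    # Majority vote among the k nearest labels.
    top_labels = [label for _, label in neighbors[:k]]
    return Counter(top_labels).most_common(1)[0][0]

# Toy 2-D dataset with two well-separated classes.
train = [([0.0, 0.0], "a"), ([0.1, 0.2], "a"),
         ([0.9, 1.0], "b"), ([1.0, 0.8], "b")]
```

Note there is no training step at all: K-nearest neighbors defers all work to prediction time, which is one of the trade-offs the tutorial discusses.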

# Supervised Learning Algorithms (Intermediate)

This section covers more algorithms for supervised machine learning. Some of these, such as Support Vector Machines and Hidden Markov Models, require more mathematical maturity.

- Support Vector Machine (SVM)
- Support Vector Machine (SVM) Theory
- Decision Tree
- Hidden Markov Models
- Recommendation Systems with Matrix Factorization
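To preview the SVM idea, here is a minimal sketch of a linear SVM trained by sub-gradient descent on the regularized hinge loss (a Pegasos-style update; an illustration of the objective, not the course's implementation — kernels and the dual formulation are covered in the theory tutorial). The regularization strength, learning rate, and step count are arbitrary choices for this toy data:

```python
def svm_fit(X, y, lam=0.01, lr=0.1, steps=500):
    """Train a linear SVM by sub-gradient descent on the regularized hinge loss.

    Minimizes (lam/2)*||w||^2 + (1/n) * sum_i max(0, 1 - y_i*(w·x_i + b)),
    with labels y_i in {-1, +1}.
    """
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(steps):
        gw = [lam * wj for wj in w]  # gradient of the L2 regularizer
        gb = 0.0
        for xi, yi in zip(X, y):
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) < 1:
                # Point is inside the margin: hinge loss is active here.
                for j in range(d):
                    gw[j] -= yi * xi[j] / n
                gb -= yi / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

# Linearly separable toy data.
X = [[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 2.0]]
y = [-1, -1, 1, 1]
w, b = svm_fit(X, y)
```

The regularizer pushes toward a large margin while the hinge term penalizes points that fall inside it, which is the trade-off the theory tutorial makes precise.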

# Deep Learning

In this section, we’ll learn about deep learning! We’ll also compare deep learning to the traditional machine learning algorithms we encountered in the previous sections, and see how they differ. If you’re interested in learning more about deep learning, check out the comprehensive deep learning course here: Deep learning, from novice to expert, self-paced.

- What is deep learning? How does it relate to machine learning?
- A First Look at Neural Networks
- Dropout (neural network regularization)
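Dropout is simple enough to sketch in a few lines. Here is the "inverted dropout" variant (a minimal illustration, not the course's code): during training each activation is zeroed with probability p and the survivors are scaled by 1/(1-p), so the expected value of each activation is unchanged; at test time the layer is the identity.

```python
import random

def dropout(activations, p=0.5, training=True):
    """Inverted dropout on a list of activations.

    Training: zero each activation with probability p; scale survivors by
    1/(1-p) so the expected activation is unchanged. Test time: identity.
    """
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if random.random() < keep else 0.0 for a in activations]
```

Because each forward pass randomly disables different units, the network cannot rely on any single activation, which is why dropout acts as a regularizer.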

# Unsupervised Learning Algorithms

In this section, we’ll cover unsupervised learning concepts such as clustering, dimensionality reduction, the curse of dimensionality, feature extraction and feature selection. We’ll also learn some algorithms such as K-means clustering and Principal Component Analysis.

**Note**: K-nearest neighbors and matrix factorization algorithms covered in the previous sections can also be used for unsupervised learning, and in particular, for modeling similarity or relatedness between items.

- K-Means Clustering
- The Curse of Dimensionality
- Dimensionality Reduction and Principal Component Analysis
- Anomaly Detection
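As a preview of K-means, here is a minimal sketch of Lloyd's algorithm (illustrative only; the initialization here just samples k data points, whereas the tutorial discusses better strategies). It alternates an assignment step and an update step until the centers settle:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm: alternate nearest-center assignment and mean update."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # naive init: k random data points
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[i].append(p)
        # Update step: each center moves to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:  # leave a center in place if its cluster went empty
                centers[i] = [sum(c) / len(cl) for c in zip(*cl)]
    return centers

# Two well-separated blobs; the centers should land near their means.
points = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
          [5.0, 5.0], [5.2, 4.9], [4.9, 5.1]]
centers = kmeans(points, k=2)
```

Each step can only decrease the within-cluster squared distance, which is why the algorithm converges (though possibly to a local optimum, a point the tutorial returns to).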

# Ensemble Learning

Ensemble learning is a method in which we train multiple machine learning models and combine their predictions in order to achieve better accuracy and reduce the variance of the predictions. In this section, we’ll cover a number of techniques in ensemble learning. We’ll also look into the Bayesian framework for machine learning, which is useful when we are dealing with smaller datasets.
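The variance-reduction effect is easy to demonstrate with a small simulation (a toy sketch, not one of the course's techniques): we model each ensemble member's prediction as the true value plus independent noise, and compare a single member's error variance with that of the averaged ensemble. Averaging B independent predictions shrinks the error variance by roughly a factor of B.

```python
import random
import statistics

def ensemble_predict(predictions):
    """Combine member predictions by simple averaging."""
    return sum(predictions) / len(predictions)

# Simulate B = 10 "models" whose predictions are the true value plus
# independent unit-variance noise, over many trials.
rng = random.Random(0)
true_value = 10.0
single_errors, ensemble_errors = [], []
for _ in range(2000):
    preds = [true_value + rng.gauss(0, 1) for _ in range(10)]
    single_errors.append(preds[0] - true_value)                    # one model
    ensemble_errors.append(ensemble_predict(preds) - true_value)   # average of 10
```

Real ensemble members are trained on the same data and so are not fully independent, which is why techniques like bagging deliberately inject diversity; the tutorials cover this in detail.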