CommonLounge Archive

Learn Deep Learning for Natural Language Processing

March 27, 2018

This 20-part course consists of tutorials for learning deep learning as applied to natural language processing, also called Deep NLP. The course also includes hands-on assignments and projects in which you implement neural networks and solve NLP tasks. You can think of this course as a “Free Online Nano Book”.

Why Deep NLP? Natural language processing comprises a set of computational techniques for understanding natural languages such as English, Spanish, and Chinese. Traditional NLP techniques have proved successful on tasks like filtering spam email and sentiment classification, but they don’t perform as well on more advanced tasks such as language translation, question answering, speech recognition and music composition (the fun stuff!). Deep NLP represents the state of the art for these applications. It powers systems like the Google Assistant and Amazon Alexa.

The primary objectives of this course are as follows:

  1. Understand what machine learning is, and learn the gradient descent algorithm.
  2. Understand what deep learning is, and how deep learning differs from and relates to machine learning.
  3. Understand what neural networks are and how they are trained using back-propagation (and train your own neural network).
  4. Understand the concept of computational graphs, a core idea (often overlooked in DL courses) that is foundational to understanding and implementing all sorts of complex neural network architectures.
  5. Understand how Recurrent Neural Networks work, and how to use them to generate text and perform language translation.

Prerequisites: Python, NumPy, linear algebra, statistics, and Calculus 1 (differentiation and the chain rule).


Get started with the first article below.


Natural Language Processing

This section introduces you to NLP, discusses its applications and challenges, and the various approaches to NLP.

Machine Learning and Gradient Descent

We’ll start by describing what machine learning is, and introduce a simple learning algorithm: linear regression + gradient descent. Using this algorithm, we’ll introduce the core concepts in machine learning: model parameters, cost function, optimization method, and overfitting and regularization.
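To make these ideas concrete, here is a minimal sketch of linear regression trained with gradient descent in NumPy. The synthetic data, learning rate, and number of steps are illustrative choices for the demo, not values taken from the course.

```python
import numpy as np

# Synthetic data: y = 3x + 2 plus noise (illustrative values only)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3 * X[:, 0] + 2 + rng.normal(0, 0.1, size=100)

# Model parameters: slope w and intercept b
w, b = 0.0, 0.0
learning_rate = 0.1

for step in range(500):
    predictions = w * X[:, 0] + b
    error = predictions - y
    cost = np.mean(error ** 2)            # mean squared error cost function
    # Gradients of the cost with respect to w and b
    grad_w = 2 * np.mean(error * X[:, 0])
    grad_b = 2 * np.mean(error)
    # Gradient descent update: step in the direction of decreasing cost
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, final cost={cost:.4f}")
```

The same three ingredients show up in every model in the course: parameters (w, b), a cost function (mean squared error here), and an optimization method (gradient descent).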

Deep Learning and Neural Networks

We’ll learn how deep learning differs from machine learning, i.e. what “deep” means and why it is important. We’ll learn what neural networks look like and how they are trained using back-propagation. We’ll also introduce the concept of computational graphs, which is how neural networks are implemented in popular deep learning libraries such as TensorFlow and Torch, and then train our first neural network. We’ll end the section by learning regularization techniques specific to deep learning, such as early stopping and dropout.
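As a preview of what back-propagation looks like, below is a small hand-written sketch of a one-hidden-layer network learning XOR in NumPy. Each line of the forward pass corresponds to a node in the computational graph, and the backward pass applies the chain rule node by node. The architecture, initialization, and hyperparameters are arbitrary choices for this demo, not the course’s implementation.

```python
import numpy as np

# Toy binary classification: learn XOR with a 2 -> 4 -> 1 network (sizes are arbitrary)
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: each line is a node in the computational graph
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2)              # predicted probabilities
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Backward pass: apply the chain rule node by node (back-propagation)
    dz2 = (p - y) / len(X)                # gradient at the output pre-activation
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)               # tanh derivative
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p.ravel(), 2))  # predictions should approach [0, 1, 1, 0]
```

Libraries like TensorFlow and Torch build the same kind of graph and compute these gradients automatically, which is why the computational-graph view is so useful.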

Deep Natural Language Processing with LSTM Networks

We’ll start by introducing Recurrent Neural Networks (RNNs), which are neural networks with loops in them. We’ll see how RNNs can take sequences as input and produce sequences as output, and how they maintain an internal state. Then we’ll introduce Long Short-Term Memory (LSTM) networks and see how they overcome some significant shortcomings of plain RNNs. Along the way, we’ll also introduce the concept of word embeddings. We’ll implement an RNN that generates text one character at a time, and see how the same kind of model can be used to perform state-of-the-art language translation.
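To give a feel for the recurrence, here is a minimal, untrained character-level RNN sketch in NumPy: it carries an internal hidden state from step to step and samples text one character at a time. The vocabulary, sizes, and weight names are illustrative assumptions (not the course’s code), and an LSTM would replace the simple tanh update with gated cell updates.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = list("helo ")                  # tiny toy vocabulary
char_to_ix = {c: i for i, c in enumerate(vocab)}
vocab_size, hidden_size = len(vocab), 16

# Parameters of a vanilla RNN cell plus an output projection (random, untrained)
Wxh = rng.normal(0, 0.1, (hidden_size, vocab_size))   # input -> hidden
Whh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # hidden -> hidden (the "loop")
Why = rng.normal(0, 0.1, (vocab_size, hidden_size))   # hidden -> character scores
bh = np.zeros(hidden_size)
by = np.zeros(vocab_size)

def sample(seed_char, n_chars):
    """Generate n_chars characters, feeding each output back in as the next input."""
    h = np.zeros(hidden_size)                   # internal state, carried across steps
    x = np.zeros(vocab_size)
    x[char_to_ix[seed_char]] = 1.0              # one-hot encoding of the seed character
    out = [seed_char]
    for _ in range(n_chars):
        h = np.tanh(Wxh @ x + Whh @ h + bh)     # RNN recurrence: new state from input + old state
        scores = Why @ h + by
        probs = np.exp(scores) / np.exp(scores).sum()   # softmax over characters
        ix = rng.choice(vocab_size, p=probs)
        x = np.zeros(vocab_size)
        x[ix] = 1.0
        out.append(vocab[ix])
    return "".join(out)

print(sample("h", 20))  # gibberish until the weights are trained with back-propagation
```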


© 2016-2022. All rights reserved.