This section introduces you to NLP, discusses its applications and challenges, and the various approaches to NLP. Then, we provide a brief overview of machine learning and test your understanding of some of its core concepts.
This 16-part course consists of tutorials on deep learning applied to natural language processing, also called Deep NLP. The course also includes hands-on assignments and projects in which you implement neural networks and solve NLP tasks. You can think of this course as a "Free Online Nano Book".
Why Deep NLP? Natural language processing comprises a set of computational techniques for understanding natural languages such as English, Spanish, and Chinese. Traditional NLP techniques proved successful on tasks like filtering spam email and sentiment classification, but don't perform as well on advanced tasks such as language translation, question answering, speech recognition, and music composition (the fun stuff!). Deep NLP represents the state of the art for these applications. It powers systems like the Google Assistant and Amazon Alexa.
The primary objectives of this course are as follows:
Note: If you are not familiar with machine learning, please go through section 1 of the Deep Learning course, titled "An Introduction to Machine Learning". This course provides only a brief overview of machine learning concepts.
We'll learn how deep learning differs from machine learning, i.e. what "deep" means and why it matters. We'll learn specifically what neural networks look like and how they are trained using back-propagation. We'll also introduce computational graphs, which are how neural networks are implemented in popular deep learning libraries such as TensorFlow and Torch, and then train our first neural network. We'll end the section by learning regularization techniques specific to deep learning, such as early stopping and dropout.
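To give a small taste of what that section covers, here is a minimal sketch of a two-layer network trained with back-propagation on the classic XOR problem. Everything here (the layer sizes, learning rate, and variable names) is illustrative, not taken from the course itself; the forward pass is the computational graph, and the backward pass walks that graph in reverse to compute gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a task a single linear layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Illustrative sizes: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for step in range(2000):
    # Forward pass: this chain of operations is the computational graph.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(0.5 * np.mean((out - y) ** 2))

    # Backward pass: gradients flow from the loss back to each weight.
    d_out = (out - y) * out * (1 - out)   # gradient at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # backprop through the hidden layer

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Libraries like TensorFlow automate exactly the backward pass written out by hand above: you define the forward graph, and the framework derives the gradients.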
We'll start by introducing Recurrent Neural Networks (RNNs), which are neural networks with loops in them. We'll see how RNNs can consume and produce sequences and how they maintain an internal state. Then we'll introduce Long Short-Term Memory (LSTM) networks and see how they overcome some significant shortcomings of plain RNNs. Along the way, we'll also introduce the concept of word embeddings. We'll implement an RNN to generate text one character at a time, and also see how this model can be used to perform state-of-the-art language translation.
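The "loop" and the internal state can be sketched in a few lines. Below is a single vanilla RNN cell unrolled over a toy character sequence; the vocabulary, shapes, and weight names are illustrative assumptions, not the course's implementation. The key point is that the hidden state `h` carries information forward, so the same input character can produce different states depending on what came before it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative toy setup: a 4-character vocabulary and a tiny hidden state.
vocab = ["h", "e", "l", "o"]
char_to_id = {c: i for i, c in enumerate(vocab)}

hidden_size = 4
Wxh = rng.normal(0, 0.1, (len(vocab), hidden_size))   # input -> hidden
Whh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # hidden -> hidden: the "loop"
bh = np.zeros(hidden_size)

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# h is the network's internal memory; each step mixes the current input
# with the state left over from the previous step.
h = np.zeros(hidden_size)
states = []
for ch in "hello":
    x = one_hot(char_to_id[ch], len(vocab))
    h = np.tanh(x @ Wxh + h @ Whh + bh)
    states.append(h.copy())
```

Note that the two 'l' characters in "hello" are identical inputs, yet they yield different hidden states, because the recurrent term `h @ Whh` folds the history into each step. That stateful dependence on history is what long sequences strain, and what LSTMs are designed to handle better.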