This post is an intuitive introduction to computational graphs and backpropagation, two core concepts in deep learning that underpin the training of neural networks.
Doesn't TensorFlow perform backprop automatically?
Deep learning frameworks such as TensorFlow and Torch perform backpropagation and gradient descent automatically. So why do we need to understand them?
The answer is that, although these calculations are abstracted away by the libraries, gradient descent on neural networks is still susceptible to problems such as vanishing gradients and dead neurons. When we train a neural network and run into these problems, we will not be able to resolve them unless we understand what is going on behind the scenes.
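To make the "automatic" part concrete, here is a minimal sketch (assuming TensorFlow 2.x and its `GradientTape` API) of the framework building a tiny computational graph and computing gradients for us in a single call; the variable names and toy loss are made up for illustration.

```python
import tensorflow as tf

x = tf.constant(3.0)
w = tf.Variable(2.0)
b = tf.Variable(1.0)

# The tape records the forward pass as a computational graph.
with tf.GradientTape() as tape:
    y = w * x + b      # forward pass: y = 2*3 + 1 = 7
    loss = y ** 2      # a toy loss: 49

# Backpropagation happens here, in one line.
dw, db = tape.gradient(loss, [w, b])
print(dw.numpy(), db.numpy())  # dL/dw = 2*y*x = 42.0, dL/db = 2*y = 14.0
```

The convenience of that one `tape.gradient` call is exactly why it is easy to forget what it is doing internally, and why the rest of this post walks through the same computation by hand.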