[Paper Summary + Doubts] Deep Residual Learning for Image Recognition
This is a great paper that addresses the vanishing/exploding gradient problem in deep neural network architectures. The solution proposed in the paper is a deep residual learning framework that allows extremely deep CNN models to be trained for various visual recognition tasks. The architecture consists of stacked convolutional layers, with identity shortcut connections that skip every two layers. In this way, each pair of layers is trained to approximate a residual function of the underlying mapping.
The claim made in the paper is that if several nonlinear layers can asymptotically approximate an underlying mapping H(x), they can equally approximate the residual function F(x) = H(x) - x (with x added back by the shortcut), and that the latter is easier to optimize. This intuition isn't very clear to me. Section 3.1 discusses it, and I was wondering if someone could help me understand it.
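To make the formulation concrete, here's a minimal sketch of one residual block in PyTorch (the framework choice and names like ResidualBlock are my own for illustration, not the paper's released code): the two stacked conv layers learn the residual F(x), and the shortcut adds x back so the block outputs F(x) + x, which approximates H(x).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Illustrative residual block: out = relu(residual_branch(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        # Two stacked 3x3 conv layers form the residual branch
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Identity shortcut: add the input, then apply the nonlinearity
        return F.relu(out + x)

# Quick usage check
block = ResidualBlock(64)
y = block(torch.randn(1, 64, 32, 32))  # output shape matches input
```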
Some further questions and observations:
This framework doesn't seem to have several fully connected layers at the end, as the VGG/AlexNet architectures do.
This paper focused on solving the degradation problem (accuracy saturating and then dropping as networks get deeper). The paper's explanation is that residual connections make backpropagation more effective in deeper networks. That makes sense, but there's more to the story.
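One way to see why the shortcuts help backprop: for a block y = x + F(x), the gradient dy/dx = 1 + dF/dx, so the identity term guarantees the gradient never vanishes even when the residual branch contributes nothing. A tiny check of this (PyTorch assumed; the dead-weights setup is deliberately contrived for illustration):

```python
import torch

W = torch.zeros(8, 8)  # extreme case: the learned branch is "dead"

# Residual formulation: y = x + x @ W
x = torch.randn(4, 8, requires_grad=True)
y = x + x @ W
y.sum().backward()
print(x.grad.abs().mean())  # ~1.0: gradient survives via the identity term

# Plain (non-residual) formulation with the same dead weights
x2 = torch.randn(4, 8, requires_grad=True)
y2 = x2 @ W
y2.sum().backward()
print(x2.grad.abs().mean())  # 0.0: gradient is killed entirely
```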
The self-referential formulation of ResNets leads to...
How Robots Can Acquire New Skills from Their Shared Experience
At a high level, it's fascinating to realize that this "central" server would act as a very long-term memory and a collective brain for potentially millions of autonomous systems. The idea is mind-blowing once you take time to consider the implications.