I am trying a new initiative — a-paper-a-week. This post is the summary of the very first paper I read as part of this initiative. The idea behind writing such posts is to have a summary of these papers — for my future self and for others. I believe that summarizing academic writing helps me understand it better.
For my first week, I picked the paper titled Never-Ending Learning. It is part of the Read the Web project at Carnegie Mellon University, and it introduces a new machine learning paradigm called never-ending learning. In most machine learning applications, a system aims to learn a single function or data model from a given data set. In this new paradigm, by contrast, the system learns many different types of functions from years of experience, in such a way that previously learned knowledge supports the accumulation of new types of knowledge. Stagnation and performance plateaus are avoided by the system's ability to formulate new learning tasks for itself.
A never-ending learning problem has two components — a...
Paper Summary: Human-Level Control Through Deep Reinforcement Learning
Why this paper?
The first scalable, successful combination of reinforcement learning and deep learning. The result outperforms preceding approaches at Atari games, and it uses only pixel data, the game score, and the number of available actions, with the same architecture across different games. Why reinforcement learning? That is how animals and humans seem to make decisions in their environments, as evidenced by parallels between recorded neural activity and temporal-difference RL algorithms.
What about previous approaches?
Handcrafted features. When non-linear approximations of Q are used, the learned values are unstable. Other, more stable neural-network approaches existed, such as fitted Q-iteration, but they are slow and don't work for large networks.
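For context, the update these methods approximate is the classic Q-learning rule. A minimal tabular sketch is below — the paper replaces the table with a deep network over pixels, and the state/action names here are purely illustrative:

```python
# Minimal tabular Q-learning sketch (illustrative only; the paper
# approximates Q with a deep network rather than a lookup table).
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    # Q-learning target: r + gamma * max over a' of Q(s', a')
    target = r + gamma * max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    # Move the estimate toward the target by step size alpha
    Q[(s, a)] = old + alpha * (target - old)
    return Q

Q = {}
q_learning_update(Q, "s0", "left", 1.0, "s1", ["left", "right"])
# Q[("s0", "left")] is now 0.1
```

The instability mentioned above arises when the table is replaced by a non-linear function approximator: the target itself depends on the weights being updated, so small changes can feed back and diverge, which is what the paper's experience replay and target network are designed to dampen.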
What are the outcomes?
Tested the method against the best-performing approaches at the time and against a professional human game tester. Used 49 different Atari g...
Paper Summary: Evaluating Prerequisite Qualities for Learning End-to-end Dialog Systems
The paper presents a suite of benchmark tasks to evaluate end-to-end dialogue systems such that performing well on the tasks is a necessary (but not sufficient) condition for a fully functional dialogue agent.