[Paper Summary + Doubts] Deep Residual Learning for Image Recognition by Simanta Gautam
This is a great paper that addresses the degradation and vanishing/exploding gradient problems that arise in very deep neural network architectures. The solution proposed in the paper is a deep residual learning framework, which allows extremely deep CNN models to be trained for various visual recognition tasks. The architecture consists of stacked convolutional layers with shortcut (identity) connections that skip over every two layers. In this way, each pair of layers is trained to approximate a residual function of the underlying mapping rather than the mapping itself.
The claim made in the paper is that learning some underlying mapping H(x) is equivalent to learning the residual function F(x) = H(x) - x and then adding x back, but that the latter is hypothesized to be easier to optimize with several layers of a neural network (in the extreme case where the identity is optimal, it is easier to push F(x) toward zero than to fit an identity mapping with stacked nonlinear layers). This intuition isn't very clear to me. Section 3.1 discusses it, and I was wondering if someone could help me understand this.
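To make the claim concrete, here is a toy NumPy sketch (not from the paper; plain matrix multiplies stand in for the paper's conv layers, and the weight shapes are illustrative). It shows the block computing H(x) = F(x) + x, and that with zero weights the block degenerates to the identity (up to the final ReLU), which is the easy-to-learn default the paper argues for:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    # F(x): two stacked layers approximating the residual, H(x) - x
    f = relu(x @ W1) @ W2
    # The shortcut connection adds the input back: H(x) = F(x) + x
    return relu(f + x)

d = 4
x = np.random.randn(3, d)
# With zero weights, F(x) = 0 and the block reduces to relu(x),
# i.e. (approximately) the identity mapping
W1 = np.zeros((d, d))
W2 = np.zeros((d, d))
np.allclose(residual_block(x, W1, W2), relu(x))  # True
```

A plain (non-residual) stack of zero-weight layers would instead output zeros, not x, which is one way to see why the residual parameterization makes identity-like mappings easy to represent.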
Some further questions and observations:
This framework doesn't seem to have several fully connected layers at the end, as VGG/AlexNet papers d...
Channels for discussion by Prafulla Dhariwal
We are starting with this initial set of channels. They can be expanded based on the needs of the group.
Pinned: Stuff that will be useful throughout the semester. For example - surveys, important links like the meeting schedule, and other valuable discussions.
Meetings: Discussions for every meeting we have. Currently, we meet once a week as a paper reading group. Later, we might also hold workshop / tutorial meetups.
Research: Discuss new papers, datasets, tools etc. The aim of this channel is to spark discussion about active ML research.
Learning: Ask questions here if you want to get help to start learning a new topic.
Tensorflow: Questions specifically about Tensorflow. Similar channels can be created for Keras, Torch etc if needed.
Projects: Pitch projects and try to find p...
ML Reading Group Fall 2016 by Keshav Dhandhania
Hey! Welcome (back) to the ML reading group. Indicate what type of reading you're interested in doing and suggest papers here! We meet weekly to engage in discussions about various areas of machine learning.
Can you help with my research? And make some $? by Rebecca Wettemann
Take a 6-minute survey on DL and the first 200 respondents will be entered in a drawing for a $1500 Amex gift cheque.
Or email me at firstname.lastname@example.org if you'd be willing to be interviewed for my research and get a copy of the analysis. Thank you!
Paper Summary: Human-Level Control Through Deep Reinforcement Learning by Ali-Amir Aldan
Why this paper?
First scalable, successful combination of reinforcement learning and deep learning. The result outperforms preceding approaches (at Atari games), using only pixel data, the game score, and the number of actions, with the same architecture across different games.
Why reinforcement learning? That is how animals and humans seem to make decisions in their environments, as evidenced by parallels between neural recordings and temporal-difference RL algorithms.
What about previous approaches?
Previous approaches relied on handcrafted features. When non-linear function approximators are used for Q, the values become unstable during training. There were other, stable neural-network approaches, like fitted Q-iteration, but they are slow - they don't work for large networks.
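For reference, the update being approximated is the standard Q-learning target, y = r + γ max_a' Q(s', a'). Here is a minimal tabular sketch (a toy 2-state, 2-action setup of my own, not from the paper); deep Q-learning replaces the table with a neural network, which is exactly where the instability mentioned above comes from:

```python
import numpy as np

gamma, alpha = 0.99, 0.5          # discount factor and learning rate (illustrative values)
Q = np.zeros((2, 2))              # Q[state, action] for a toy 2-state, 2-action problem

def q_update(Q, s, a, r, s_next):
    # Bootstrapped target: reward plus discounted value of the best next action
    target = r + gamma * Q[s_next].max()
    # Move Q(s, a) a step toward the target
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
# Q[0, 1] = 0.5 * (1.0 + 0.99 * 0 - 0) = 0.5
```

The DQN paper's contribution is stabilizing this update when Q is a deep network, via experience replay and a periodically-updated target network.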
What are the outcomes?
Tested the method against the best-performing approaches at the time and a professional human games tester. Used 49 different Atari g...