Tutorials have been chosen to maximize learning per unit of time, i.e. to teach you the most in the shortest amount of time. They cover topics from basic deep learning all the way to research published within the last year, spanning significantly more material than a typical deep learning course while taking less time.
Expected time to completion: 4-6 weeks (20-30 sessions)
This list is a collection of summaries of top research papers in Deep Learning. Many of us have read some of these papers but not the others; this list tracker and the available summaries help you get to the rest of them.
Expected time to completion: 2-3 weeks (10-15 sessions)
Hi, I am Shagun Sodhani, a computer science graduate from the Indian Institute of Technology (IIT), Roorkee. Presently, I am working with the Analytics and Data Science team at Adobe Systems. In this role, I actively contribute to solving novel problems in the domains of Machine Learning and Natural Language Processing and to developing valuable solutions for Adobe. Recently, I won the Outstanding Young Engineers Award at Adobe Systems.
Along with my full-time commitments at Adobe, I have worked as a teaching assistant with Databricks for the Data Science and Engineering with Apache® Spark™ MOOC series. The course was designed by faculty from UC Berkeley, UC Los Angeles and Databricks and was offered on the edX platform. I regularly attend tech talks and meetups as well, and I have delivered talks and workshops at events like PyCon India 2016 and the Big Data Training Program at IIT Roorkee (organised by the Dept. of Science and Technology, Govt. of India). Since August 2015, I have committed myself to reading and summarising one research paper every week, which has helped me develop a good understanding of Machine Learning and related domains.
Moderator note: We are very excited to have Shagun d...
This paper reports on a series of experiments with CNNs trained on top of pre-trained word vectors for sentence-level classification tasks. The model achieves very good performance across datasets, and state-of-the-art results on a few. The proposed model has an input layer of concatenated 'word2vec' embeddings, followed by a single convolutional layer with multiple filters, max-pooling over time, a fully connected layer and softmax. They also experiment with static and non-static channels, which determine whether the word2vec embeddings are fine-tuned during training or kept fixed.
Very simple yet powerful model formulation, which achieves really good performance across datasets.
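To make the architecture concrete, here is a minimal PyTorch sketch of the model described above. The filter widths, number of filters, and dropout rate are illustrative choices rather than values taken from the paper, and `embedding_matrix` is assumed to already hold pre-trained word2vec vectors:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceCNN(nn.Module):
    """Sketch: CNN over pre-trained word embeddings for sentence classification
    (single conv layer, multiple filter widths, max-over-time pooling, softmax)."""

    def __init__(self, embedding_matrix, num_classes,
                 filter_widths=(3, 4, 5), num_filters=100, freeze=True):
        super().__init__()
        # "static" channel: embeddings frozen; "non-static": fine-tuned
        self.embedding = nn.Embedding.from_pretrained(
            torch.as_tensor(embedding_matrix, dtype=torch.float), freeze=freeze)
        emb_dim = self.embedding.embedding_dim
        # one 1-D convolution per filter width, each producing num_filters feature maps
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, w) for w in filter_widths])
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(num_filters * len(filter_widths), num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        x = self.embedding(token_ids)              # (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)                      # (batch, emb_dim, seq_len)
        # convolve, then max-pool over time for each filter width
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = torch.cat(pooled, dim=1)        # (batch, num_filters * len(widths))
        logits = self.fc(self.dropout(features))   # softmax is applied inside the loss
        return logits
```

Training would pair the returned logits with `nn.CrossEntropyLoss`, which applies the softmax internally; swapping `freeze` between `True` and `False` reproduces the static versus non-static setup.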
Jeff Dean is a Google Senior Fellow in the Research Group, where he leads the Google Brain project. He spoke to the YC AI group this summer. The talk is very interesting for anyone looking for a high-level overview of the current state of the art in Machine Learning.
My favorite part of the talk: The section on Learning to Learn.