Survey:
- When Will AI Exceed Human Performance? Evidence from AI Experts (link) is a survey asking AI experts to predict when various technologies with high societal impact will become available, such as translation (2024), driving a truck (2027), working in retail (2031), and working as a surgeon (2053). Interestingly, Asian experts expect these milestones to arrive sooner than their North American counterparts do.
Articles and News:
- DeepMind's AlphaGo won 3-0 against Ke Jie, the current world #1 Go player. DeepMind is now concluding its foray into Go, and as a final step will publish technical details along with Go strategy insights gleaned from studying AlphaGo's play. Read more here: AlphaGo's next move
- Google published a cool article on The Machine Intelligence Behind Gboard
- Andrej Karpathy analyzed the institution statistics of ICML accepted papers. Alphabet Inc. tops the list with 60+ papers, followed by Microsoft with 33. Interestingly, although the single biggest contributors are industry labs, academia still contributes 75% of the research papers.
Datasets:
- DeepMind released The Kinetics Human Action Video Dataset
Projects:
- OpenAI is implementing a large set of baseline reinforcement learning models for easier comparison, starting with DQN; a toy sketch of the core DQN update appears after this list. You can read the announcement here and check out the code here.
- I also came across a couple of very interesting GitHub projects that I'd definitely recommend checking out.
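Since DQN is the first baseline being released, here is a minimal sketch of its core idea. This is my own toy tabular stand-in, not OpenAI's Baselines code: the online Q function is trained toward targets computed from a periodically frozen copy of itself, and the toy environment and exploration scheme here are made up for illustration.

```python
import numpy as np

# Hypothetical toy stand-in (not the Baselines code): tabular Q-learning
# with DQN's target-network trick, y = r + gamma * max_a' Q_target(s', a').
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
gamma, lr = 0.99, 0.1

q = np.zeros((n_states, n_actions))   # "online" Q function (a table here)
q_target = q.copy()                   # frozen copy used for bootstrap targets

for step in range(1000):
    s = rng.integers(n_states)        # toy random environment, for brevity
    a = rng.integers(n_actions)       # random exploration, for brevity
    s_next = rng.integers(n_states)
    r = float(s_next == n_states - 1) # reward only for reaching the last state

    y = r + gamma * q_target[s_next].max()  # target from the frozen copy
    q[s, a] += lr * (y - q[s, a])           # move Q(s, a) toward the target

    if step % 100 == 0:               # periodically sync the target network
        q_target = q.copy()

print(q)
```

In the real DQN the table is replaced by a neural network and transitions come from a replay buffer, but the target-network bootstrap shown here is the same.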
Tutorials:
- Lecture slides from Prof. Fei-Fei Li's class, comparing TensorFlow with PyTorch
- A new PyTorch tutorial that teaches the framework through worked examples; a minimal example in that spirit follows below.
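For flavor, here is a minimal sketch in that learn-by-example style. It is my own toy example, not taken from the tutorial: fit y = 2x with a single linear layer and plain SGD.

```python
import torch

# My own toy example (not from the tutorial): fit y = 2x with one linear layer.
torch.manual_seed(0)
x = torch.randn(64, 1)
y = 2 * x

model = torch.nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(200):
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()   # clear gradients accumulated in the previous step
    loss.backward()   # autograd computes d(loss)/d(parameter)
    opt.step()        # apply the gradient descent update

print(model.weight.item())  # should be close to 2.0
```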
Research:
- We have two very interesting papers on the optimization of deep learning models. The first finds that adaptive gradient methods (such as AdaGrad) generalize significantly worse than SGD even when their training errors are similar, and asks the DL community to re-evaluate its reliance on them: [1705.08292v1] The Marginal Value of Adaptive Gradient Methods in Machine Learning. (A sketch of the adaptive update the paper critiques appears below.) The second paper focuses on generalization with respect to batch size: it studies methods that avoid the poor generalization of large-batch training while still minimizing the total number of updates: [1705.08741v1] Train longer, generalize better: closing the generalization gap in large batch training of neural networks.
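To make "adaptive" concrete, here is a sketch of the textbook AdaGrad update alongside plain SGD. This is my own illustration, not code from either paper: AdaGrad accumulates squared gradients per coordinate and shrinks the effective step size accordingly, and the toy quadratic loss is made up for the example.

```python
import numpy as np

# Textbook AdaGrad update (my illustration, not from the paper): g_sum
# accumulates squared gradients, so frequently-updated coordinates get
# smaller effective learning rates.

def adagrad_step(w, grad, g_sum, lr=0.1, eps=1e-8):
    g_sum += grad ** 2                       # running sum of squared gradients
    w -= lr * grad / (np.sqrt(g_sum) + eps)  # per-coordinate step size
    return w, g_sum

def sgd_step(w, grad, lr=0.1):
    return w - lr * grad                     # one step size for all coordinates

# Toy quadratic loss 0.5 * ||w - 1||^2, whose gradient is (w - 1)
w_ada, g_sum = np.zeros(3), np.zeros(3)
w_sgd = np.zeros(3)
for _ in range(100):
    w_ada, g_sum = adagrad_step(w_ada, w_ada - 1.0, g_sum)
    w_sgd = sgd_step(w_sgd, w_sgd - 1.0)

print(w_ada, w_sgd)  # both approach [1, 1, 1], via different trajectories
```

The paper's point is that this per-coordinate rescaling, while often faster on the training loss, can steer the solution toward minima that generalize worse than the ones plain SGD finds.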
Till next time. :)