Deep Learning Weekly Issue #2
If you prefer to get this straight to your inbox every week, drop your email address here.
Welcome to the second issue of Deep Learning Weekly.
A lot has happened this week to pad my reading list. Above all, I was delighted by the awesome tutorials on convolutional and recurrent neural networks.
I also dug a little deeper this week after Yann LeCun mentioned adversarial networks during his Quora session last week, and I have included a paper on the topic in this issue. I hope you find some nuggets to enjoy.
I’d appreciate any feedback, input, or questions you might have. Of course, sharing this newsletter on Twitter or Facebook goes a long way to support it and would oblige me greatly.
Industry Trends
- Why Intel Bought Artificial Intelligence Startup Nervana Systems
This is hardly news at this point. One can imagine multiple uses, from deployment in Intel’s data centers to a general offering of a machine learning platform.
- The healing power of AI | TechCrunch
The most interesting point in this TechCrunch piece is that the very thing that makes deep learning work, namely the absence of an explicitly specified model, may also impede its success in highly regulated fields such as medicine, where its black-box nature clashes with requirements of explicitness.
Learning
- An Intuitive Explanation of Convolutional Neural Networks
The best explanation of convolutional neural nets I have come across. For anyone unfamiliar with the topic, start here.
- Recurrent Neural Networks for Beginners — Medium
In a similar vein, this is a solid beginner tutorial on recurrent neural networks.
- Trends in Neural Machine Translation
An informative overview of developments in the field of neural machine translation.
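For readers working through the CNN tutorial above, the core operation is a small kernel slid across an image, producing a weighted sum at every position. A minimal NumPy sketch of that idea (the toy image and edge-detecting kernel are illustrative assumptions; real libraries implement this far more efficiently):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution as used in CNNs (technically
    cross-correlation: the kernel is not flipped)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # weighted sum of the patch under the kernel
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark left half, bright right half
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
# Kernel that responds to vertical edges
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)
print(conv2d(image, kernel))  # strongest response at the dark/bright boundary
```

A convolutional layer learns the kernel weights instead of hand-coding them, and stacks many such feature maps.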
Interviews and Q&As
- François Chollet - Session on Aug 15, 2016 - Quora
François is a deep learning researcher at Google and the developer of the Keras deep learning library.
- RE•WORK Interview with Yoshua Bengio - Deep Learning Summit, Boston, 2016
An interesting interview with Yoshua Bengio, head of the Montreal Institute for Learning Algorithms, at the Boston 2016 Deep Learning Summit.
- AMA: We are the Google Brain team. We'd love to answer your questions about machine learning. : MachineLearning
A Reddit AMA with the Google Brain team, featuring insight-generating questions on the most underrated developments in deep learning, plus interesting background on the team members, whose pasts range from carpentry (seriously) and creative writing to neuroscience and psychology.
Libraries & Code
Papers & Publications
- Residual Networks of Residual Networks: Multilevel Residual Networks
This paper proposes a novel residual-network architecture, Residual Networks of Residual Networks (RoR), to further exploit the optimization ability of residual networks.
- Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
An older paper that deserves renewed attention. It introduces deep convolutional generative adversarial networks (DCGANs), which Yann LeCun mentioned as the most promising research area in deep learning. Most excitingly, this technique opens up the opportunity for unsupervised learning, hopefully reducing the need for laboriously annotated data sets.
- Convolutional Neural Fabrics
Despite the success of convolutional neural networks, selecting the optimal architecture for a given task remains an open problem. Instead of aiming to select a single optimal architecture, the authors propose a “fabric” that embeds an exponentially large number of CNN architectures.
- A Convolutional Neural Network Neutrino Event Classifier
For physics nerds. There is a good summary of core CNN concepts at the beginning of the paper as well.
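The RoR paper builds on the basic residual idea: a block outputs its learned transformation plus the unchanged input, y = F(x) + x, so gradients can flow through the identity shortcut. A minimal NumPy sketch of a single fully-connected residual block (not the paper's convolutional architecture; the weight shapes and small random initialization are illustrative assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Basic residual block: y = relu(F(x) + x), where the
    skip connection passes x through unchanged."""
    f = relu(x @ w1) @ w2   # F(x): two linear layers with a ReLU between
    return relu(f + x)      # add the identity shortcut, then activate

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                 # toy 4-dimensional input
w1 = rng.standard_normal((4, 4)) * 0.01    # near-zero weights, so the
w2 = rng.standard_normal((4, 4)) * 0.01    # block starts close to identity
print(residual_block(x, w1, w2))
```

With near-zero weights the block reduces to roughly relu(x), which is why very deep stacks of such blocks remain trainable; RoR adds shortcuts across groups of blocks as well.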