MLBP 8: Uber AI Open Sources Pyro, Probabilistic Deep Learning in Python
Spotlight Machine Learning Articles
Uber AI Labs Open Sources Pyro, A Deep Probabilistic Programming Language Uber AI Labs has open sourced Pyro, its probabilistic programming language. Pyro is a tool for deep probabilistic modeling that unifies modern deep learning and Bayesian modeling.
Machine Learning Blueprint's Take
Deep Learning methods have had such success in recent years that there is a risk of new ML developers moving away from other fundamental techniques in the ML toolbox. Principal among these is probabilistic programming, which brings together variational inference methods for quantifying uncertainty and Bayesian inference. Geometric Intelligence (now Uber AI Labs since its acquisition by Uber one year ago) is open sourcing Pyro in an effort to enable the wider community to marry these two powerful sets of techniques, at scale, in future machine learning development.
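To make the pairing concrete, here is a minimal sketch of what a Pyro model plus variational guide can look like, assuming the pyro-ppl and torch packages are installed; the coin-fairness setup, priors, and learning rate are illustrative choices of ours, not taken from Uber's announcement.

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.distributions import constraints
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

# Illustrative data: 10 coin flips, 7 heads.
data = torch.tensor([1., 1., 1., 1., 1., 1., 1., 0., 0., 0.])

def model(data):
    # Bayesian prior over the coin's fairness.
    fairness = pyro.sample("fairness", dist.Beta(10.0, 10.0))
    # Likelihood of the observed flips.
    with pyro.plate("flips", len(data)):
        pyro.sample("obs", dist.Bernoulli(fairness), obs=data)

def guide(data):
    # Variational posterior over fairness, fit by stochastic variational inference.
    alpha = pyro.param("alpha", torch.tensor(15.0), constraint=constraints.positive)
    beta = pyro.param("beta", torch.tensor(15.0), constraint=constraints.positive)
    pyro.sample("fairness", dist.Beta(alpha, beta))

svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
for step in range(2000):
    svi.step(data)
```

The guide is an ordinary Python function, so the same pattern extends to neural-network guides, which is exactly the deep-plus-Bayesian combination Pyro is aimed at.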
Neural Network Feature Visualization A visual explanation of the components of Deep Learning that uses trained networks to generate images, allowing you to toggle the constraints and see their effect.
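For readers who want to poke at this themselves, below is a hedged PyTorch sketch of the basic activation-maximization loop behind this kind of visualization; the pretrained network, target layer, channel index, and hyperparameters are arbitrary placeholders, and real feature-visualization work adds the regularizers and constraints the article discusses.

```python
import torch
from torchvision import models

# Illustrative: maximize the mean activation of one channel in a pretrained network.
net = models.vgg16(pretrained=True).features.eval()
target_layer, target_channel = 10, 42  # arbitrary choices for illustration

img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    x = img
    for i, layer in enumerate(net):
        x = layer(x)
        if i == target_layer:
            break
    # Gradient ascent on the input: minimize the negative activation.
    loss = -x[0, target_channel].mean()
    loss.backward()
    optimizer.step()
```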
PlaidML: Open Source Deep Learning for Every Platform Vertex announces PlaidML, an open-source Deep Learning engine focused on universal support for GPU hardware and integration software. The initial version of PlaidML runs on most existing PC hardware with OpenCL-capable GPUs from NVIDIA, AMD, or Intel, and offers front-end support for Keras for neural net development on top of PlaidML.
Machine Learning Blueprint's Take
In the past few years we have seen several moves toward making GPU acceleration easier for ML developers and Data Scientists, letting them focus on modeling and development rather than on protracted yak-shaving exercises in GPU integration. Platforms like this stand to bring to Deep Learning development the kind of abstraction that virtualization has provided across so many areas of engineering over the past two decades.
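As a sense of how light the integration is meant to be, here is a minimal sketch of dropping PlaidML in as the Keras backend, assuming compatible versions of the plaidml and plaidml-keras packages are installed; the model itself is a throwaway placeholder.

```python
# Install the PlaidML backend before importing Keras so that
# subsequent Keras calls run on any OpenCL-capable GPU PlaidML finds.
import plaidml.keras
plaidml.keras.install_backend()

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(128, activation="relu", input_shape=(784,)),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5)  # runs on the PlaidML device
```

Everything after `install_backend()` is plain Keras; only the backend selection changes.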
Learning Machine Learning
How to Understand Simpson’s Paradox: An Explanation and Why it Matters A walkthrough of the famous Simpson’s paradox. A must-read for any Data Scientist.
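As a quick, self-contained illustration of the paradox (the counts below follow the classic kidney-stone example, not the linked article): each subgroup favors treatment A, yet the pooled totals favor treatment B.

```python
# (successes, trials) for two treatments, split by case severity.
data = {
    "mild":   {"A": (81, 87),   "B": (234, 270)},
    "severe": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Within each severity group, treatment A has the higher success rate...
for group, treatments in data.items():
    for t, (s, n) in treatments.items():
        print(f"{group:7s} {t}: {rate(s, n):.0%}")

# ...but pooled over groups the ordering reverses.
for t in ("A", "B"):
    s = sum(data[g][t][0] for g in data)
    n = sum(data[g][t][1] for g in data)
    print(f"pooled  {t}: {rate(s, n):.0%}")
```

The reversal comes from the unequal group sizes: A was applied mostly to the hard cases, which drags its pooled rate down.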
Time to Learn about ResNets? How ResNets help solve the vanishing gradient problem and why that's important to Deep Learning.
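The core idea is small enough to show directly; here is a minimal PyTorch sketch of a basic residual block (channel count and layer choices are placeholders).

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = relu(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The identity shortcut gives gradients a direct path backward,
        # which is what mitigates vanishing gradients in very deep nets.
        return self.relu(out + x)

block = ResidualBlock(64)
y = block(torch.randn(1, 64, 32, 32))
```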
Ian Goodfellow Discusses Numerical Conditioning Considerations While Training Deep Networks Numerical conditioning can be an overlooked but critical aspect of implementing robust machine learning algorithms; this video dives into how the mathematics actually behaves on machine hardware.
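As one concrete flavor of the problems the talk covers, the numpy sketch below contrasts a naive softmax, which overflows for large logits, with the standard max-shifted version; this specific example is ours, not taken from the video.

```python
import numpy as np

logits = np.array([1000.0, 1001.0, 1002.0])

# Naive softmax overflows: exp(1000) is inf in float64, so the result is NaN.
naive = np.exp(logits) / np.exp(logits).sum()

# Shifting by the max is mathematically a no-op for softmax,
# but it keeps every exponent <= 0, so nothing overflows.
shifted = logits - logits.max()
stable = np.exp(shifted) / np.exp(shifted).sum()

print(naive, stable)
```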
The Often-Overlooked Random Forest Kernel
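The kernel in question measures how often a fitted forest sends two points to the same leaf; a hedged scikit-learn sketch of that computation follows (the synthetic dataset and forest settings are arbitrary, and this is a didactic approximation rather than any particular paper's exact formulation).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# apply() returns, for every sample, the index of the leaf it lands in per tree.
leaves = forest.apply(X)                 # shape: (n_samples, n_trees)

# Kernel entry K[i, j] = fraction of trees in which samples i and j share a leaf.
K = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=-1)
print(K.shape, K[0, :5])
```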
Gaussian Distributions are Soap Bubbles: Understanding Distributions in High-Dimensions
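The "soap bubble" picture is that in high dimensions nearly all of a standard Gaussian's mass sits in a thin shell of radius about sqrt(d), not near the mode; a quick numpy check (sample counts and dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 100, 10000):
    samples = rng.standard_normal((5000, d))
    norms = np.linalg.norm(samples, axis=1)
    # The mean radius tracks sqrt(d) while the spread stays O(1),
    # i.e. the mass lives in an ever-thinner relative shell.
    print(d, norms.mean() / np.sqrt(d), norms.std())
```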
How do CNNs Deal with Position Differences?
Interesting Research
Accountability of AI Under the Law: The Role of Explanation A response to the EU’s new regulations requiring explanation of algorithm-generated decisions that impact consumers. This paper explores the accountability of machine learning algorithms, first defining the circumstances that justify an explanation of an outcome and what a reasonable explanation would entail. The authors then explore technical means of deriving these explanations from algorithms without sacrificing performance. They point out the importance of judiciously balancing explainability, complexity, and performance in order to encourage innovation while protecting consumer fairness and safety.
[1705.06640] DeepXplore: Automated Whitebox Testing of Deep Learning Systems As neural networks make their way into increasingly safety-critical environments, robustness and reliability become imperative. This research team presents a new system, modeled on code coverage in traditional software engineering, that discovers, triggers, and labels different types of erroneous behavior in deep learning systems. While traditional adversarial inputs may trigger only a small portion of corner cases, the team uses a joint-optimization approach to generate test inputs that maximize both differential behavior across deep learning systems and “neuron coverage” (the ratio of activated neurons to total neurons for a set of inputs). DeepXplore discovers previously unfound corner cases and improves classification accuracy by 3% when the system under test is retrained with the generated data.
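Since neuron coverage is the paper's key metric, here is a hedged PyTorch sketch of computing it with forward hooks; the toy model, activation threshold, and random inputs are placeholders, and this illustrates the definition rather than the authors' implementation.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                      nn.Linear(64, 32), nn.ReLU(),
                      nn.Linear(32, 10))
threshold = 0.0       # placeholder activation threshold
activated = {}

def make_hook(name):
    def hook(module, inputs, output):
        # A neuron counts as covered if it fires above the threshold for any input.
        fired = (output > threshold).any(dim=0)
        activated[name] = activated.get(name, torch.zeros_like(fired)) | fired
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.ReLU):
        module.register_forward_hook(make_hook(name))

with torch.no_grad():
    model(torch.randn(128, 20))   # placeholder test inputs

covered = sum(v.sum().item() for v in activated.values())
total = sum(v.numel() for v in activated.values())
print(f"neuron coverage: {covered / total:.2%}")
```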
[1710.10777] Understanding Hidden Memories of Recurrent Neural Networks
[1703.09039] Efficient Processing of Deep Neural Networks: A Tutorial and Survey
Machine Learning News Links
Salesforce’s Einstein Labs Creates Methodology for Generating Entire Sections of Text Rather than Single Words A new method for deep sequence-to-sequence output generation, such as machine translation, speeds up the process by computing the entire output in parallel.
Announcement of TensorFlow Lite. Google announces the developer preview of TensorFlow Lite for mobile and embedded devices. TF Lite is intended to be the successor to TF Mobile, enabling efficient execution of machine learning models on both mobile and embedded devices.
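For orientation, here is a hedged sketch of the central conversion step, written against the current tf.lite.TFLiteConverter API rather than the original developer-preview tooling; the Keras model is just a stand-in for a trained network.

```python
import tensorflow as tf

# Placeholder Keras model standing in for a trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Convert to the TensorFlow Lite flatbuffer format used by mobile/embedded runtimes.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```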
Tangent: Source-to-Source Debuggable Derivatives. Google’s new tool for automatic differentiation gives users much better visibility into gradient computations, along with easy user-level editing and debugging of gradients, with portability to TensorFlow and other DL frameworks.
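Based on the project's published examples, usage looks roughly like the sketch below; the function being differentiated is our own illustration.

```python
import tangent

def f(x):
    return x * x + 3.0 * x

# tangent.grad generates a new, readable Python function computing df/dx;
# because it is ordinary source code, it can be inspected, edited, and debugged.
df = tangent.grad(f)
print(df(2.0))   # derivative 2*x + 3 at x = 2 -> 7.0
```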
Bitcoin Mining ASIC Producer BITMAIN Launches SOPHON Tensor Processors and AI Solutions
More Evidence Humans and Machines Perform Better Together
A New Chatbot from Andrew Ng Helps with Depression
Python NLP Package spaCy Gets a Big Overhaul with Deep Learning Tools
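For those who have not tried it since the overhaul, a minimal sketch of the spaCy 2.0 pipeline, which now ships neural models for tagging, parsing, and NER; it assumes the small English model has been downloaded (python -m spacy download en_core_web_sm), and the sentence is just an example.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Uber AI Labs open sourced Pyro in San Francisco.")

# Part-of-speech tags and dependency labels for the first few tokens.
for token in doc[:5]:
    print(token.text, token.pos_, token.dep_)

# Named entities found by the neural NER model.
for ent in doc.ents:
    print(ent.text, ent.label_)
```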