AI/Machine Learning in 2017 - An apex capability perspective
AI/Machine Learning, from a business development perspective, can be split into apex capabilities (think of these as the capabilities being explored by a few pioneering prospectors) and the business-scalable, here-and-now capabilities applicable to enterprise and industry verticals in 2018-2019. The rapid democratization of AI has brought deep learning to the business-scalable side. Enterprises, however, can derive tactical value by clearly understanding the art of the plausible and discerning the gap between the plausible and the possible.
My day job sits on the business-scalable side of AI/ML, but I have to stay constantly tuned to the apex side. I wanted to briefly share the top five AI/ML apex capabilities of 2017 that I found interesting - albeit ones that, IMHO, will require a few more iterations before becoming business scalable.
(BTW, what is expressed by me here is of my own interest and is in no way reflective of my employer).
1) Facebook's PyTorch was a valuable 2017 addition to the natural language processing practitioner's tool chest for dealing with dynamic and recurrent structures that are hard to declare in static graph frameworks such as TensorFlow.
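To illustrate why define-by-run matters, here is a minimal numpy sketch (my own toy example, not PyTorch code) of a recursive tree-structured network: the chain of matrix operations depends on each input's parse tree at run time, which is awkward to express in a static graph but natural in a define-by-run framework.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4
W = rng.standard_normal((D, 2 * D)) * 0.1   # composition weights
E = rng.standard_normal((10, D)) * 0.1      # toy word embeddings (vocab = 10)

def encode(tree):
    """tree is either a word id (leaf) or a (left, right) pair of subtrees."""
    if isinstance(tree, int):
        return E[tree]
    left, right = tree
    # The number and order of these matrix ops is decided by the input itself.
    return np.tanh(W @ np.concatenate([encode(left), encode(right)]))

# Two inputs with different shapes imply two different computation graphs.
v1 = encode((1, (2, 3)))
v2 = encode(((0, 4), ((5, 6), 7)))
```

A static-graph framework would force you to pad, bucket, or unroll such structures in advance; define-by-run simply follows the Python recursion.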
2) 2017 saw the introduction of machine translation built on the attention mechanism alone ("Attention Is All You Need"), dispensing with recurrence and convolutions entirely and allowing a significant reduction in training costs.
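The core of that mechanism, scaled dot-product attention, fits in a few lines of numpy. This is a toy illustration of the central computation, not the full Transformer:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V, the heart of the Transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over the keys
    return weights @ V, weights                    # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))   # 2 queries, d_k = 4
K = rng.standard_normal((3, 4))   # 3 keys
V = rng.standard_normal((3, 4))   # 3 values
out, w = scaled_dot_product_attention(Q, K, V)
```

Every output position attends to every input position in one matrix multiply, which is what removes the need for step-by-step recurrence.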
3) Some of the most interesting apex capability explorations in 2017 were tied to practices associated with Reinforcement Learning.
2017 saw the emergence of evolutionary algorithms from Uber's AI Labs as an alternative to stochastic gradient descent for training deep neural networks on reinforcement learning (RL) problems.
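A minimal sketch of the gradient-free idea - a generic truncation-selection genetic algorithm on a toy objective, standing in for an RL episode return; this is my own simplified illustration, not Uber's exact deep-neuroevolution setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):
    # Toy stand-in for an RL episode return: maximized at theta = (1, -2, 3).
    return -np.sum((theta - np.array([1.0, -2.0, 3.0])) ** 2)

pop_size, n_elite, sigma = 50, 10, 0.3
population = [rng.standard_normal(3) for _ in range(pop_size)]

for _ in range(200):
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[:n_elite]                      # truncation selection
    children = [elite[rng.integers(n_elite)] + sigma * rng.standard_normal(3)
                for _ in range(pop_size - n_elite)]
    population = elite + children                 # elitism + mutation, no gradients

best = max(population, key=fitness)
```

No gradient is ever computed: selection and Gaussian mutation alone climb the fitness landscape, which is why the approach parallelizes so well for RL, where episode returns are the only signal.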
2017 was also the year of "regret minimization"! On the heels of DeepStack (the first AI/ML system capable of beating professional poker players at heads-up no-limit Texas hold'em) came Libratus. Libratus introduced AI/ML practitioners to an algorithm called counterfactual regret minimization: initially playing at random and, after several months of training and trillions of hands of poker, reaching a level where it beat the best humans and synthesized play patterns unknown even to the experts.
As Michael Johanson intuitively explains, the way a regret-minimization algorithm improves over time
"is by summing the total amount of regret it has for each action at each decision point, where regret means: how much better would I have done over all the games so far if I had just always played this one action at this decision, instead of choosing whatever mixture over actions that my strategy said I should use? Positive regret means that we would have done better if we had taken that action more often. Negative regret means that we would have done better by not taking that action at all. After each game that the program plays against itself, it computes and adds in the new regret values for all of its decisions it just made. It then recomputes its strategy so that it takes actions with probabilities proportional to their positive regret. If an action would have been good in the past, then it will choose it more often in the future."
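Johanson's description maps almost line for line onto code. Here is a toy regret-matching sketch - my own illustration, far simpler than Libratus's full counterfactual regret minimization - that learns to exploit a fixed, rock-heavy rock-paper-scissors opponent:

```python
import numpy as np

# Rows/cols: rock, paper, scissors. payoff[a, b] = our utility playing a vs b.
payoff = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)

opponent = np.array([0.5, 0.3, 0.2])   # a fixed opponent who over-plays rock

def strategy_from_regrets(regret_sum):
    """Play each action with probability proportional to its positive regret."""
    pos = np.maximum(regret_sum, 0.0)
    total = pos.sum()
    return pos / total if total > 0 else np.full(3, 1 / 3)

regret_sum = np.zeros(3)
strategy_sum = np.zeros(3)

for _ in range(1000):
    s = strategy_from_regrets(regret_sum)
    strategy_sum += s
    action_values = payoff @ opponent        # expected payoff of each pure action
    expected = s @ action_values             # what our current mixture earns
    regret_sum += action_values - expected   # "how much better would action a have done?"

avg_strategy = strategy_sum / strategy_sum.sum()
```

Exactly as in the quote: paper accumulates positive regret against a rock-heavy opponent, so the average strategy shifts almost entirely onto paper.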
Notable in 2017 was the refinement of reinforcement learning, through general-purpose methods, from games of self-play tabula rasa - i.e., in the absence of preconceived ideas or predetermined goals - with AlphaZero not only mastering Go but also chess and shogi (David Silver et al., Dec. 2017 paper).
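To make the tabula rasa self-play idea concrete, here is a toy sketch - my own illustration, nowhere near AlphaZero's scale or its search-plus-network architecture - in which a single tabular policy learns the take-away game Nim purely by playing against itself, starting from zero knowledge:

```python
import random

random.seed(0)
N = 12  # stones; players alternate taking 1 or 2; taking the last stone wins

# Tabula rasa: every state-action value starts at zero.
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in (1, 2) if a <= s}

def greedy(s):
    return max((a for a in (1, 2) if a <= s), key=lambda a: Q[(s, a)])

for episode in range(20000):
    s, moves = N, []
    while s > 0:  # both sides use the same policy: pure self-play
        legal = [a for a in (1, 2) if a <= s]
        a = random.choice(legal) if random.random() < 0.2 else greedy(s)
        moves.append((s, a))
        s -= a
    ret = 1.0  # the player who took the last stone wins
    for s, a in reversed(moves):
        Q[(s, a)] += 0.1 * (ret - Q[(s, a)])  # Monte Carlo value update
        ret = -ret                            # alternate perspective each ply
```

With no rules of thumb supplied, self-play alone discovers the known optimal strategy: leave the opponent a multiple of three stones.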
4) 2017 saw the continuing exploration of generative adversarial networks (GANs) for art.
5) In the application apex capability space here are the top 3 I found interesting:
- Unsupervised sentiment - a system that learns an excellent representation of sentiment despite being trained only to predict the next character
- Virtual agents developing their own language to communicate - think, for example, of the myriad industrial protocols that need cross-translation bridging to make IoT work
- Training autonomous driving systems
Where do you see AI/Machine Learning in the ongoing evolution of apex capabilities? Do share your thoughts - drop me a note privately or via the comment section below.
About the Author:
Madhu cherishes the opportunity to learn and collaborate; he has three decades of experience nurturing the emergence of beachhead market ideations worldwide.
P.S. For the NIPS aficionados exploring domain-specific apex capabilities: over the last week I had a chance to catch up on David Abel's (https://www.dhirubhai.net/in/davidmabel/) NIPS 2017 notes - definitely worth a look > https://cs.brown.edu/~dabel/blog/posts/misc/nips_2017.pdf