Go for the Intelligent Enterprise
A historic event unfolded in March 2016. The victory of the program AlphaGo over professional Go player Lee Sedol in the Google DeepMind Challenge Match demonstrated how far artificial intelligence (AI) has come: “Go's simple rules and elaborate possibilities have made it one of the most sought-after milestones in the field of AI research,” writes Sam Byford of The Verge. The idea of computers learning autonomously has been around for decades. So what has changed? Why has machine learning gained so much ground in recent years?
Why Machine Learning Is Possible Now
Increased computing power has finally made machine learning practical. Driven by the gaming industry, graphics processing units (GPUs) have dramatically improved performance for the parallel computation of simple operations on which deep learning algorithms rely. Together with the wide adoption of multi-core architectures and in-memory databases, this has paved the way for extremely efficient implementations of machine learning algorithms.
Another enabler is big data. Enormous data sets from a variety of sources (e.g. text, images, geospatial data) provide the raw material for training machines and allowing them to learn.
Machine Learning Process
Take Facebook as an example. By letting users tag individual faces in pictures with names, Facebook has built the largest database of faces in the world, which it can use to train machines in visual recognition. The more data the machine gets, the better it becomes at recognizing faces.
Moreover, basic research in machine learning has led to more sophisticated learning algorithms and a better understanding of the principles of learning itself. Fundamental algorithms, such as artificial neural networks, mimic the human brain: one can imagine a network of neuron-like units whose connections resemble the brain's synapses. These networks can learn complex, non-linear structures in the input data and give machines capabilities such as seeing, reading, writing, listening, and talking. This is done by applying supervised learning techniques during the training phase.
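For the technically curious, here is a minimal sketch of supervised learning with a small neural network, using the open-source scikit-learn library and its built-in handwritten-digit images as a stand-in for "seeing." The layer size and other settings are illustrative choices, not a recipe.

```python
# A minimal sketch of supervised learning with an artificial neural network,
# using scikit-learn's digits dataset as a stand-in for "seeing."
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42)

# One hidden layer of neuron-like units; the weights are adjusted from labeled examples.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=42)
model.fit(X_train, y_train)  # the supervised "training phase"

print("accuracy on unseen images:", model.score(X_test, y_test))
```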
Reinforcement learning (RL) extends supervised learning by modeling actions and feedback (i.e. reward or punishment) between the learning algorithm and its environment, stretching the spectrum of these abilities to complex tasks such as driving a car or playing Go. On the basis of such algorithms, machines can be trained to interpret extraordinarily sophisticated situations in the future.
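To make the action-and-feedback loop concrete, below is a toy Q-learning sketch on a five-cell corridor. The environment, rewards, and parameters are invented for illustration; real systems like AlphaGo combine RL with deep networks and tree search.

```python
# Toy reinforcement learning: tabular Q-learning on a 5-cell corridor.
# The agent starts at cell 0 and receives a reward for reaching cell 4.
import random

n_states, actions = 5, [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon:
            a = random.choice(actions)   # explore the environment
        else:                            # exploit what was learned (random tie-breaking)
            a = max(actions, key=lambda x: (Q[(s, x)], random.random()))
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0   # feedback from the environment
        # Q-learning update: current reward plus discounted estimate of future value
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])
        s = s_next

print("learned policy:",
      ["right" if Q[(s, 1)] >= Q[(s, -1)] else "left" for s in range(n_states - 1)])
```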
Lastly, machine learning is becoming mainstream because it's easier to apply than ever before. This is due to the vast number of free, high-quality, open-source software packages that make machine learning accessible to a large audience of data scientists and developers. The same is true for open-access online resources, such as massive open online courses (MOOCs), books, and blogs about machine learning.
Machine Learning in Business Software
Go-playing computers and social-media face recognition are harbingers of a fundamental shift in consumer and business software. Tractica forecasts: “the market for AI systems for enterprise applications will increase from $202.5 million in 2015 to $11.1 billion by 2024, expanding at a compound annual growth rate of 56.1%.” Soon, machine learning will be an integral part of enterprise solutions – making machines our digital co-workers.
Machines can already “see,” meaning they recognize objects, such as products in images and videos. Imagine what this means to pharmaceutical companies. They need to ensure that certain chemicals aren't stored too close to each other, to prevent dangerous reactions. With the help of a mobile app, warehouse workers can take photos with their smartphones and, thanks to the machine's image recognition capabilities, get instant feedback from the integrated ERP system on whether an item is stored correctly.
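A hypothetical sketch of that warehouse check might look like the following: an open-source pretrained image classifier (here torchvision's ResNet-18, my own assumption) labels the photo, and a simple rule table stands in for the ERP system's placement rules. None of this reflects an actual SAP application.

```python
# Hypothetical sketch: classify a shelf photo with a pretrained open-source model,
# then check the recognized item against invented placement rules.
import torch
from torchvision import models
from torchvision.models import ResNet18_Weights
from PIL import Image

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

def recognize(photo_path: str) -> str:
    """Return the most likely object label for a smartphone photo."""
    image = preprocess(Image.open(photo_path)).unsqueeze(0)
    with torch.no_grad():
        scores = model(image)[0]
    return weights.meta["categories"][scores.argmax().item()]

# Placement rules the ERP system might enforce (purely invented example).
INCOMPATIBLE = {("bleach", "ammonia"), ("acid", "base")}

def storage_ok(item: str, neighbours: list[str]) -> bool:
    """True if the recognized item may sit next to its current neighbours."""
    return all((item, n) not in INCOMPATIBLE and (n, item) not in INCOMPATIBLE
               for n in neighbours)
```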
Machines can also “read” and “understand” text. In the realm of human resources, for instance, recruiters spend up to 60% of their time shortlisting candidates for a specific job. Machine learning can support recruiters with automatic CV matching to identify the best candidates for a job, or the best job for a promising applicant. This enables recruiters to spend more time interviewing candidates rather than manually sifting through thousands of CVs.
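As a rough illustration of CV matching, the sketch below ranks a few invented CVs against a job description using TF-IDF vectors and cosine similarity from scikit-learn. A production recruiting system would rely on much richer models and structured HR data.

```python
# Minimal sketch of CV-to-job matching with open-source tools:
# represent the job posting and each CV as TF-IDF vectors, rank by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job = "Data scientist with Python, machine learning and SQL experience"
cvs = [
    "Five years of Python and machine learning, strong SQL background",
    "Experienced graphic designer, expert in Photoshop and branding",
    "Statistician with R, some Python, predictive modelling projects",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job] + cvs)        # row 0 is the job posting
scores = cosine_similarity(matrix[0], matrix[1:])[0]  # similarity of each CV to the job

for score, cv in sorted(zip(scores, cvs), reverse=True):
    print(f"{score:.2f}  {cv}")
```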
Machines can “write.” They can analyze structured and unstructured contextual information and automatically generate reports. Take the insurance sector. Instead of agents screening every claim, machines can make a preliminary decision in simple insurance cases and prepare a response letter. This allows insurance companies to process claims more quickly and increase productivity.
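Here is a deliberately simple sketch of that idea: a tiny text classifier trained on a handful of invented claims makes the preliminary decision, and a template turns it into a draft letter. Nothing here reflects a real insurer's rules or data.

```python
# Hypothetical sketch of automated claim handling: a small text classifier
# triages claims, and a template produces a draft response letter.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled history: 1 = approve automatically, 0 = route to an agent.
claims = [
    "cracked smartphone screen, repair receipt attached",
    "lost luggage on connecting flight, itemised list attached",
    "house fire, total loss, police report pending",
    "disputed liability after multi-car collision with injuries",
]
labels = [1, 1, 0, 0]

triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(claims, labels)

def draft_letter(claim_text: str, customer: str) -> str:
    """Make a preliminary decision and prepare a draft response."""
    if triage.predict([claim_text])[0] == 1:
        return (f"Dear {customer},\n\nYour claim has been provisionally approved. "
                "Payment will follow within five working days.")
    return (f"Dear {customer},\n\nYour claim has been forwarded to one of our "
            "agents, who will contact you shortly.")

print(draft_letter("broken phone screen, receipt attached", "Ms. Smith"))
```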
Finally, machines can “listen” and “talk.” Systems can analyze the human voice along roughly 40 dimensions, such as pace, volume, and monotony. This is key for customer service. Imagine a call in which a customer talks to a chatbot and gradually gets angrier. By collecting voice data in different contexts, the machine can improve its cognitive ability to monitor the tone of a conversation and, more importantly, to route the case to a call-center agent when the problem becomes too complex.
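As a hypothetical sketch, the snippet below extracts 40 acoustic features per audio frame with the open-source librosa library and applies an invented escalation rule. The anger-scoring model itself is assumed to have been trained on labelled calls and is not shown.

```python
# Hypothetical sketch of tone monitoring on a call: extract ~40 acoustic
# features per frame and decide when to hand the call to a human agent.
import numpy as np
import librosa

def frame_features(audio_path: str) -> np.ndarray:
    """Return a (frames, 40) matrix of acoustic features for one call segment."""
    y, sr = librosa.load(audio_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)  # 40 coefficients per frame
    return mfcc.T

def should_escalate(anger_scores: np.ndarray, threshold: float = 0.7) -> bool:
    """Route to a call-center agent if the caller's estimated anger keeps rising."""
    trend = np.polyfit(np.arange(len(anger_scores)), anger_scores, deg=1)[0]
    return anger_scores[-1] > threshold and trend > 0

# The anger_scores would come from a separately trained model scoring each
# segment of the call, e.g. anger_model.predict_proba(frame_features("segment.wav"))
# -- anger_model is a placeholder name, not a real library object.
```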
These are just a few examples of how machine learning can bring intelligence to business environments. Clearly, machine learning has huge potential.
Follow me on Twitter (@JM_SAP) to learn more about machine learning!