Machine Learning: What it is and why it matters

Machine learning is a method of data analysis that automates analytical model building. Using algorithms that iteratively learn from data, machine learning allows computers to find hidden insights without being explicitly programmed where to look.

The iterative aspect of machine learning is important because as models are exposed to new data, they are able to independently adapt. They learn from previous computations to produce reliable, repeatable decisions and results. It’s a science that’s not new - but one that’s gaining fresh momentum.

Because of new computing technologies, machine learning today is not like machine learning of the past. While many machine learning algorithms have been around for a long time, the ability to automatically apply complex mathematical calculations to big data – over and over, faster and faster – is a recent development. Here are a few widely publicized examples of machine learning applications that you may be familiar with:

  • The heavily hyped, self-driving car? The essence of machine learning.
  • Online recommendation offers like those from Amazon and Netflix? Machine learning applications for everyday life.
  • Knowing what customers are saying about you on Twitter? Machine learning combined with linguistic rule creation.
  • Fraud detection? One of the more obvious, important uses in our world today.

Why the increased interest in machine learning?

Resurging interest in machine learning is due to the same factors that have made data mining and Bayesian analysis more popular than ever: growing volumes and varieties of available data, computational processing that is cheaper and more powerful, and affordable data storage.

All of these things mean it's possible to quickly and automatically produce models that can analyze bigger, more complex data and deliver faster, more accurate results – even on a very large scale. The result? High-value predictions that can guide better decisions and smart actions in real time without human intervention.

One key to producing smart actions in real time is automated model building. Analytics thought leader Thomas H. Davenport wrote in The Wall Street Journal that with rapidly changing, growing volumes of data, "... you need fast-moving modeling streams to keep up." And you can do that with machine learning. He says, "Humans can typically create one or two good models a week; machine learning can create thousands of models a week."

How is machine learning used today?

Ever wonder how an online retailer provides nearly instantaneous offers for other products that may interest you? Or how lenders can provide near-real-time answers to your loan requests? Many of our day-to-day activities are powered by machine learning algorithms, including:

  • Fraud detection.
  • Web search results.
  • Real-time ads on web pages and mobile devices.
  • Text-based sentiment analysis.
  • Credit scoring and next-best offers.
  • Prediction of equipment failures.
  • New pricing models.
  • Network intrusion detection.
  • Pattern and image recognition.
  • Email spam filtering.

What are some popular machine learning methods?

Two of the most widely adopted machine learning methods are supervised learning and unsupervised learning. Most machine learning – about 70 percent – is supervised learning. Unsupervised learning accounts for 10 to 20 percent. Semi-supervised and reinforcement learning are two other technologies that are sometimes used.

  • Supervised learning algorithms are trained using labeled examples, such as an input where the desired output is known. For example, a piece of equipment could have data points labeled either “F” (failed) or “R” (runs). The learning algorithm receives a set of inputs along with the corresponding correct outputs, and the algorithm learns by comparing its actual output with correct outputs to find errors. It then modifies the model accordingly. Through methods like classification, regression, prediction and gradient boosting, supervised learning uses patterns to predict the values of the label on additional unlabeled data. Supervised learning is commonly used in applications where historical data predicts likely future events. For example, it can anticipate when credit card transactions are likely to be fraudulent or which insurance customer is likely to file a claim.
  • Unsupervised learning is used against data that has no historical labels. The system is not told the "right answer." The algorithm must figure out what is being shown. The goal is to explore the data and find some structure within. Unsupervised learning works well on transactional data. For example, it can identify segments of customers with similar attributes who can then be treated similarly in marketing campaigns. Or it can find the main attributes that separate customer segments from each other. Popular techniques include self-organizing maps, nearest-neighbor mapping, k-means clustering and singular value decomposition (SVD). These algorithms are also used to segment text topics, recommend items and identify data outliers. A short sketch after this list illustrates both supervised and unsupervised learning in code.
  • Semi-supervised learning is used for the same applications as supervised learning. But it uses both labeled and unlabeled data for training – typically a small amount of labeled data with a large amount of unlabeled data (because unlabeled data is less expensive and takes less effort to acquire). This type of learning can be used with methods such as classification, regression and prediction. Semi-supervised learning is useful when the cost associated with labeling is too high to allow for a fully labeled training process. Early examples of this include identifying a person's face on a web cam.
  • Reinforcement learning is often used for robotics, gaming and navigation. With reinforcement learning, the algorithm discovers through trial and error which actions yield the greatest rewards. This type of learning has three primary components: the agent (the learner or decision maker), the environment (everything the agent interacts with) and actions (what the agent can do). The objective is for the agent to choose actions that maximize the expected reward over a given amount of time. The agent will reach the goal much faster by following a good policy. So the goal in reinforcement learning is to learn the best policy.
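
To make the contrast concrete, here is a minimal sketch of supervised and unsupervised learning. It assumes scikit-learn is available, and the equipment readings, the "F"/"R" labels and the customer attributes are all invented for illustration.

```python
# A minimal sketch of supervised vs. unsupervised learning with scikit-learn
# (assumed installed). The sensor readings, the "F" (failed) / "R" (runs)
# labels and the customer attributes are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# --- Supervised: learn from labeled examples, then predict labels ---
X = rng.normal(size=(200, 3))                  # e.g. temperature, vibration, load
y = np.where(X[:, 0] + X[:, 1] > 0, "F", "R")  # known outcome for each example

clf = LogisticRegression().fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))

# --- Unsupervised: no labels, just look for structure in the data ---
customers = rng.normal(size=(300, 4))          # e.g. spend, frequency, recency, tenure
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)
print("customers per segment:", np.bincount(segments))
```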

What's the difference between data mining, machine learning and deep learning?

The difference between machine learning and other statistical and mathematical approaches, such as data mining, is another popular subject of debate. In simple terms, while machine learning uses many of the same algorithms and techniques as data mining, one difference lies in what the two disciplines predict.

  • Data mining discovers previously unknown patterns and knowledge.
  • Machine learning is used to reproduce known patterns and knowledge, automatically apply that to other data, and then automatically apply those results to decision making and actions.

 The increased power of today's computers has also helped data mining techniques evolve for use in machine learning. For instance, neural networks have long been used in data mining applications. With more computing power, you can create neural networks with many layers. In machine learning lingo, these are called deep neural networks. It's the increased computing power that enables fast processing of many neural network layers for automated learning.

 Taking that a step further, artificial neural networks are a group of algorithms that are loosely based on our understanding of the brain. ANNs can – in theory – model any kind of relationship within a data set, but in practice getting reliable results from neural networks can be very tricky. Artificial intelligence research dating back to the 1950s has been punctuated by the successes, and the failures, of neural networks.

 Today, a new field of neural network research, known as deep learning, is having tremendous success in areas where many artificial intelligence approaches have failed in the past.

 Deep learning combines advances in computing power and special types of neural networks to learn complicated patterns in large amounts of data. Deep learning is a fast-growing area in machine learning research that has achieved breakthroughs in speech, text and image recognition. It’s based on endowing a neural network with many hidden layers, enabling a computer to learn tasks, organize information and find patterns on its own.
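
As a rough illustration of the "many hidden layers" idea, the sketch below trains a small multi-layer perceptron with scikit-learn (an assumption on my part); real deep learning work typically relies on specialised frameworks, GPUs and far larger data sets.

```python
# A minimal sketch of stacking hidden layers using scikit-learn's multi-layer
# perceptron (assumed installed). This only illustrates the principle; it is
# not a production deep learning setup.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

deep_net = MLPClassifier(
    hidden_layer_sizes=(128, 64, 32),  # three hidden layers
    activation="relu",
    max_iter=500,
    random_state=0,
)
deep_net.fit(X[:1500], y[:1500])
print("held-out accuracy:", deep_net.score(X[1500:], y[1500:]))
```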

 Deep learning techniques are currently state-of-the-art for identifying objects in images and words in sounds. Researchers are now looking to apply these successes in pattern recognition to more complex tasks such as automatic language translation, medical diagnoses and numerous other important social and business problems.

Machine learning algorithms and processes

Algorithms

Graphical user interfaces help us build machine learning models and implement an iterative machine learning process, so you don't have to be an advanced statistician. A comprehensive selection of machine learning algorithms helps you quickly get value from big data; a brief code sketch after the list shows a few of them in action. Common machine learning algorithms include:

  • Neural networks.
  • Decision trees.
  • Random forests.
  • Associations and sequence discovery.
  • Gradient boosting and bagging.
  • Support vector machines.
  • Nearest-neighbor mapping.
  • k-means clustering.
  • Self-organizing maps.
  • Local search optimization techniques.
  • Expectation maximization.
  • Multivariate adaptive regression splines.
  • Bayesian networks.
  • Kernel density estimation.
  • Principal component analysis.
  • Singular value decomposition.
  • Gaussian mixture models.
  • Sequential covering rule building.
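
To give a feel for how a few of these algorithms look in practice, here is an illustrative sketch. It assumes scikit-learn as the toolkit and uses a synthetic data set that exists only for demonstration.

```python
# A minimal sketch of a few algorithms from the list above, using scikit-learn
# (assumed installed) on a small synthetic data set built only for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=15, random_state=1)
X_train, y_train, X_test, y_test = X[:450], y[:450], X[450:], y[450:]

# Random forest and support vector machine: two supervised classifiers.
forest = RandomForestClassifier(random_state=1).fit(X_train, y_train)
svm = SVC().fit(X_train, y_train)
print("random forest accuracy:", forest.score(X_test, y_test))
print("svm accuracy:", svm.score(X_test, y_test))

# Principal component analysis: compress 15 features into 3 components.
X_reduced = PCA(n_components=3).fit_transform(X)

# Gaussian mixture model: soft clustering on the compressed features.
gmm = GaussianMixture(n_components=3, random_state=1).fit(X_reduced)
print("cluster sizes:", np.bincount(gmm.predict(X_reduced)))
```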

Infrastructure, tools and processes

As we know by now, it’s not just the algorithms. Ultimately, the secret to getting the most value from your big data lies in pairing the best algorithms for the task at hand with the following (a short model-comparison sketch appears after the list):

  • Comprehensive data quality and management.
  • GUIs for building models and process flows.
  • Interactive data exploration and visualization of model results.
  • Comparisons of different machine learning models to quickly identify the best one.
  • Automated ensemble model evaluation to identify the best performers.
  • Easy model deployment so you can get repeatable, reliable results quickly.
  • An integrated, end-to-end platform for the automation of the data-to-decision process.
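
As a rough sketch of what automated model comparison and simple deployment can look like in code, consider the following. It assumes scikit-learn and joblib are installed, uses synthetic data, and is not a description of any particular product.

```python
# A minimal sketch: compare candidate models with cross-validation, pick the
# best performer, and persist it for deployment. Synthetic data, illustrative only.
from joblib import dump
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

# Score each candidate with 5-fold cross-validation and keep the best one.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
print(scores, "-> best model:", best_name)

# "Deployment" here is simply refitting the winner on all data and saving it.
dump(candidates[best_name].fit(X, y), "best_model.joblib")
```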

Machine learning experience and expertise

At my consultancy, we are continuously searching for and evaluating new approaches, implementing the statistical methods best suited to solving the problems you face. We combine a rich heritage in statistics and data mining with new architectural advances to ensure your models run as fast as possible, even in huge enterprise environments. I understand that quick time to value means not only fast, automated model performance, but also time not spent moving data between platforms, especially when it comes to big data. High-performance, distributed analytical techniques take advantage of massively parallel processing integrated with all major databases.
