Data is not Understanding
While A.I. seems to have only recently captured the public's attention, it has in fact been around for over 60 years. In the late 1950s, Arthur Samuel wrote a checkers-playing program that could learn from its mistakes and thus, over time, became better at playing the game. MYCIN, the first rule-based expert system, was developed in the early 1970s and was capable of diagnosing blood infections based on the results of various medical tests.
Machine Learning
The machine learning paradigm can be viewed as "programming by example". Two types of learning are used: supervised and unsupervised. In supervised learning, a collection of labeled patterns is provided, and the quality of the learning process is measured by how well it labels newly encountered patterns. The labeled patterns are used to learn descriptions of classes, which in turn are used to label new patterns. In unsupervised learning, the problem is to group a given collection of unlabeled patterns into meaningful categories.
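The two paradigms can be sketched in a few lines of code. The example below, which is purely illustrative (the points and labels are invented), labels a new point from labeled examples (supervised) and groups unlabeled points into two clusters (unsupervised):

```python
# Supervised vs. unsupervised learning on toy 2-D points (illustrative only).

def nearest_neighbor(labeled, point):
    """Supervised: label a new point by its closest labeled example."""
    dist = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(labeled, key=lambda ex: dist(ex[0], point))[1]

def two_means(points, iters=10):
    """Unsupervised: group unlabeled points into two clusters (k-means, k=2)."""
    c0, c1 = points[0], points[-1]  # crude initial centroids
    for _ in range(iters):
        g0 = [p for p in points
              if (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
              <= (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2]
        g1 = [p for p in points if p not in g0]
        mean = lambda g: (sum(x for x, _ in g) / len(g),
                          sum(y for _, y in g) / len(g))
        c0, c1 = mean(g0), mean(g1)
    return g0, g1

labeled = [((0, 0), "low"), ((0, 1), "low"), ((5, 5), "high"), ((6, 5), "high")]
label = nearest_neighbor(labeled, (5, 6))            # supervised labeling
clusters = two_means([(0, 0), (0, 1), (5, 5), (6, 5)])  # unsupervised grouping
```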
Machine learning is applied in various fields such as risk management, cyber security, computer vision, speech recognition, NLP, web search, biotech, and many others.
This paper examines the main A.I. and machine learning techniques and their limitations.
Neural Network
A neural network is based on a dramatically oversimplified model of biological neurons. A neural network consists of many simple elements called artificial neurons, each producing a sequence of activations. The number of elements and their interconnections are orders of magnitude fewer than the number of neurons and synapses in the human brain.
Backpropagation (BP) [Rumelhart, 1986] is the most popular supervised neural network learning algorithm. A backpropagation network is organized into layers and the connections between them: an input layer, an output layer, and the middle layers, called hidden layers. The goal of backpropagation is to compute the gradient (a vector of partial derivatives) of an objective function with respect to the neural network parameters. Input neurons activate through sensors perceiving the environment, and other neurons activate through weighted connections from previously active neurons. Each element receives numeric inputs and transforms this input data by calculating a weighted sum over the inputs. A non-linear function is then applied to this transformation to calculate an intermediate state. While the design of the input and output layers of a neural network is straightforward, there is an art to the design of the hidden layers. Designing and training a neural network requires choosing the number and types of nodes, layers, learning rates, training data, and test sets.
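The gradient computation described above can be sketched for a tiny 2-2-1 network. This is a minimal, illustrative example (network size, weights, and target are arbitrary); it applies the chain rule layer by layer and verifies one backpropagated derivative against a numerical finite-difference estimate:

```python
import numpy as np

# Backpropagation on a 2-2-1 sigmoid network: compute the gradient of the
# squared-error objective with respect to the weights, then sanity-check it
# against a numerical gradient. Purely illustrative.
rng = np.random.default_rng(0)
x = rng.normal(size=2)
target = 1.0
W1 = rng.normal(size=(2, 2))   # input -> hidden weights
W2 = rng.normal(size=2)        # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def loss(W1, W2):
    h = sigmoid(W1 @ x)        # hidden layer: weighted sum + non-linearity
    out = sigmoid(W2 @ h)      # output layer
    return 0.5 * (out - target) ** 2

# Forward pass, then backward pass (chain rule, layer by layer).
h = sigmoid(W1 @ x)
out = sigmoid(W2 @ h)
d_out = (out - target) * out * (1 - out)   # error signal at the output
grad_W2 = d_out * h                        # gradient for output weights
d_h = d_out * W2 * h * (1 - h)             # error propagated back to hidden layer
grad_W1 = np.outer(d_h, x)                 # gradient for input->hidden weights

# Numerical check on one weight of W1.
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
numeric = (loss(W1p, W2) - loss(W1, W2)) / eps
```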
Deep Learning
Recently deep learning, an overly hyped term that describes a set of algorithms that use a neural network as an underlying architecture, has generated thousands of headlines. The earliest deep learning-like algorithms possessed multiple layers of non-linear features and can be traced back to Ivakhnenko and Lapa in 1965. They used thin but deep models with polynomial activation functions, which they analyzed using statistical methods. Deep learning became more usable in recent years due to the availability of inexpensive parallel hardware (GPUs, computer clusters) and massive amounts of data. Deep neural networks learn hierarchical layers of representation from the input to perform pattern recognition. When the problem exhibits non-linear properties, deep networks are computationally more attractive than classical neural networks. A deep network can be viewed as a program in which the functions computed by the lower-layered neurons are subroutines. These subroutines are reused many times in the computation of the final program.
Limits of Deep Learning
Deep neural networks have achieved great results in speech and image recognition, language translation, and other applications, resulting in sensational media coverage that has led many to believe these algorithms are sentient. However, deep learning has inherent restrictions that limit its application and effectiveness in many industries and fields.
Deep learning requires huge amounts of labeled data and significant time to design and train.
Deep learning algorithms are black boxes; they don't have any understanding of their input, they have zero interpretability, and they cannot provide any explanation. How can anyone trust a system that does not explain or justify its conclusions?
Another limitation is that minimal changes can induce big errors. For example, in vision classification, slightly changing an image that was once correctly classified, in a way that is imperceptible to the human eye, can cause a deep neural network to label the image as something else entirely.
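The mechanism behind such adversarial examples can be illustrated even without a deep network. The toy sketch below (weights and input invented; a deliberate simplification of the image case) shows how, for a linear classifier, a small perturbation aimed along the weight vector flips the predicted label while barely changing the input:

```python
import numpy as np

# Illustrative sketch: a small, targeted perturbation aligned with the weight
# vector flips a linear classifier's decision even though each input value
# changes only slightly. Deep networks exhibit the same fragility.
w = np.array([1.0, -1.0, 0.5])   # toy classifier weights
x = np.array([0.2, 0.1, 0.1])    # input classified as positive
score = float(w @ x)             # positive score -> class A

eps = 0.2                        # small per-coordinate nudge
x_adv = x - eps * np.sign(w)     # move each coordinate against the weights
score_adv = float(w @ x_adv)     # now negative -> class B
```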
Anyone with even entry-level expertise in the field of A.I. knows that deep learning algorithms cannot handle anything that requires reasoning, no matter how much data they are trained on: they rely purely on pattern recognition, with absolutely zero understanding. Even learning a basic sorting algorithm is extremely challenging for a deep neural network. Can vectors of weights (numbers between 0 and 1) represent learning, thinking, or understanding? Of course not; they are closer to a mechanical parrot.
Further examples of these limitations are presented in the deep learning lecture by Patrick Henry Winston, former director of the MIT Artificial Intelligence Laboratory and professor of Artificial Intelligence at MIT.
Patrick H Winston MIT Deep Neural Nets Lecture
Additional examples of the limitations of deep learning are explained in a research paper from Cornell and Wyoming Universities entitled “Deep Neural Networks are Easily Fooled”.
Another interesting article is "Deep Learning Isn't a Dangerous Magic Genie. It's Just Math" by Oren Etzioni, a professor of Computer Science and head of the Allen Institute for Artificial Intelligence.
Data Mining
Data mining, or knowledge discovery in databases, is the nontrivial extraction of implicit, previously unknown and potentially useful information from data. Statistical methods are used that enable trends and other relationships to be identified in large databases.
The major reason that data mining has attracted attention is the wide availability of vast amounts of data, and the need to turn such data into useful information. The knowledge gained can be used for applications ranging from risk monitoring and business management to production control, market analysis, engineering, and scientific exploration.
In general, three types of data mining techniques are used: association, regression, and classification.
Association analysis
Association analysis is the discovery of association rules showing attribute-value conditions that occur frequently together in a given set of data. Association analysis is widely used to identify the correlation of individual products within shopping carts.
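The shopping-cart use case comes down to two measures, support and confidence, which can be computed directly. The sketch below uses invented carts and item names, purely for illustration:

```python
# Toy association analysis over shopping carts: support of an itemset and
# confidence of a rule "antecedent -> consequent". Data is invented.
carts = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"},
         {"milk", "eggs"}, {"bread", "milk"}]

def support(itemset):
    """Fraction of carts containing every item in the itemset."""
    return sum(itemset <= cart for cart in carts) / len(carts)

def confidence(antecedent, consequent):
    """How often the consequent appears when the antecedent does."""
    return support(antecedent | consequent) / support(antecedent)

s = support({"bread", "milk"})        # both items appear in 3 of 5 carts
c = confidence({"bread"}, {"milk"})   # milk in 3 of the 4 carts with bread
```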
Regression analysis
Regression analysis creates models that explain dependent variables through the analysis of independent variables. As an example, the prediction for a product’s sales performance can be created by correlating the product price and the average customer income level.
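The price-and-income example can be sketched with ordinary least squares. The numbers below are synthetic and the "true" relationship is invented so the fit can be checked:

```python
import numpy as np

# Sketch: explain sales (dependent variable) through price and average
# customer income (independent variables) with ordinary least squares.
price  = np.array([10.0, 12.0, 15.0, 9.0, 11.0])
income = np.array([50.0, 55.0, 40.0, 60.0, 52.0])
sales  = 100 - 4.0 * price + 0.5 * income   # hidden "true" relationship

# Design matrix: intercept column plus the two independent variables.
X = np.column_stack([np.ones_like(price), price, income])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
predicted = X @ coef                        # model's sales predictions
```

Because the synthetic data is exactly linear, the fit recovers the intercept and the price and income coefficients.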
Classification and prediction
Classification is the process of designing a set of models to predict the class of objects whose class label is unknown. The derived model may be represented in various forms, such as if-then rules, decision trees, or mathematical formulas. A decision tree is a flow-chart-like tree structure where each node denotes a test on an attribute value, each branch represents an outcome of the test, and each tree leaf represents a class or class distribution. Decision trees can be converted to classification rules. Classification can be used for predicting the class label of data objects. Prediction encompasses the identification of distribution trends based on the available data.
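The correspondence between a decision tree and if-then classification rules can be made concrete. The sketch below hard-codes a tiny tree (attribute names and thresholds are invented for illustration), where each `if` is a node test, each branch an outcome, and each `return` a leaf label:

```python
# A tiny decision tree written as its equivalent if-then classification rules.
# Attributes and thresholds are illustrative only.
def classify(transaction):
    if transaction["amount"] > 1000:                       # node: test on amount
        if transaction["country"] != transaction["home_country"]:
            return "suspicious"                            # leaf: class label
        return "review"                                    # leaf: class label
    return "normal"                                        # leaf: class label

label = classify({"amount": 1500, "country": "FR", "home_country": "US"})
```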
The data mining process consists of an iterative sequence of the following steps:
1. Data coherence and cleaning to remove noise and inconsistent data.
2. Data integration such that multiple data sources may be combined.
3. Data selection where data relevant to the analysis are retrieved.
4. Data transformation where data are consolidated into forms appropriate for mining.
5. Pattern extraction where statistical and pattern-recognition techniques are applied.
6. Pattern evaluation to identify interesting patterns representing knowledge.
7. Knowledge presentation where visualization techniques present the mined knowledge to users.
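The first few steps of this sequence can be compressed into a toy pipeline. The records, fields, and the trivial aggregate at the end are all invented for illustration:

```python
# Toy data mining pipeline covering cleaning, integration, selection,
# transformation, and a trivial pattern-extraction step. Data is invented.
raw_a = [{"id": 1, "amount": "100"}, {"id": 2, "amount": None}]
raw_b = [{"id": 1, "region": "EU"}, {"id": 2, "region": "EU"}]

# 1. Cleaning: drop records with missing values.
cleaned = [r for r in raw_a if r["amount"] is not None]
# 2. Integration: combine the two data sources on "id".
merged = [{**r, **next(s for s in raw_b if s["id"] == r["id"])} for r in cleaned]
# 3. Selection: keep only records relevant to the analysis.
selected = [r for r in merged if r["region"] == "EU"]
# 4. Transformation: consolidate into a form appropriate for mining.
transformed = [{**r, "amount": float(r["amount"])} for r in selected]
# 5. Pattern extraction: here, a trivial aggregate statistic.
total = sum(r["amount"] for r in transformed)
```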
Limits of Data Mining
GIGO (garbage in, garbage out) is almost always referenced with respect to data mining, as the quality of the knowledge gained through data mining depends on the quality of the historical data. Data inconsistencies and the handling of multiple data sources are well-known problems in data management. Data cleaning techniques exist for detecting and removing errors and inconsistencies from data to improve its quality. However, detecting these inconsistencies is extremely difficult: how can we identify a transaction that is incorrectly labeled as suspicious? Learning from incorrect data leads to inaccurate models.
Data mining extracts knowledge limited to the specific set of historical data, which limits one's ability to benefit from new trends. Because a decision tree is trained specifically on the historical data, it does not account for personalization within the tree. Additionally, data mining techniques (decision trees, clustering) are non-incremental and do not adapt to new trends.
Business Rule Management System
A business rule management system (BRMS) enables companies to easily define, deploy, monitor, and maintain new regulations, procedures, policies, market opportunities, and workflows. One of the main advantages of business rules is that they can be written by business analysts without the need for IT resources. Rules can be stored in a central repository and accessed across the enterprise.
Limits in Business Rule Management Systems
Let's consider a financial institution using Business Rules to detect suspicious transactions. Fraud experts will need to divide the population into categories, and then write rules for identifying fraud within each category. This approach has two significant limitations. First, there may be many different types of individual behavior within each category. Broadly applying category-specific rules in these cases will result in poor detection and high false positive rates. Second, rules are "hard-wired" into the system. As a result, they cannot adapt to ever-changing fraud schemes or data shifts. Systems based on Business Rules are outdated almost as soon as they are implemented.
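The "hard-wired" limitation can be made concrete with a sketch. The categories and thresholds below are invented; the point is that one static rule applies to every individual in a category, exactly as described above:

```python
# Sketch of hard-wired, category-level fraud rules. Categories and thresholds
# are invented. Every individual in a category shares the same rule, and the
# thresholds cannot adapt once deployed.
RULES = {
    "student":   lambda t: t["amount"] > 500,
    "executive": lambda t: t["amount"] > 10000,
}

def flag(category, transaction):
    """Apply the single rule wired to this category."""
    return RULES[category](transaction)

hit = flag("student", {"amount": 800})      # flagged: over the student threshold
miss = flag("executive", {"amount": 800})   # same amount, different category
```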
Case-Based Reasoning
Case-based reasoning (CBR) is a problem-solving paradigm that is different from other major A.I. approaches. CBR learns from past experiences to solve new problems. Rather than relying on a domain expert to write rules or make associations along generalized relationships between problem descriptors and conclusions, a CBR system learns from previous experience in the same way a physician learns from their patients. A CBR system will create generic cases based on the diagnosis and treatment of previous patients to determine the disease and treatment for a new patient. The implementation of a CBR system consists of identifying relevant case features. A CBR system continually learns from each new situation, and generalized cases can provide explanations that are richer than explanations generated by chains of rules.
Limits of CBR
The most important limitations relate to how cases are efficiently represented, how indexes are created, and how individual cases are generalized.
Fuzzy Logic
Traditional logic typically categorizes information into binary patterns such as black/white, yes/no, or true/false. Fuzzy logic brings a middle ground where statements can be partially true and partially false, to account for much of day-to-day human reasoning. For example, stating that a tall person is over 6' 2" traditionally means that people under 6' 2" are not tall. If a person is nearly 6' 2", then common sense says the person is also somewhat tall. Boolean logic states a person is either tall or short and allows no middle ground, while fuzzy logic allows different interpretations for varying degrees of height.
Neural networks, data mining, CBR, and business rules can all benefit from fuzzy logic. For example, fuzzy logic can be used in CBR to automatically cluster information into categories, improving performance by decreasing sensitivity to noise and outliers. Fuzzy logic also allows business rule experts to write more powerful rules. Here is an example of a rule that has been rewritten to leverage fuzzy logic.
When the number of cross border transactions is high and the transaction occurs in the evening then the transaction may be suspicious.
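This rule can be sketched with membership functions. The shapes and thresholds below are invented for illustration; "high" and "evening" become degrees between 0 and 1, and the fuzzy AND is taken as the minimum:

```python
# The fuzzy rule above, sketched with simple ramp membership functions.
# All thresholds are invented for illustration.
def high_cross_border(count):
    """Degree to which the cross-border count is 'high' (0 at 5, 1 at 20)."""
    return min(1.0, max(0.0, (count - 5) / 15))

def evening(hour):
    """Degree to which the hour is 'evening' (0 at 17:00, 1 at 21:00)."""
    return min(1.0, max(0.0, (hour - 17) / 4))

def suspicion(count, hour):
    """Fuzzy AND of the two conditions, using the minimum."""
    return min(high_cross_border(count), evening(hour))

score = suspicion(14, 20)   # partially "high" AND partially "evening"
```

A score of 0.6 here means the transaction is somewhat suspicious, a middle ground a Boolean rule cannot express.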
Genetic Algorithms
Genetic algorithms work by simulating the logic of Darwinian selection where only the best performers are selected for reproduction. Over many generations, natural populations evolve according to the principles of natural selection. A genetic algorithm can be thought of as a population of individuals represented by chromosomes. In computing terms, a genetic algorithm implements the model of computation by having arrays of bits or characters (binary string) to represent the chromosomes. Each string represents a potential solution. The genetic algorithm then manipulates the most promising chromosomes searching for improved solutions. A genetic algorithm operates through a cycle of three stages:
1. Build and maintain a population of solutions to a problem
2. Choose the better solutions for recombination with each other
3. Use their offspring to replace poorer solutions.
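The three stages above can be sketched on the classic "one-max" toy problem (maximize the number of 1-bits in a string); population size, rates, and generation count are arbitrary choices for illustration:

```python
import random

# Minimal genetic algorithm for the one-max problem, following the three
# stages above: maintain a population, recombine the better solutions, and
# let offspring replace the poorer ones. Parameters are illustrative.
random.seed(1)
LENGTH, POP, GENS = 20, 30, 60
fitness = lambda chromo: sum(chromo)   # count of 1-bits

# 1. Build and maintain a population of candidate solutions (bit strings).
pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]           # 2. choose the better solutions
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)
        child = a[:cut] + b[cut:]      # single-point crossover
        if random.random() < 0.1:      # occasional mutation: flip one bit
            i = random.randrange(LENGTH)
            child[i] ^= 1
        children.append(child)
    pop = parents + children           # 3. offspring replace poorer solutions

best = max(pop, key=fitness)
```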
Genetic algorithms provide various benefits to existing machine learning technologies: they can be used in data mining for field/attribute selection, and they can be combined with neural networks to determine optimal weights and architectures.
Next Generation, Artificial Intelligence and Machine Learning
The next generation of artificial intelligence (AI) systems will increasingly impact our lives by making personalized decisions on our behalf. AI has moved from research labs to real-world applications thanks to the unprecedented levels of data availability combined with advances in computational power and the new “factory type” AI platform allowing broad accessibility of these technologies to non-AI experts.
From cyber defense, banking, IoT, autonomous driving, and biotech to robot-assisted surgery, adaptive AI will be at the center of these mission-critical applications that are essential to our well-being.
Next generation AI must be smart, self-learning and adaptive
These mission-critical applications will require adaptive, self-learning AI that provides real-time insight in dynamic environments, even in the presence of adversaries and unexpected inputs and events. Unfortunately, most of today's AI systems, including the hyped deep neural networks marketed as "deep learning", lack the essential capability of adaptive learning.
SMART-AGENTS TECHNOLOGY
Researchers have explored many different architectures for intelligent systems: neural networks, genetic algorithms, business rules, Bayesian networks, and data mining, to name a few. We will begin by listing the most important limits of legacy machine learning techniques and will then describe how the next generation of artificial intelligence, based on smart agents, overcomes these limitations.
As mentioned earlier, current A.I. and machine learning technologies suffer from various limitations. Most importantly, they lack the capacity for:
1. Personalization: To successfully protect and serve customers, employees, and audiences we must know them by their unique and individual behavior over time and not by static, generic categorization.
2. Adaptability: Relying on models based only on historical data or expert rules are inefficient as new trends and behaviors arise daily.
3. Self-learning: An intelligent system should learn over time from every activity associated with each specific entity.
To further illustrate these limits, we will use the challenges of two important business fields: network security and fraud prevention. Fraud and intrusion are perpetually changing and never remain static. Fraudsters and hackers are criminals who continuously adjust and adapt their techniques. Controlling fraud and intrusion within a network environment requires a dynamic and continuously evolving process. Therefore, a static set of rules or a machine learning model developed by learning from historical data has only short-term value.
In network security, dozens of new malware programs with ever more sophisticated methods of embedding and disguising themselves appear on the internet every day. In most cases, after a vulnerability is discovered, a patch is released to address it. The problem is that hackers can often reverse engineer the patch, so another defect is found and exploited within hours of the patch's release. Many well-known malware programs (Conficker is an example) exploit vulnerabilities for which a patch exists; they rely on the fact that, for a variety of reasons, the patch is not deployed on vulnerable systems, or not deployed in a timely manner, leaving open targets. The Aurora attack in late 2009 against Google and several other companies, originating in China, exploited a previously undiscovered dangling-pointer flaw in a Microsoft browser.
Tools are needed that autonomously detect new attacks against specific targets, networks, or individual computers. Such a tool must be able to change its parameters to thrive in new environments, learn from each individual activity, respond to various situations in different ways, and track and adapt to the specific situation and behavior of every entity of interest over time. This continuous, one-to-one behavioral analysis provides real-time actionable insights. In addition to self-learning, another key concept for the next generation of A.I. and ML systems is being reflective: imagine a plumbing system that autonomously notifies the plumber when it finds water dripping from a hole in a pipe and detects incipient leaks.
Collective Intelligence with Hybrid AI models and Smart-Agents
Smart-Agents is the only technology that overcomes the limits of legacy machine learning technologies. Smart-Agents is a personalization technology that creates a virtual representation of every entity and learns/builds a profile from the entity's actions and activities. In the payment industry, for example, a smart agent is associated with each individual cardholder, merchant, or terminal. The smart agent associated with an entity (such as a card or merchant) learns in real time from every transaction made and builds that entity's specific and unique behavior over time. There are as many smart agents as active entities in the system. For example, if 200 million cards are transacting, 200 million smart agents are instantiated to analyze and learn the behavior of each. Decision-making is thus specific to each cardholder and no longer relies on logic universally applied to all cardholders regardless of their individual characteristics. The smart agents are self-learning and adaptive, since they continuously update their individual profiles from each activity and action performed by the entity.
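A highly simplified sketch can convey the per-entity profiling idea: one agent per cardholder, each incrementally updating its own behavioral profile and making decisions against that profile rather than a global rule. The class, field names, and threshold below are hypothetical illustrations, not the actual Smart-Agents implementation:

```python
# Hypothetical sketch of one-agent-per-entity profiling. All names, fields,
# and the "5x the personal mean" threshold are invented for illustration.
class SmartAgent:
    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.count = 0
        self.mean_amount = 0.0

    def observe(self, amount):
        """Self-learning: incrementally update this entity's own profile."""
        self.count += 1
        self.mean_amount += (amount - self.mean_amount) / self.count

    def unusual(self, amount):
        """Decision specific to this entity, not a rule shared by everyone."""
        return self.count >= 3 and amount > 5 * self.mean_amount

# One agent is instantiated per active entity as transactions stream in.
agents = {}
for card, amount in [("c1", 20), ("c1", 30), ("c1", 25), ("c2", 900)]:
    agents.setdefault(card, SmartAgent(card)).observe(amount)

alert = agents["c1"].unusual(400)   # large for THIS cardholder's history
```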
Let’s use some examples to highlight how the Smart-Agents technology differs from legacy machine learning technologies.
In an email filtering system, smart agents learn to prioritize, delete, and forward email messages on behalf of a user. They work by analyzing the actions taken by the user and learning from each. Smart agents constantly make internal predictions about the actions a user will take on an email; if these predictions prove incorrect, the smart agents update their behavior accordingly.
In a financial portfolio management system, a multi-agent system consists of smart agents that cooperatively monitor and track stock quotes, financial news, and company earnings reports, and continuously make suggestions to the portfolio manager.
Smart agents do not rely on pre-programmed rules and do not try to anticipate every possible scenario. Instead, smart agents create profiles specific to each entity and behave according to their goals, observations, and the knowledge they continuously acquire through their interactions with other smart agents. Each smart agent pulls all relevant data across multiple channels, irrespective of its type, format, or source, to produce robust virtual profiles. Each profile is automatically updated in real time, and the resulting intelligence is shared across the smart agents. This one-to-one behavioral profiling provides unprecedented, omni-channel visibility into the behavior of an entity.
Smart agents can represent any entity and enable best-in-class performance with minimal operational and capital resource requirements. They automatically validate the coherence of the data and perform feature learning, data enrichment, and one-to-one profile creation. Since they focus on updating the profile based on the actions and activities of the entity, they store only the relevant information and intelligence rather than the raw incoming data they analyze, achieving enormous compression in storage.
Legacy machine learning technologies generally rely on databases. A database uses tables to store structured data, and tables cannot store knowledge or behaviors; artificial intelligence and machine learning systems require storing knowledge and behaviors. Smart-Agents brings a powerful, distributed file system specifically designed to store knowledge and behaviors. This distributed architecture allows lightning-fast response times (below 1 millisecond) on entry-level servers, as well as end-to-end encryption and traceability. It also allows for unlimited scalability and resilience to disruption, as there is no single point of failure.
In conclusion, a comprehensive intelligent solution must combine the benefits of existing artificial intelligence and machine learning techniques with the unique capabilities of Smart-Agents technology. The result is a comprehensive solution that is intelligent, self-learning and adaptive.