As a result of my article “The AI Innovation to Come”, it was suggested that I create a guide for “mere mortals” explaining what enables AI systems to grow in intelligence, and, perhaps, where it’s heading from here.
In its simplest form, there are four dimensions of techniques that enable AI “intelligence” by defining how they:
- train for their role in life.
- learn to be more accurate.
- reason to make decisions.
- perfect their trade.
As sophistication grows in each of these four dimensions, it creates a multiplier effect of greater predictive power. Furthermore, the vast majority of today’s AI for business is still in its infancy. Understanding the various techniques in each dimension provides insight into what is yet to come.
Training
This dimension defines the AI model’s role in life and trains it accordingly.
- Trains with KNOWN data and KNOWN outcomes – These models are fed examples (data) where the correct outcome is known. You know what you are looking for, based on historical evidence. They take a labeled set of inputs and produce a labeled set of outputs with explicit direction on how to do so (it’s supervised). For example, predict the optimal selling price for each of the 1,000 cars on my sales lot.
- Trains with UNKNOWN data and UNKNOWN outcomes – This method isn’t fed labeled data, nor told what the correct outcomes are. Essentially, you don’t know what you are looking for because you don’t yet understand the data it feeds on. These models are trained to swim in a lake of data and make sense of it all without explicit direction (it’s unsupervised). For example, predict the outcome of a legal case.
- Trains with NO DATA and SELF-DETERMINES outcomes – This is the “dude, you’re on your own” method. The model self-trains by performing tasks without being fed a dataset or given guidance. Through trial and error (and experimentation) it evaluates the consequences of its actions and determines the best behavior. Sound familiar, mere mortals? For example, an autonomous drone that takes aerial photos of wildlife.
Fun Fact: 30% of ML uses unsupervised training, up from 10% one year ago.
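The three training regimes above can be sketched with toy, pure-Python examples. Everything below is a hypothetical illustration, not production code: a least-squares fit stands in for supervised training (labeled car-price data), a tiny 2-means clustering for unsupervised training (no labels given), and an epsilon-greedy bandit for trial-and-error self-training.

```python
import random

# --- Supervised: KNOWN data, KNOWN outcomes (hypothetical car-price data) ---
# Fit a least-squares line price = a * mileage + b from labeled examples.
mileage = [10, 30, 50, 70, 90]   # thousands of miles (inputs)
price   = [20, 16, 12, 8, 4]     # thousands of dollars (known outcomes)
n = len(mileage)
mx, my = sum(mileage) / n, sum(price) / n
a = (sum((x - mx) * (y - my) for x, y in zip(mileage, price))
     / sum((x - mx) ** 2 for x in mileage))
b = my - a * mx
predicted = a * 60 + b           # price estimate for a 60k-mile car

# --- Unsupervised: UNKNOWN data, UNKNOWN outcomes ---
# One-dimensional 2-means clustering: group raw values with no labels given.
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
c1, c2 = data[0], data[-1]       # initial centroids
for _ in range(10):
    g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)

# --- Self-determined: NO data, trial and error (epsilon-greedy bandit) ---
random.seed(0)
true_reward = [0.2, 0.8]         # hidden from the agent
estimates, counts = [0.0, 0.0], [0, 0]
for _ in range(500):
    # 10% of the time explore a random action; otherwise exploit the best known.
    arm = (random.randrange(2) if random.random() < 0.1
           else estimates.index(max(estimates)))
    r = 1.0 if random.random() < true_reward[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (r - estimates[arm]) / counts[arm]  # running average
```

After the bandit loop the agent’s estimate for the better action dominates, even though it was never told which action pays off, which is the essence of the “you’re on your own” regime.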
Learning
This dimension defines how an AI model learns by iteratively improving itself.
- Learns from a SINGLE sequence of BINARY decisions – These models progress through a tree-like structure of yes/no decision-points to create a final prediction. Think of a decision tree where each path forward is a “branch” and each final prediction a “leaf”. They adapt their structures by evaluating which paths produce the most predictive power. For example, advising police how to safely respond to a crime in action.
- Learns from MULTIPLE sequences of BINARY decisions – These models operate as a forest of decision trees that partner to create a prediction. Each tree takes a randomized subset of the input data to create a sub-prediction, and the sub-predictions are combined into a final prediction. This results in better predictions, especially with very large datasets. For example, predicting a storm track’s effect on traffic five days in advance.
- Learns from LAYERS of interconnected VARIABLE decisions – This approach mimics the human brain, which is composed of vast layers of interconnected “neurons” that process and propagate information in dynamic ways. Each “neuron” transforms its input data and activates the next best “neuron”, and so on, until an outcome is learned. For example, generating a written legal strategy document.
Fun Fact: Only 20% of AI uses neural networks (#3), a share expected to double in 2024.
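A minimal sketch of the three learning structures, using invented rules and made-up numbers: a hand-written decision tree of yes/no splits, a “forest” that averages several trees’ sub-predictions, and a single neural-network layer of weighted sums passed through an activation.

```python
# --- #1 A single decision tree: one sequence of yes/no decision-points ---
def tree_predict(temp_f, humidity):
    # Hypothetical learned rule: estimate the probability of rain.
    if humidity > 70:
        return 0.9 if temp_f > 60 else 0.6
    return 0.2 if temp_f > 60 else 0.1

# --- #2 A forest of trees: combine sub-predictions into one prediction ---
# Each (toy) tree saw a different randomized slice of the data, so their
# split thresholds differ; averaging their votes gives the final answer.
trees = [
    lambda x: 0.9 if x > 5 else 0.1,
    lambda x: 0.7 if x > 4 else 0.2,
    lambda x: 0.8 if x > 6 else 0.3,
]

def forest_predict(trees, x):
    votes = [t(x) for t in trees]
    return sum(votes) / len(votes)

# --- #3 A neural layer: each "neuron" transforms all inputs and fires ---
def relu(z):
    return max(0.0, z)          # the activation: only positive signals pass

def layer(inputs, weights, biases):
    return [relu(sum(w + 0.0 if False else w * x
                     for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

hidden = layer([1.0, 2.0], [[0.5, -0.3], [0.8, 0.1]], [0.0, -0.5])
```

Stacking many such layers, with the weights adjusted during training, is what gives the third approach its depth; the tree and forest instead improve by re-evaluating which split paths predict best.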
Reasoning
This dimension enables AI models to reason and make judgements.
- Reasons to predict the WHAT – This approach derives its predictive power from correlating variables in a dataset, telling us how much one changes when others change. These models can deduce specific observations from generalized information, or induce general observations from specific information. Essentially, they derive what will happen given certain conditions. For example, defining the risk of certain anomalies in IT operations.
- Reasons to predict the HOW – This is a superset of correlation that can handle more complex and diverse problems. It uses statistical transformations and large multi-featured datasets to create a view of not only what will happen but “how” it knows it will happen. That is, it identifies influential factors (or conditions) that correlative models cannot. For example, explaining how to win a demographic voting group in an election.
- Reasons to predict the WHY – Causal AI explores the dynamics of “why” things happen, what can be done to change things, and the consequences of interventions (cause & effect). Humans are causal by nature, so AI needs to become causal if it wishes to collaboratively reason, explain, and make decisions with humans. For example, what could I have done [or should do] differently to retain my customers given certain conditions?
Fun Fact: The causal AI market will grow at a 41% CAGR, nearly 2x traditional AI.
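The gap between correlational and causal reasoning fits in a few lines. The ad-spend figures and the structural model below are invented for illustration: a Pearson correlation answers the WHAT (how much two variables move together), while simulating an intervention in an assumed cause-and-effect model answers the WHY (what changes if we act).

```python
# Correlational reasoning (the WHAT): how much does y move when x moves?
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ad_spend = [1, 2, 3, 4, 5]          # hypothetical weekly figures
revenue  = [3, 5, 7, 9, 11]
r = pearson(ad_spend, revenue)      # ~1.0: perfectly correlated

# Causal reasoning (the WHY): encode an assumed structural model
# revenue = 2 * ad_spend + 1, then simulate an intervention on it.
def do_intervention(spend):
    return 2 * spend + 1

# What happens if we actively double spend from 5 to 10?
effect_of_doubling = do_intervention(10) - do_intervention(5)
```

The correlation alone could never say what doubling spend *causes*; that answer only falls out once a cause-and-effect structure is assumed and intervened upon, which is the step causal AI adds.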
Perfecting
This dimension relates to the lifeblood of AI, its datasets, and the key considerations involved in perfecting outcomes:
- The unreasonable EFFECTIVENESS of data – How do I weigh the relative importance of developing the AI model versus the dataset it feeds upon? It's been shown that the performance improvements related to algorithmic sophistication are relatively small compared to improvements in the dataset (Peter Norvig, 2009).
- The wisdom of CROWDS – Do I source all available knowledge or just a subset considered to be the best? The former risks introducing greater disorder or randomness. The latter risks esotericism, the secret knowledge of a few. It's been shown that diversity over superiority improves outcomes (Surowiecki, 2004).
- The capture of INTUITION – Is the capture of explicit knowledge enough? We know humans tap into tacit knowledge and intuition to perfect tasks. We also know these innate characteristics cannot be easily expressed and are hard to externalize as rules (Polanyi, 1966). Addressing this paradox can create enormous differentiation.
- The CAUSALITY phenomenon – How do I minimize false positives? We know an association or trend that appears in groups of data can change or disappear when the datasets are combined, due to improper causal interpretations (Simpson, 1951). This highlights the importance of adopting statistical and causal AI models.
Fun Fact: 80% of failed projects are due to datasets, not AI algorithmic design.
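The causality consideration above (Simpson, 1951) is easy to demonstrate with toy numbers. In the hypothetical treatment data below, option A has the higher recovery rate within *every* severity group, yet option B looks better once the groups are pooled, exactly the reversal Simpson described.

```python
# Hypothetical recovery data for two treatments, split by patient severity.
# Each entry is (recovered, total).
groups = {
    "mild":   {"A": (81, 87),   "B": (234, 270)},
    "severe": {"A": (192, 263), "B": (55, 80)},
}

def rate(recovered, total):
    return recovered / total

# Within each severity group, A beats B...
for g in groups.values():
    assert rate(*g["A"]) > rate(*g["B"])

# ...yet pooling the groups flips the ranking: Simpson's paradox.
pooled = {t: (sum(groups[g][t][0] for g in groups),
              sum(groups[g][t][1] for g in groups))
          for t in ("A", "B")}
a_rate, b_rate = rate(*pooled["A"]), rate(*pooled["B"])
```

The flip happens because severity (a confounder) is unevenly distributed across treatments; a causal model that conditions on it keeps the per-group conclusion, while a naive pooled correlation gets the answer backwards.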
Collectively, it’s clear that as sophistication grows in each of these four dimensions, AI systems will experience a multiplier effect of greater intelligence.
This is why, among many things, we are truly in the infancy of AI’s impact.