Artificial Intelligence vs. Machine Learning


In the last year, I’ve been wrestling with two very different and competing approaches to leveraging big data: artificial intelligence and machine learning. After all, there’s nothing exciting about big data unless you have a mechanism to process it. After a few weeks of intensive research, I have realized that everyone’s definition of everything is at odds with everyone else’s, and the ability to construct a meaningful dialogue around these two technologies is stymied by neo-religious imperatives as well as philological differences rooted in etymological psychosis. And I can agree with you, dear reader, after reading that last sentence, that the difference between the two is not just pathologically insipid, but boring as well. Consequently, I will try to break down how I see the world unfolding in the next ten years in language that is beholden to neither vendor frenzy nor academic delight. You’re welcome.

But first we must texture the conversation with some background. Let’s start with definitions as they relate to big data:

AI:

AI’s goal is to understand neural function and imitate the brain’s ability to learn from a given set of circumstances. The brain is, in many ways, a Rube Goldberg machine: we use a limited set of evolutionary building blocks to arrive at a remarkably complex end state.

Back in the 1980s, artificial intelligence was a popular field; the principal objective was to figure out how experts do what they do, reduce those tasks to a set of rules, and then program computers with those rules, effectively replacing the experts. Researchers wanted to teach computers to diagnose disease, translate languages, and even figure out what we wanted but didn’t know ourselves. From a layman’s perspective, the goal of AI is to create a system that is capable of coming up with solutions to problems on its own. Fundamentally, AI seeks to construct the ability to reason within a system.

It didn’t work.

Classical artificial intelligence absorbed hundreds of millions of Silicon Valley VC dollars before being declared a failure. Though it wasn’t clear at the time, the problem with AI was that we simply didn’t have enough processing power at the right price to accomplish those ambitious goals. Thanks to MapReduce and the cloud, we now have more than enough computing power to do AI.

 

Machine Learning:

Machine learning is actually a group of technologies, spanning computational statistics, algorithm design, and mathematics, that combs through data to identify patterns that can be translated into knowledge. A primary set of instructions is provided to a system, which then generalizes from large data sets, finding and extrapolating patterns in order to apply them to new problems.
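Here is a minimal sketch of that workflow, assuming scikit-learn and an entirely fabricated fruit-classification data set (the feature values and labels are illustrative, not real measurements):

```python
# A supervised-learning sketch: the system is given labeled examples,
# generalizes a pattern from them, and applies that pattern to data
# it has never seen. Requires scikit-learn (pip install scikit-learn).
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [weight_in_grams, smoothness_score]
X_train = [[150, 0.9], [170, 0.8], [130, 0.2], [120, 0.1]]
y_train = ["apple", "apple", "orange", "orange"]  # known labels

model = DecisionTreeClassifier()
model.fit(X_train, y_train)  # generalize from the data set

# Extrapolate the learned pattern to a new, unlabeled observation
print(model.predict([[140, 0.85]]))  # -> ['apple']
```

The point is not the particular algorithm but the division of labor: we supply the instructions and the data; the system derives the pattern.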

Machine learning evolved from artificial intelligence. Once the challenges of AI became apparent (and, to some degree, insurmountable), theorists and mathematicians looked for a more tailored approach to inductive, computational decision-making. There are various permutations of this model; vendors have labeled their systems “machine intelligence” or “supervised learning.” For the exceptionally nerdy, feel free to investigate Bayesian networks.

Besides being easier to construct, machine learning’s benefits are most apparent in implementation and deployment. Fundamentally, machine learning starts with a defined problem and a set of rules that describe the appropriate analysis of a given set of data.

 

How they are similar:

  • Iterative algorithms: Although the mechanisms differ, iterative learning is a key component of both ML and AI (a worked sketch follows this list). In ML, iterative learning is the process of refining uncertain boundaries from a described set of parameters. In AI, iterative learning takes place through non-linear and typically time-varying sequences that manage input to describe an aspirational trajectory.
  • More data, faster iterations: The system itself constructs and/or optimizes algorithmic models as it learns, so the more data it sees, the better those models become.
  • Applied where explicit algorithms are unfeasible (spam filtering, OCR, computer vision): Essentially, when an object, an act, or anything else cannot be explicitly described mathematically, a more complex set of instructions (usually probabilistic modeling) is required for analysis and understanding.
  • Both systems rely upon “learning to learn” based on inductive reasoning: Again, the process differs greatly between the two, but the net result is rooted in the ability of a system to learn from experience. That learning can be explicit and guided (typically machine learning), or it can be based on a set of a priori knowledge and extrapolated by the system itself (typically artificial intelligence).
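To make “iterative learning” concrete, here is a minimal gradient-descent sketch in pure Python; the one-dimensional data set, learning rate, and iteration count are all illustrative assumptions:

```python
# Iterative learning in miniature: fit y = w * x by repeatedly
# nudging w in the direction that reduces the squared error.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # hypothetical (x, y) pairs

w = 0.0      # initial guess for the model parameter
lr = 0.01    # learning rate: how far each iteration moves w

for step in range(1000):
    # Gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # each iteration refines the previous estimate

print(round(w, 3))  # converges near 2.04, the slope hidden in the data
```

Each pass through the loop is one iteration of learning: the model’s current error drives its next adjustment.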

 

The case for AI:

  • The ultimate goal of AI is reasoning. This means that not only must the system identify what needs to be computed, but also how it must be computed, even under uncertainty. A tall order; I know a lot of people who lack this capability.
  • Neural network based: The most common solution mechanism for the implementation of AI is the artificial neural network. I’m not going to describe it in any detail; if you don’t know what it is, you have no business reading this blog. Suffice it to say that interconnected neurons at various levels of abstraction weigh the value of observed or calculated behaviors in order to tune the algorithm driving the system through non-linear functions of the inputs (a minimal sketch follows this list).
  • Learning at different levels of abstraction: AI algorithms apply a number of parameterized transformations to a signal as it propagates from the input layer to the output layer. Each parameterized transformation is a processing unit with trainable parameters, such as weights and thresholds. The number of layers is a function of the complexity of the network, and the values of the parameters encode the logic the network has learned.
  • Looking for shifts in variance in order to recognize change: In neural networks, there is a mechanism to recognize both the actual shift in variance and the magnitude of the change. Ultimately, this is one of the cornerstones of AI; the ability to quantify variance and extrapolate its effect is fundamental to the premise of AI.
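A minimal sketch of those ideas, using only NumPy; the two-layer architecture, the XOR-style data, and the hyperparameters are illustrative assumptions rather than a production design:

```python
# A tiny two-layer neural network: non-linear transformations at each
# layer, with weights and thresholds (biases) tuned by propagating
# the output error back through the network.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: not linearly separable

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)    # non-linear function of the inputs
    out = sigmoid(h @ W2 + b2)  # signal propagates layer to layer
    # Back-propagate the error to tune every weight and threshold
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```

Nothing here reasons, of course, but it shows the machinery the bullets describe: layered, parameterized, non-linear transformations tuned by error.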

 

The case for machine learning:

  • Machine learning focuses on the prediction of unknown properties based upon known properties, which in turn are based upon probability distributions (see the sketch after this list). What this implies are two key objectives:
    • The goal of machine learning is problem solving; essentially, identifying and/or solving a problem using a given set of data with defined inputs and supervised error back-propagation.
    • Machine learning is not looking for the ability to think; rather, it is looking for the ability to do what a thinking entity can do.
  • Ultimately, there are two classes of functions as it relates to machine learning: those that can be learned and those that cannot. This is usually established through the level of complexity associated with the feed-forward control system, which is responsible for instantiating a control signal from the source to the external environment. This, of course, suggests a level of understanding not of a system disturbance, but of the anticipated change in a system based upon mathematical modeling. Feedback systems, conversely, change control signals reactively, as they are rooted in environmental responses to signals. An inability to rationalize behavior despite these systems would render a function something that cannot be learned, and outside the capability of a machine learning environment.
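A minimal sketch of predicting an unknown property (a message’s label) from known properties (its words) via probability distributions, in pure Python; the tiny corpus and the crude add-one smoothing are fabricated for illustration:

```python
# Naive Bayes in miniature: estimate word probabilities from labeled
# messages, then predict the unknown label of a new message.
from collections import Counter

spam = ["win cash now", "free cash offer"]           # hypothetical corpus
ham = ["meeting at noon", "lunch at noon tomorrow"]

def word_probs(msgs):
    words = Counter(w for m in msgs for w in m.split())
    total = sum(words.values())
    # Crude add-one smoothing so unseen words don't zero the product
    return lambda w: (words[w] + 1) / (total + 1)

p_spam, p_ham = word_probs(spam), word_probs(ham)

def classify(msg):
    ps, ph = 0.5, 0.5  # equal priors: half the corpus is spam
    for w in msg.split():
        ps *= p_spam(w)
        ph *= p_ham(w)
    return "spam" if ps > ph else "ham"

print(classify("free cash now"))  # -> spam
print(classify("lunch at noon"))  # -> ham
```

The classifier never understands spam; it does what a thinking entity would do by leaning on the probability distributions it extracted from the data.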

 

One interesting caveat as it relates to machine learning is the debut of quantum machines in the learning process. Quantum computing theoretically provides an infinite number of feed-forward systems for a given environment. One can deduce that quantum learning is not deterministic in nature; the fundamental compute element of quantum computing (the qubit) creates a logic model that is fuzzy. This may provide the most creative problem-solving process possible under the known laws of physics. Currently, it is being applied to image recognition, but future implementations contemplate training probabilistic models on otherwise intractable calculations, which ultimately requires coarse approximations.

However, in either case:

  • You end up with a result that is optimized. You don’t necessarily understand how you evolved it, but the results speak for themselves. You can only understand the process and the interfaces.
  • Locus of learning shifts from product to process: you don’t tweak the end product; you tweak the process. In a sense, you are searching for the equity of irreducibility; since you cannot describe the elemental state of an entity, you seek to leverage its effect upon the system through mechanistic means.
  • Finally, both of these models go beyond engineering: we attempt to create more than we can understand.

So, where to invest? Ultimately, it depends upon your business need. Machine learning offers a number of business-friendly qualities that make it the go-to solution for most corporate efforts: it can be bound by known technologies; it can solve specific problems; and problems that emerge can be redressed by tweaking the inputs, the data organization, or the value of the outputs. In fact, it was the absence of these qualities that drove AI out of favor back in the ’80s: there was no way to abstract the ability of an expert into a set of rules that could be extrapolated to more complex scenarios.

However, I believe the future is AI. The ability to reason holds an intellectual and emotional capacity that supersedes the business imperative. It is what we as programmers and theorists and scientists have struggled for since the founding of statistical analysis: how do we create a machine that can answer questions about the stuff we don’t know? IBM has invested heavily in the space, as have Microsoft, Tesla, Oracle, Google, every healthcare company on the planet, most of the banks, and every tech-savvy entrepreneur looking to figure out how to make more money than the next guy.

If you’re a business, invest in machine learning.  It will do you good.  But if you’re a visionary, go for artificial intelligence.  You may change the world. And I’ll be there to cheer you on.

 

Jamal Khawaja
Follow me on Twitter or Facebook.
