AI is from Venus, Machine Learning is from Mars

The rise of cloud computing brings with it the promise of infinite computing power. The rise of Big Data brings with it the possibility of ingesting all the world’s log files. The combination of the two has sparked widespread interest in data science as truly the “one ring to rule them all.” When we speculate about such a future, we tend to use two phrases to describe this new kind of analytics—artificial intelligence (AI) and machine learning. Most people use them interchangeably. This is a mistake.

AI develops conceptual models of the world that are underpinned by set theory and natural language. In this context, every noun or noun phrase represents a set. Every predicate implicates that set in other sets. If all human beings are mortal, and you are a human being, then you are mortal. It’s an exercise in Venn diagrams. By extending these diagrams through syntax, semantics, and analogy, human beings build up conceptual models of the world that enable us to develop strategies for living. AI seeks to emulate this capability in expert systems.
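
To make that set-theoretic picture concrete, here is a minimal sketch in Python. The facts and the single rule are invented for illustration; a real expert system would carry thousands of such rules, but the mechanics are the same: every noun names a set, and every "all X are Y" statement propagates membership from one set into another.

```python
# A minimal rule engine over sets. The facts and the single rule are invented
# for illustration; this is the Venn-diagram exercise, not a real expert system.

categories = {
    "human being": {"socrates", "you"},
    "mortal": set(),
}

# Each rule reads: every member of the first set is also a member of the second.
rules = [("human being", "mortal")]

# Forward-chain until no rule adds any new members.
changed = True
while changed:
    changed = False
    for narrower, broader in rules:
        new_members = categories[narrower] - categories[broader]
        if new_members:
            categories[broader] |= new_members
            changed = True

print("you" in categories["mortal"])  # True: if you are a human being, you are mortal
```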

Results in this area to date have been mixed and overall have fallen far short of the aspirations and projections of earlier decades. The problem is that natural language sets are inherently fuzzy at the edges. As a result, set algebra that works very well when applied to things near the center of a set becomes increasingly challenged as set elements approach their boundary conditions. The set of Olympic sports clearly includes track and field and swimming. Does it include badminton? Ping pong? Golf? Baseball? Bowling? Billiards? Chess? Any claim you make about Olympic sports must be increasingly qualified as you expand the aperture of your focus to accommodate more and more borderline cases. Ultimately such a system simply cannot scale. It can still be incredibly valuable, particularly in constrained domains like medical diagnosis, but it is not the ring we are looking for.
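
A quick sketch of why those fuzzy edges hurt: if membership has to be a hard yes or no, every claim about the set shifts depending on where you draw the line. The membership scores below are invented purely to illustrate the point.

```python
# Invented membership scores for the category "Olympic sport"; illustrative only.
olympic_sport = {
    "track and field": 1.0,
    "swimming": 1.0,
    "badminton": 0.9,
    "golf": 0.7,
    "chess": 0.1,
}

# Classical set algebra forces a yes/no cutoff, and every claim you make about
# the set changes depending on where you draw that line.
for cutoff in (0.95, 0.5):
    members = [sport for sport, score in olympic_sport.items() if score >= cutoff]
    print(cutoff, members)
```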

Machine learning is. Or at least it appears to be at the present time. Unlike AI, which seeks to understand the world through conceptual models, machine learning has no such interest. It does not understand anything at all, nor does it want to. That’s because it does not seek to emulate human intelligence; it seeks to simulate it. It does so through sheer brute mathematical force. Basically, any digital thing you present to a machine learning engine, say a photographic image or a body of text, is converted into a string of integers, and everything that happens after that is some type of mathematical manipulation of that string. In the world of machine learning you and I really are just numbers.
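
A small illustration of that last point, with an arbitrary sentence standing in for the "body of text" (the image lines are commented out and the filename is only a placeholder):

```python
# Any digital artifact is already a string of integers. The sentence below is
# arbitrary; the image lines are commented out and "photo.jpg" is a placeholder.

text = "You and I really are just numbers."
text_as_integers = [ord(ch) for ch in text]   # one integer per character
print(text_as_integers[:8])                   # [89, 111, 117, 32, 97, 110, 100, 32]

# The same holds for a photographic image: its pixels are integers from 0 to 255.
# from PIL import Image
# pixels = list(Image.open("photo.jpg").convert("L").getdata())
```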

The underlying hypothesis of machine learning as applied to log files is that correlation can serve as a proxy for causation. That is, when things happen in a time sequence, the stuff before can predict the stuff after, and vice versa. Now, it turns out this is a pretty weak hypothesis, so the initial “predictions” of machine learning are typically pretty awful. But bad as they are, they do represent a start. And as each erroneous conclusion is detected, it is used to revise the algorithm through a process called backpropagation, which adjusts the algorithm so that it is more likely to produce the right answer, both for the instance being corrected and for the instances it has processed before. In this way, the algorithms of correlation get better and better, and eventually the patterns of correlation begin to approximate the realities of causation. Of course, it takes an enormous amount of computing and an equally enormous amount of data to accomplish such a result, but here’s the thing: the algorithms never get dumber, they always get smarter. Sooner or later a ratchet effect kicks in, and eventually Deep Blue beats Garry Kasparov at chess.
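
Here is a toy sketch of that correction loop, reduced to a single-input predictor with made-up data points. A real system would grind through millions of logged events, but the predict-measure-nudge cycle is the same idea.

```python
# A toy version of the correction loop: predict, measure the error against the
# known answer, and nudge the parameters so the same mistake shrinks next time.
# The data points are made up and roughly follow y = 2x.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, known answer)

w, b = 0.0, 0.0          # the whole "model" is just these two numbers at first
learning_rate = 0.01

for epoch in range(2000):
    for x, y_true in data:
        y_pred = w * x + b        # prediction from the correlation learned so far
        error = y_pred - y_true   # the detected erroneous conclusion
        # Gradient step: the one-dimensional essence of backpropagation.
        w -= learning_rate * error * x
        b -= learning_rate * error

print(round(w, 2), round(b, 2))  # w ends up close to 2, the slope hidden in the data
```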

Despite such amazing accomplishments, however, at no point in this process does machine learning “understand” anything, which means there is no underlying conceptual model one can abstract from its operation, which in turn means there is nothing a human being can “learn” from machine learning. Its algorithms are not the well-formed symbolic equations with which people think. They are much more like the genetic algorithms of DNA: full of all sorts of junk, yet spontaneously and seemingly miraculously spawning exquisite creations of order without any discernible blueprint. It is the absolute opposite of anything one would call “intelligent design.” These programs aren’t written from the top down; they are written from the bottom up. Do not think of them as descending from the cerebral cortex. Imagine instead that they ascend from the cerebellum. And indeed this may be how mind and brain actually do interoperate.

In any event, it is machine learning, not AI, that we need to focus on for the foreseeable future. To further crystallize this distinction, there is a really interesting, albeit pretty geeky, analogy we can use. It has to do with a computer science problem called the “P versus NP problem.” Here is the relevant paragraph from Wikipedia:

The general class of questions for which some algorithm can provide an answer in polynomial time is called "class P" or just "P". (Note: polynomial, here, is being contrasted with exponential, the point being that a P program’s running time stays manageable as the problem grows, so it will generate an answer in practice.) For some questions, there is no known way to find an answer quickly, but if one is provided with information showing what the answer is, it is possible to verify the answer quickly. The class of questions for which an answer can be verified in polynomial time is called NP, which stands for "nondeterministic polynomial time."

OK, like I said, pretty geeky. But the key idea is that P class algorithms are finite calculations that correspond to any program a human being might write. AI programs belong to this class. Machine learning algorithms, by contrast, belong to the class NP. They evolve through empirical verification, not logical analysis. That is, by feeding them more and more problems with known answers, they get better and better at getting the next problem right. But in so doing they are simulating intelligence, not emulating it. AI emulates human intelligence and is P. Machine learning simulates it and is NP. They are as different as chalk and cheese.
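
For a concrete feel of the "easy to verify, hard to find" distinction, here is a toy example using the classic subset-sum problem. The numbers and target are arbitrary.

```python
# A toy illustration of "easy to verify, hard to find," using subset sum.
# The numbers and target are arbitrary examples.
from itertools import combinations

numbers = [3, 34, 4, 12, 5, 2]
target = 9

def verify(candidate, target):
    """Checking a proposed answer is fast: just add it up (polynomial time)."""
    return sum(candidate) == target

def find(numbers, target):
    """Finding an answer by brute force means trying up to 2**n subsets."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if verify(subset, target):
                return subset
    return None

print(find(numbers, target))    # (4, 5) here, but the search grows explosively with n
print(verify((4, 5), target))   # True, and checking stays cheap no matter how large n gets
```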

For most of my career in high tech, what we have called analytics has been focused on artifacts from the domain of P. This is the world of report-writers, spreadsheets, statistical analysis, data visualization, business intelligence, expert systems, and the like—all of which run in finite time, all of which entail logical analysis. But the future of analytics is largely NP—predictive maintenance, behavioral targeting, fraud detection, network optimization, and their ilk. They evolve through verification, not logical analysis. It is a whole new world. 

NP artifacts are in a very real sense black boxes, and that makes us nervous, and rightfully so. How much trust can we put in something that we cannot directly understand? This anxiety represents a very real chasm business leaders have to cross. The early adopters are already there, and they are using machine learning to eviscerate their competition. Held back by its understandable concerns, the pragmatist majority has yet to respond. But that puts its very future at risk. It must find a way to make its peace with machine learning.

Perhaps it can take consolation in one last analogy. Life itself is filled with NP processes. Just watch your two-year-old learning to speak. They have no theory of language; they are just experimenting with output that gets corrected through verification. It is only after four or five years that the mind develops logical control over the process. Or consider the amazing progress we have made in molecular biology, the extraordinary expansion of our understanding of how DNA and RNA and protein synthesis drive the operations of every cell in our body. Despite this extraordinary progress in logical analysis, no one has yet come close to being able to create a single cell, any cell, from scratch. We take the daily operation of the tens of trillions of cells in our body for granted because, frankly, we have no alternative. It’s all an NP process. Sometimes in life you just have to accept things as given.

That’s what I think. What do you think?

______________________________________________________________________

Geoffrey Moore | Zone to Win | Geoffrey Moore Twitter | Geoffrey Moore YouTube

Kamesh P.

Full stack marketing | Cloud | AIML | FinOps

6y

It is not possible to create a single cell, any cell, from scratch. If by "scratch" you mean "nothing," you can't make something out of nothing.

Ali Berkay Ozkose

Digital Innovation Lead, EMEA

6y

A nice article that also answers the question, "AI has been around for many years, so why is it becoming so popular now?"

David Kopf

Sabbatical - Doctoral Candidate at University of Phoenix College of Doctoral Studies

6y

If I have this right, in AI it seems you could assign a value to humans. In Asimov's "I, Robot" the prime imperative is to not harm humans. Machine learning would not have such a value set.

Good article. It clarified for me the difference between AI and ML.
