Artificial Intelligence vs Machine Learning - What is the difference?

Both artificial intelligence and machine learning - often shortened to AI and ML - are hot buzzwords. What are they and how do they differ?

This piece takes a closer look at these two topics to establish what each represents and how they differ.

Disclaimer: There are some grey areas in terms of some definitions, so in those cases, I will lean towards my own informed opinions. Feel free to discuss your views in the comments.

In general, Artificial Intelligence (AI) has long been a more commonplace term than Machine Learning (ML). In the past few years, however, ML has been generating more interest, even overtaking AI in terms of search volume on Google (https://trends.google.com/trends/explore?date=2009-05-28%202019-06-28&q=Artificial%20intelligence,Machine%20learning).

[Image: Google Trends chart comparing search interest in 'Artificial intelligence' and 'Machine learning']

As shown above, both terms have been increasing in popularity, and of late Machine Learning has been gaining more momentum than Artificial Intelligence.

Artificial intelligence and its goals

AI has been around in the Computer Science space for quite a while and has matured considerably over that time. Put very simply, AI is an attempt at encoding, or at least mimicking, natural intelligence (such as human intelligence) in computers or machines.

So far, that attempt at encoding or mimicking intelligence in machines has been mostly dismal, at least if we compare it with the breadth of natural intelligence (e.g. human intelligence). There are certainly many examples of machines outperforming humans at certain tasks, but those examples are very narrow and domain-specific; against the breadth of natural intelligence, AI simply doesn't compare. An assembly robot using AI to intricately assemble car parts with amazing dexterity would fail at making a mere cup of coffee, at least until it is reprogrammed for the new task. Furthermore, you had better not change the type of coffee machine it uses, because it would need to be reprogrammed for that new task as well. Here human intelligence wins by a huge margin: we can quickly apply our knowledge intelligently to different and even completely new challenges.

This shortfall in AI (at least in its current state) brings us to the holy grail of AI: Artificial General Intelligence (AGI). AGI is all about getting AI to a state where it can compete with or even surpass natural human intelligence. As a side note, most science fiction references to AI deal directly with AGI. This is also where Asimov's laws of robotics come into play, as an attempt to curb the dangers of having AGI.

The main problem to be solved, if AGI is to be achieved, is getting machines to transfer and apply the same intelligence across different domains. If, or once, that hurdle is substantially overcome, intelligent machines will not be intelligent in just one domain but across many. Suddenly, your assembly-line robot can also be your housekeeper robot and vice versa.

Applications and origins of AI

One of the oldest and most significant contributors to the progress of AI is computer gaming. Technology as we have it today has been driven largely by metaphor: we are naturally biased towards building things that are, in some shape or form, metaphors for existing concepts or constructs. The same is true of computer gaming.

When computer games were first being built, some were naturally single-player games while others were multi-player. As computers became personal (the PC), multi-player games had to be adapted somewhat: how do you play a multi-player game by yourself? In other words, how do you play a game against the computer? The answer was to encode some game-play behaviour into the game itself - behaviour that mimics a rudimentary kind of intelligence, an artificial intelligence.

The first game with a computer opponent was Atari's Pong, released in the 1970s.

[Image: the design of the original Atari Pong game]

AI within gaming has evolved by a massive margin since the days of Pong, which relied on a very static, scripted, rule-based approach for the computer opponent. Modern games feature complex, dynamic non-player characters (NPCs) and computer opponents that use advanced AI to deliver a much richer experience. In achieving such feats, computer gaming has certainly played a huge role in pushing the envelope of AI.

While computer gaming was one of the first industries to use AI, it is not the actual origin of AI. AI was born within academia, where it was first applied to problems such as devising game strategies (e.g. for chess) and solving logic and algebra problems.

Machine learning

Machine learning (ML) is all about solving problems by learning trends and patterns from data, as opposed to explicitly encoding the steps/algorithms/functions needed to solve the problem. In a nutshell, we let the machine learn how to solve a problem rather than translating a known solution into 'machine language' and having the machine execute that solution. The only algorithm we then need to worry about is the one that does the learning; once we have it, we feed the machine with learning material.

For example, in the case of Pong above, if one were to encode the rules of the computer opponent explicitly (e.g. if the ball's angle will take it to the top corner, move to the top), that would NOT be ML. If we instead show the computer data from previous games, indicating which action worked best in which situation, and then let the computer learn which actions to take when, we have ML. A minimal sketch of both approaches follows below.
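To make the contrast concrete, here is a minimal, hypothetical sketch in Python (not taken from the original Pong, of course). The rule-based opponent hard-codes the behaviour, while the ML opponent fits a small decision-tree classifier to logged (game state, best action) examples; the synthetic logged data below merely stands in for records of previously played games.

```python
# A minimal, illustrative sketch contrasting explicit rules with learning from data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# --- Approach A: explicitly encoded rules (AI, but not ML) ---
def rule_based_move(ball_y: float, paddle_y: float) -> str:
    """Hand-written rule: chase the ball's vertical position."""
    if ball_y > paddle_y:
        return "up"
    if ball_y < paddle_y:
        return "down"
    return "stay"

# --- Approach B: behaviour learned from example games (ML) ---
# Hypothetical logged data: each row is (ball_y, paddle_y); each label is the
# action that worked best in that situation in previously played games.
rng = np.random.default_rng(0)
logged_states = rng.uniform(0.0, 1.0, size=(1000, 2))
logged_actions = [rule_based_move(b, p) for b, p in logged_states]  # stand-in labels

learned_policy = DecisionTreeClassifier(max_depth=5).fit(logged_states, logged_actions)

# Both opponents map a game state to an action; only B learned the mapping.
print(rule_based_move(0.8, 0.3))                # -> "up"
print(learned_policy.predict([[0.8, 0.3]])[0])  # -> "up" (learned from data)
```

Both produce a paddle action for a given game state, but only the second discovered that mapping from examples rather than having it spelled out.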

There are situations where ML is appropriate and situations where it is not. If we have a fixed, manageable set of rules, ML might not be the best approach; if we instead face a highly unstructured, ill-defined problem, we might opt for ML. For example, creating a program to do arithmetic (i.e. a calculator) is much more easily done in the traditional, explicit way. The rules of arithmetic are well known and easily encoded, so it wouldn't make sense to first try to learn them. You can indeed write an ML algorithm that learns how operators (e.g. + or -) work from examples, but that would be overkill and likely less accurate, as sketched below. On the other hand, something like face detection is much better handled using ML. Trying to explicitly write all the rules that define what an image of a face looks like would be very impractical: besides the fact that people are different, you also have to worry about lighting, angles, distance from the camera, facial expressions, accessories, glare and so on. In that case, it makes far more sense to have the computer learn from data what a face is - ML is well suited to such a task.
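As a hypothetical illustration of that overkill, the toy sketch below 'learns' the + operator from examples using a linear regression. It only recovers addition exactly because addition happens to be a linear function of its inputs; the explicit one-line version is plainly the right tool here.

```python
# A toy sketch: learning the '+' operator from examples vs. encoding it directly.
import numpy as np
from sklearn.linear_model import LinearRegression

# Explicit approach: the rule of addition, encoded in one line.
def add(a: float, b: float) -> float:
    return a + b

# ML approach: fit a model to (a, b) -> a + b examples.
rng = np.random.default_rng(0)
pairs = rng.uniform(-100, 100, size=(500, 2))  # training inputs
sums = pairs.sum(axis=1)                       # "labels" produced by the true rule

model = LinearRegression().fit(pairs, sums)

print(add(3, 4))                   # -> 7, guaranteed by the encoded rule
print(model.predict([[3, 4]])[0])  # -> ~7.0, learned from examples
# The model recovers '+' only because addition is linear; a non-linear
# operator would need a different model class and would not be exact.
```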

ML is also very good at eliminating certain biases (keeping in mind that ML will inherit whatever biases are encoded within the data). For example, an economist can derive an economic model from well-researched knowledge about how different economic indicators affect each other in a certain context, and that model can be encoded into a computer program using traditional, explicit means (i.e. without ML). If we instead take an ML approach and simply feed the machine historical data, we can uncover previously hidden trends, patterns and relationships between indicators, because the machine is blind to any biases the economist's model might have introduced. This doesn't mean the ML model is absolutely better than the economist's model; as mentioned above, ML introduces its own biases from the data. If the historical training data came from a period of global economic depression, for example, the model is likely to be very inaccurate once that depression is over - a toy illustration follows below.
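To illustrate that failure mode, here is a hypothetical sketch on invented data: a model fitted on synthetic 'depression era' data does well on its own era but badly once the regime changes. All the numbers and relationships are made up purely for illustration.

```python
# Toy illustration of data bias: training and deployment regimes differ.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def make_regime(slope: float, n: int = 300):
    """Synthetic indicator -> outcome data; `slope` encodes the regime's relationship."""
    x = rng.uniform(0, 10, size=(n, 1))
    y = slope * x[:, 0] + rng.normal(0, 0.5, size=n)
    return x, y

# Depression era: the indicator depresses the outcome (invented relationship).
x_train, y_train = make_regime(slope=-2.0)
# Recovery era: the relationship has flipped (also invented).
x_test, y_test = make_regime(slope=+1.5)

model = LinearRegression().fit(x_train, y_train)

print(mean_absolute_error(y_train, model.predict(x_train)))  # small: fits its own era
print(mean_absolute_error(y_test, model.predict(x_test)))    # large: wrong regime
```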

The difference between artificial intelligence and machine learning

Given that Artificial Intelligence is all about encoding or mimicking natural intelligence, Machine Learning is a learning-based approach to achieving that same goal. That makes ML a subset of AI: one that tackles AI by learning rather than by explicit rules.

Machine Learning is a learning-based approach to Artificial Intelligence

Machine learning is a crucial part of Artificial Intelligence, as it is currently the best bet for achieving the holy grail of AI - Artificial General Intelligence (AGI). As mentioned above when discussing AGI, one of its main characteristics is cross-domain transferable knowledge. There is a flavour of ML called Transfer Learning that deals explicitly with using ML to learn how to solve one problem and then transferring those learnings to a different problem (i.e. one the machine had not initially learned to solve), as in the sketch below.
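As one concrete (and hypothetical) illustration, the sketch below reuses a network pre-trained on a different problem - ImageNet classification, via Keras' MobileNetV2 - as the starting point for a new task with NUM_CLASSES invented categories. It assumes a TensorFlow/Keras environment and shows just one common form of transfer learning, not the only one.

```python
# Minimal transfer-learning sketch (assumes TensorFlow is installed).
import tensorflow as tf

NUM_CLASSES = 5  # hypothetical new task, e.g. classifying 5 kinds of machine parts

# 1. Reuse knowledge learned on a different problem (ImageNet classification).
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze the transferred knowledge

# 2. Add a small head that will learn the *new* problem.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 3. Train only the new head on the new domain's (hypothetical) data:
# model.fit(new_task_images, new_task_labels, epochs=5)
```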

Going back to computer games: when building a computer opponent, you can do it in one of two ways:

  • A: Explicitly encode all the rules (static or dynamic) so that the computer has a set way of responding to a myriad of different game states. E.g. in Pong: rules that tell the computer to move up, move down or stay put.
  • B: Have the computer learn or discover the optimal game behaviour by 'looking' at many examples of games that have been played (keeping in mind that the data would have to somehow indicate which moves are favourable and which are not).

Both are examples of Artificial Intelligence (as intelligence is being encoded/mimicked) but only B is Machine Learning because it takes a learning-based approach.

This is, of course, a clear-cut example of the difference, but many grey areas do exist. There are also schools of thought that do not see Machine Learning as strictly a subset of Artificial Intelligence. In my opinion, that school of thought is not necessarily wrong; however, defining ML as a subset of AI is the most practical and largely accurate standpoint. This is the same way Newton's laws are accurate and practical enough for everyday use even though they are not absolutely accurate (for example, they fail at very small scales - hence the field of quantum mechanics).

Palota is a company that designs and develops innovative digital products. Let's collaborate - contact us at [email protected].

https://www.dhirubhai.net/company/palota/

https://palota.co.za


要查看或添加评论,请登录

Kholofelo Michael Moyaba的更多文章

社区洞察

其他会员也浏览了