Understanding Intelligence: Humans vs Machines - Part 2
In the previous article I discussed two different types of human decision-making process. System 1 is always on, instinctive, and based on experience and memory, while System 2 requires concentration and calculation. In this article I will discuss their artificial equivalents, and some of the questions they raise.
Procedural Programming
When considering human decision making systems, it is clear that System 2 is what we would historically consider to be the 'higher' form of intelligence. It requires dedicated study and relies heavily on the transmission of prior learning from one generation to the next through books and education - things that no other animal has ever been capable of.
However, in IT there is a general perception that the 'System 2' equivalent, procedural programming, is the less advanced form, and there are various reasons for this. In procedural programming we achieve results by developing algorithms in which a series of pre-determined steps is applied to knowledge (data) to produce an outcome. When an appropriate situation arises we call on the procedure, feed in the relevant parameters and get our answer.
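To make the idea concrete, here is a minimal sketch in Python of a procedural decision routine. The loan-approval scenario, the thresholds and the field names are purely illustrative assumptions, not taken from any real system; the point is that every step is fixed in advance by the programmer.

```python
# A hypothetical, purely illustrative procedural decision: every rule and
# threshold below is pre-determined by the programmer, not learned from data.

def approve_loan(income: float, existing_debt: float, credit_score: int) -> bool:
    """Apply a fixed series of checks to the input data and return a decision."""
    debt_ratio = existing_debt / income if income > 0 else float("inf")
    if credit_score < 600:   # step 1: minimum credit score
        return False
    if debt_ratio > 0.4:     # step 2: debt must stay under 40% of income
        return False
    return True              # all pre-defined conditions satisfied

# When the situation arises, we call the procedure with the relevant
# parameters and get our answer.
print(approve_loan(income=50000, existing_debt=10000, credit_score=680))  # True
```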
Computers have some huge advantages over humans when it comes to System 2 - they don't get tired, they don't get distracted, and they never forget either a fact or a step in a process. The main problem is that in order to produce an effective program, a human has to first develop an effective algorithm. This approach is well suited to 'closed' problems, where we can control all of the variables, but works less well in more fluid circumstances.
If we don't have an appropriate procedure available, or don't have the required data, then the program will not produce useful outputs.
Procedural programming is limited by the programmer's understanding of cause and effect, and by their ability to create algorithms that accurately reflect the nuances of a complex world.
Pattern Recognition and Machine Learning
But what about System 1? This is prima facie akin to a database search - we feed in a broad set of parameters, check them quickly against a list of prior examples and return a match or not. But that analogy doesn't quite fit, because System 1 also makes judgements about situations it hasn't exactly encountered before, but where it recognises key similarities to previous situations.
System 1 is actually very analogous to machine learning (variously known as data mining, artificial intelligence, data science etc.). In machine learning we apply an algorithm to a training data set, using mathematical and statistical methods to 'learn' the key elements of the patterns that lead to particular outcomes. The output is a 'model' into which we can input new data to predict the relevant outcomes.
Essentially, with machine learning/AI we are using a procedural program to create new procedural programs that we could not easily have created ourselves. In doing so we sidestep the need to rationally determine a definite correct answer based on an understanding of the causal mechanisms, instead relying on an 'AI' system whose operations we usually do not fully understand, but which we know from experience is generally trustworthy - much like our own intuition.
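By way of contrast with the earlier procedural sketch, here is a minimal machine-learning example. It uses scikit-learn purely for illustration, and the features, training data and labels are invented assumptions; the point is that the decision rules are learned from labelled examples rather than written by the programmer.

```python
# A minimal, purely illustrative machine-learning sketch: the 'rules' are
# inferred from hypothetical labelled examples rather than coded by hand.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [income, existing_debt, credit_score] per applicant,
# with a label recording whether the loan was repaid (1) or not (0).
X_train = [
    [50000, 10000, 680],
    [30000, 20000, 550],
    [80000,  5000, 720],
    [25000, 15000, 590],
]
y_train = [1, 0, 1, 0]

# 'Training' produces the model - in effect a new decision procedure
# that we did not write ourselves.
model = DecisionTreeClassifier().fit(X_train, y_train)

# New data goes into the model to predict the relevant outcome.
print(model.predict([[45000, 12000, 640]]))
```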
Again, computers have advantages when it comes to this type of learning and decision making - primarily that they can absorb and analyse much more information than people, and can do so more reliably and more accurately. Computers are also not naturally vulnerable to typical 'System 1' pitfalls like bias and over-confidence (although they are vulnerable to incorporating biases present in the training data).
As most human decision making is 'System 1' based, this kind of AI might actually outperform human 'System 1' thinking in an ever increasing number of situations. Indeed there is already evidence that machine-learning based systems produce more accurate results than humans in rapid judgement-based processes like CV screening and parole hearings.
The accelerating transition of human activities like shopping and dating to the digital realm has led to the production of huge volumes of data about human behaviour that has never been available before. There is huge scope for the application of machine learning to this data and these activities to create solutions that were previously impossible.
'Chat-bots' of yore used procedural logic to develop answer-response simulations in attempts to pass the Turing test. Now they use AI models and the records of billions of actual human conversations to generate realistic, dynamic speech that will fool most casual observers.
True AI?
Many situations are so complex, involve data sets too large for a human to analyse, or lack key information, making it impossible to produce an effective solution through traditional procedural programming. In these cases applying machine learning techniques to develop a memory- or pattern-recognition-based decision-making model is an appropriate alternative, but can these solutions really be called intelligent?
In many ways, yes they can. However, they are still a long way short of human intelligence, which, for the moment at least, has a depth, breadth and flexibility that no machine comes close to matching.
In addition, the really valuable 'hard' thinking that people do lies in coming up with the rules that facilitate System 2, or in deciding to ignore their conditioning and training to cope with new situations. It is true that not many of us will come up with important mathematical formulae, but we all come up with our own solutions to the problems of everyday life.
A frequent comedic trope in movies about AI is the android character who can calculate pi to a million digits but can't figure out how to handle an everyday situation, like amusing a bored child.
Computers can consistently outperform humans at complex tasks like playing chess. Given the rules of a game, systems developed by Google's DeepMind can actually teach themselves the best ways to play, but even the best computers would be stumped by a new game that they have not seen before and for which they do not know the rules. By contrast, a human can work out the rules of a game by observation - i.e. we do not merely observe and mimic behaviours, we intuit the reasons for those behaviours, and can then apply the rules to future situations that we have not yet encountered.
To achieve true AI, we would need computers to assess the available information, develop and test theories, and ultimately build their own reliable causal models and procedural solutions. This may happen one day, but we are still a long way from it.
Conclusion
AI and machine learning programs are not 'intelligent', but they are very cleverly designed tools, and their increasing popularisation represents a paradigm shift of potentially enormous magnitude. Electric light and light bulbs had been in use in isolated situations for around 70 years before Edison came along and popularised them to the point that 'only rich people could afford to burn candles'. The general principles of machine learning have been around for an equally long time, but are now reaching the tipping point where AI has similar potential to disrupt and energise society.
In the next part, I will discuss different types of AI, distinguishing between Strong and Weak AI, and looking at where the cutting edge lies at the moment and what the future holds.
John Thompson is a Managing Partner with Client Solutions Business Intelligence and Analytics Division. His primary focus for the past 15 years has been the effective design, management and optimal utilisation of large analytic data systems.