Worried about AIs taking your job? Here are four things that AIs can’t do better than humans.
Duncan R Shaw
30 per cent of jobs in finance and insurance, and 50 per cent of all clerical roles, could be replaced by 2029 (FT.com).
And virtually every job will look very different from the way it does now. This degree of skills redundancy and change is deeply unsettling for anyone trying to plan their career.
The problem is made all the more unsettling because the focus in the news is mainly on what AIs can do. And that indicates that even skilled roles like accountancy and law will be largely redundant. The solution, whether looking at future investments in tech and people, or planning our own individual career and CPD paths, is to turn this question on its head. Instead of becoming overwhelmed by the myriad of things that AIs can do, the more useful question is: What is it that AIs cannot do?
Don’t just focus on what AIs do better than humans. Focus on things that AIs find hard to do and then develop your skills in these areas.
There’s been a lot of interest in how AI and machine learning technologies have recently become really good at spotting patterns in data. AIs have started to outperform humans in quite a few areas. This has led to a few worries about AIs taking away jobs as these new technologies refashion how organisations work and the skills that we will need.
But what if we looked at this the other way around? What if we looked at the things that AIs are not good at? Which activities will AIs not be able to do for quite some time?
We know that AIs are very specialised: they depend upon the correct training data and they do not understand the broader context of their tasks. This means that there are certain things that they find hard or cannot do at all. In fact, there may always be some things that we cannot teach AIs to do.
For example, AI systems do not have any real intuition, they are not genuinely creative, and they are somewhat lacking in soft skills.
Here are four skills that AIs may never master. Maybe you should start to hone these skills and become known in your firm as a guru in them?
Skill 1: Intuition – instant, effortless insight, seemingly out of nowhere
AIs are very good at predicting things based on data which records events that have happened in the past. They use repeating patterns of past events which are likely to be followed in the future. This seems like intuition, but it only works for situations that AIs have been trained to deal with.
New events like Black Swan situations or even new permutations of past occurrences are impossible to spot in this way. AIs cannot predict new events which are not part of the pattern that they are following. If an event was not covered by the training data which produced the pattern, then it cannot be predicted.
For example, an AI cannot be trained to recognise a completely new disease until there is training data which includes symptoms of that disease, labelled with that disease’s name.
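The point can be sketched with a toy nearest-neighbour classifier. The symptoms, labels and data below are entirely hypothetical; what matters is that the model's answers are limited to the labels it was trained on, so a genuinely new disease can never be its output.

```python
# A 1-nearest-neighbour "diagnoser" trained on three known diseases.
# (All symptom vectors and labels are made up for illustration.)

TRAINING_DATA = [
    # (symptom vector: [fever, cough, rash], diagnosis label)
    ([1, 1, 0], "flu"),
    ([0, 1, 0], "cold"),
    ([1, 0, 1], "measles"),
]

def predict(symptoms):
    """Return the label of the closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(TRAINING_DATA, key=lambda ex: distance(ex[0], symptoms))
    return label

# A patient with a symptom pattern the model has never seen is still
# forced into one of the three known diagnoses -- "new disease" is
# simply not in its vocabulary of possible answers.
print(predict([0, 0, 1]))  # prints "measles", the nearest known pattern
```

The same limitation holds for far more sophisticated models: whatever the architecture, the set of possible outputs is fixed by the training labels.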
Intuition is knowing something without thinking it through or knowing why you know it. It’s an example of Nobel prize winner Daniel Kahneman’s System 1 thinking. When humans use System 1 thinking it is fast, almost effortless, automatic and instinctive. We also use another mode of thinking, which he called System 2 thinking. This is where we cautiously and logically think things through.
Intuition is System 1 thinking, and in humans it’s based on the vast detail of our personal experience: experience which is held within, and influenced by, the functioning of the neural networks in our brains.
But AIs do not have the richness and coverage of a lifetime of experiences. They do not have millions of years of evolution to optimise how these experiences are recorded and processed. They just have training data and software, some of which emulates a brain’s neural networks in a simplified way.
For example, humans get the feeling that something does not add up, or we hold patterns from our past experiences that tip us off that something might be worth checking in a bit more detail. AIs just carry on until they hit their limits.
Intuition helps us see patterns based on the sum of our experiences, which are way outside of any specific problem. But for an AI “experience” is only as broad as the data it is trained on.
The patterns that neural network AIs form with their software are a bit like System 1 thinking because they are simplified but useful relationships found within vast training datasets. They are “cheat sheets”, much less complicated than the training data but still helpful.
The problem, though, is that these patterns can only suggest what they have been trained to recognise from limited training data. Humans have much broader training: we are trained by the everyday experiences of our whole lives.
Skill 2: Creativity – original ideas and solutions, not recycling
Creativity requires imagination to invent something new and original. But AIs cannot invent; they can only recycle.
It may be that human creativity is also just reusing our past experiences and patterns of knowledge. After all, new ideas must come from somewhere. Maybe the root of creativity is insights stolen from our past experiences, misremembered sensations and even random neurological noise. Certainly, different experiences give us a new perspective on a problem, which can help us to generate new ideas.
As with intuition, human creativity benefits from us having much more training data than any AI. We have more patterns of experience, and we form those patterns in different ways from AIs. We hold all this in our neural networks, and it covers far more situations than AIs are trained for. These situations interrelate and provide connections, which generate new perspectives and ideas.
For example, a feeling from playing with Lego bricks in childhood might suggest ways of fitting solutions together for a social problem at work. Remembering the progression of an old pop song might lead to new ideas about how to design the look of a new product.
A rich lifetime of past experiences gives humans powers of serendipity that AIs do not have. We get ideas for solutions from experiences that are completely outside of any specific problem domain, or the training we had to solve it.
Skill 3: Critical thinking – powers of complex reasoning, not following patterns
AIs are brilliant at handling complicated datasets. But as soon as a complicated problem grows multiple dependencies it turns into a complex problem, which AIs find tricky. A complicated problem has lots of parts, but they do not interrelate very much. So the problem’s overall structure is relatively simple and stable.
A complex problem can have few parts, but if they interrelate a lot then these links create multiple dependencies. If “this” depends on “that”, which depends on “something else” then it is a complex problem. A complex problem has chains of dependencies that generate unmanageable numbers of “what ifs”. It’s the number of “what if” permutations that makes complex problems impossible to model in a useful way.
For example, look at driverless cars. There are way too many dependencies, too many “what ifs”, maybes and important relationships to plan for. Driverless cars don’t just need to recognise objects, they need to recognise objects from any angle. And some of these objects need to be avoided, some need to be aimed for and many will be under the control of panicky and fallible humans. Some objects that need to be avoided will turn into objects that need to be followed, but not too closely and not for ever. It all depends …
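The scale of the problem is easy to sketch. Assuming, purely for illustration, a system of n fully interacting parts where each part can be in one of k states, the number of joint "what if" scenarios is k to the power n, so it is the links between parts, not the part count, that makes the problem blow up:

```python
# Rough illustration with made-up numbers: counting "what if" scenarios.

def what_if_scenarios(parts: int, states_per_part: int) -> int:
    """Joint configurations when every part can affect every other part."""
    return states_per_part ** parts

# 10 independent parts with 4 states each: only 10 * 4 = 40 cases
# to check one at a time.
independent_cases = 10 * 4

# 10 *interdependent* parts: every combination matters, so 4**10
# joint scenarios -- over a million.
joint_cases = what_if_scenarios(10, 4)

print(independent_cases)  # 40
print(joint_cases)        # 1048576
```

Doubling the number of interacting parts squares the scenario count again, which is why complex problems quickly become impossible to model exhaustively.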
AIs do not use complex reasoning or critical thinking to find patterns in their training data. The people who wrote the AI software do that, and every new complex problem needs these software developers to do it again. Although some firms like Vicarious are working on general AI software which uses higher-level conceptualisations of problems to give it much more flexibility.
After humans intuitively notice a problem in the first place, we use complex reasoning and critical thinking skills to study it, define it and try to solve it. These skills include analysing and interpreting data, and explaining things based on evidence. They give us the ability to interpret ideas and information, to attribute causes, and to plan and evaluate recommendations. Key critical thinking skills also involve verbal reasoning and the ability to decode and interpret the underlying meaning within language.
AI software cannot yet use complex reasoning or critical thinking to develop a fundamental understanding of a problem. The software developers still do that. Current AI technologies mostly use “bottom-up” techniques to sift lots of data and look for patterns, patterns which are useful for very specific problems and are not very flexible.
Skill 4: Empathy and human judgement – personal understanding gives us the authority to offer personalised advice
Before I became an academic, I worked for Motorola. After a year or so they promoted me from being a type of engineer to being a manager. So, I had to develop my soft skills. My MBA had covered things like communicating, working in teams and managing people, but it was still a big jump.
I learnt that getting the best out of people means listening to them, understanding how they feel, seeing what they need and gaining their trust. But AIs cannot understand human emotions; they have no empathy.
AIs can appear to turn on their sensitive side, and they can even spot some emotions from the tone of people’s voices. But they are only matching current circumstances to past ones and then acting as they were told to. This approach will not work in situations where there is no prior match or no previously set action.
Empathy enables us to do two things. The first is to recognise emotions, their causes and their implications. And frequently we can do this from having been there ourselves. We understand the personal context of emotions, how the individual situation of the person experiencing the emotions influences the emotion and what to do about it.
The second thing that empathy enables us to do is to be credible enough for our recommendations for actions to be followed.
AIs never experience emotions themselves, so they cannot share and understand a person’s feelings. In effect, this is training data that they can never get access to. AIs are only trained on the external results of emotions, not on how an emotion feels internally or how those feelings drive seemingly unrelated decisions and actions.
There are several implications of this. First, emotions are uniquely personal. In addition to our mental states, they are based on individual personal circumstances, and these are too unique to be fully recorded in training data. Which aspects of a person’s past life and current environment are significant enough to be recorded? So, for a long time, AIs will only be able to recognise and act on the most obvious, high-level emotions.
Second, empathic advice from an AI lacks credibility … unless the AI pretends to be human. How can an AI encourage people when part of that encouragement is confidence in the authority of the advisor? Inspiring the troops, empathising with customers and developing talent all require human credibility.
And finally, of course, there is the old point about correlation not causation. Data which records the external results of emotions provides patterns of associations, not patterns of causes. Correlation cannot say why something happens. So without empathy there is no way to understand the root cause of emotions.
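A small simulation makes the trap concrete. The variable names and synthetic data below are made up for illustration: two quantities that never influence each other can still be strongly correlated, simply because a hidden third factor drives both.

```python
# Synthetic example of correlation without causation.
import random

random.seed(0)

# Hidden confounder: a person's stress level (never recorded in the data).
stress = [random.random() for _ in range(1000)]

# Two observable "results" of that hidden state -- neither causes the other.
raised_voices = [s + random.gauss(0, 0.1) for s in stress]
missed_deadlines = [s + random.gauss(0, 0.1) for s in stress]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Strong positive correlation, yet no causal link between the two:
# a model trained only on these observations cannot tell the difference.
print(round(pearson(raised_voices, missed_deadlines), 2))
```

An AI seeing only the two observable columns would happily learn that raised voices "predict" missed deadlines; only an understanding of the hidden cause, which never appears in the data, reveals why.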
Human limitations complement AI limitations
The bottom line is that human intuition uses much wider training data than AIs have access to, and human creativity draws on training data which might seem unrelated to the problem.
Also, humans use complex reasoning and critical thinking as a reflex, to understand and to evaluate why something is as it is. But AIs need to be directed to do this.
Lastly, humans have their own emotional experience to draw on to recognise problems and to engender trust and confidence, when they give advice.
The fundamental ways in which most AI technologies work limit their abilities to emulate these four skills. These limits come from how training data is gathered and prepared, the techniques used to find patterns in that data, and the ways that those patterns are employed.
My discussion here is mostly aimed at neural net AIs rather than other types of machine learning, but the basic ideas about limitations in training data and how that training is designed apply to most of the AI technologies that we use right now.
However, humans also have fundamental limitations in their abilities to gather, process and interpret information. They are just different limitations from those of AIs.
So there will be a need for complementary human capabilities for some time. But you will need to hone these four uniquely human skills.
Duncan is a lecturer at Nottingham University Business School. He also advises organisations on creating value with digital data and he writes in his own blog.
Connect with me on LinkedIn at www.dhirubhai.net/in/duncan-r-shaw-7717538.