AI and the Future of Work
Iain Moffat
Helping organizations to go digital while staying human. Chief Delivery Officer at MHRGlobal and host of the podcast Leadership, the Future and Tea.
This is the first in a three-part series of blogs published in conjunction with our AI and the Future of Work white paper, which you can download from the link below: https://www.mhr.co.uk/blog/ai-and-the-future-of-work-1/#request-download
“Fifteen hundred years ago, everybody knew the Earth was the centre of the universe. Five hundred years ago, everybody knew the Earth was flat. And fifteen minutes ago, you knew that humans were alone on this planet.”
If you’re wondering where you’ve heard that quote before, it’s from Men in Black. Whether or not you liked the film, it offers a good illustration of how:
1) change keeps accelerating, influenced by technology;
2) our views can be drastically altered over time by new information; and
3) sometimes the shift in that view can be dramatic, turbulent and take time to settle.
A bit of history, a bit of philosophy
Artificial Intelligence (AI), or rather the concept of man-made objects taking on human-like cognitive abilities, is not new. While you may think AI was born in the latter part of the 20th century, its history actually stretches back hundreds of years. In the Middle Ages, mystical societies and alchemist movements were credited with placing “mind” within matter. In parallel with these mysterious goings-on, Chinese, Indian and Greek philosophers had been working on mechanising reasoning for even longer.
The concept of all rational reasoning being open to mechanisation is an ancient idea indeed.
However, the processing power – the resources necessary to achieve even a fraction of its utility for humanity – has taken some time to emerge. The birth of computer science through the 19th and early 20th centuries gave academia, industry and the military increasing capabilities to replicate computation and rational logic. This body of work emerged in the 1950s as the specific field of Artificial Intelligence.
In all cases, the artificial component focusses on rational logic.
If we look at how we humans work, how we think and decide to take action, there is a lot more going on. Dan Ariely has written a series of excellent books on what motivates us and shapes our decision-making. If you haven’t already, I would urge you to read Predictably Irrational in particular.
Mechanisation is something humanity has been working on for a very long time, and, over the long-term, has benefited all of us.
So what’s all the fuss regarding AI?
AI holds much promise; it has the power to fundamentally change perceptions around work, life and society – and with such change comes both threats and opportunities.
The confluence of increased computing power, big data, the accessibility of devices, and our willingness to share information and experiences means machines can learn from a wealth of data, patterns and outcomes.
The move from mere automation to machine learning is significant. However, AI is currently weak – it can perform specific operations aligned to particular problems, but it is inadequate when it comes to general learning and adaptation to new areas. Strong AI is limited in availability, confined to the largest, most complex and isolated computing platforms.
For the majority of us, access to AI in the next five to ten years is likely to be restricted to the weak variety only.
While we may pride ourselves on being up to date with the latest tech, that familiarity is generally based on personal use; technology in the workplace usually runs a generation or two behind. So while most of us are happy talking about how we use the latest gadgets in our personal lives, when we turn our attention to technology in the workplace we are often far less comfortable.
Technology in the workplace has a dramatic effect on our productivity; however, it is often met with significant resistance to change, teething problems at introduction and, in the longer term, fears around job displacement.
The vast potential of AI in the workplace, coupled with the unknowns around when this new technology will be adopted and where it will end up, is driving polarised views of either great opportunity or great threat.
The potential for AI to mature quickly through self-learning is a compelling proposition. But AI is about rational, logic-based activities – it is ideally suited to both transactional and highly structured work, as demonstrated by its introduction into areas such as customer support as well as within the legal profession around discovery work.
One thing is clear: AI has the potential to become super-intelligent and super-competent.
If we consider how we have all learnt during our lives – from our parents, mentors and peers – actually having a super-intelligent and super-competent resource available to us would be a fantastic opportunity. However, that intelligence and competence is only beneficial if both the provider of those resources and the receiver are working towards a common goal.
An AI with your goals in mind is a positive, compelling proposition indeed. But how good are we really at setting goals?
Consider this: what if I asked my AI assistant to help me achieve the goal of “getting me to lose 16 lbs before Christmas”? There is a multitude of ways of achieving this goal, only some of which would leave me alive, healthy and in one piece.
Now consider this: I ask my AI assistant to “get me to Jack’s party at the other side of town for 16:00,” then in the middle of the day some of my priorities change, and I get delayed. If that goal isn’t cancelled, updated, or informed by fast-moving priorities, the journey may end badly.
Setting clear goals isn’t always easy with our human colleagues, but with AI it requires a whole new level of precision.
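To make the point concrete, here is a minimal, purely illustrative sketch in Python. The Goal structure, constraint strings and acceptable() check are all my own assumptions, not any real assistant API; the sketch simply shows why a vaguely specified goal admits plans we would never want, while spelling out constraints narrows the assistant to acceptable ones.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    """A goal expressed with explicit constraints, not just a target."""
    description: str
    deadline: str
    constraints: List[str] = field(default_factory=list)

# An under-specified goal: the "how" is left entirely to the assistant.
vague = Goal(description="Lose 16 lbs", deadline="Christmas")

# A better-specified goal: the constraints rule out harmful or unwanted plans.
precise = Goal(
    description="Lose 16 lbs",
    deadline="Christmas",
    constraints=[
        "keep daily intake above 1,500 kcal",
        "no more than 1.5 lbs lost per week",
        "re-confirm the plan with me if my priorities change",
    ],
)

def acceptable(goal: Goal, plan_respects: List[str]) -> bool:
    """A plan is acceptable only if it respects every stated constraint."""
    return all(c in plan_respects for c in goal.constraints)

print(acceptable(vague, []))  # True: with no constraints, any plan passes
print(acceptable(precise, ["keep daily intake above 1,500 kcal"]))  # False: two constraints unmet

The same idea applies to the party example above: a deadline alone is not enough unless the goal can also be updated or cancelled as priorities change.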
In the next two blogs, we will look at the arguments for and against AI. But before we finish this blog, I would like you all to consider the small set of questions below:
- How far away are we from strong AI (i.e. general and powerful learning machines that can adapt) being used in our day-to-day lives?
- How far away are we from seeing weak AI used in our day-to-day lives?
- How do we think these technologies will affect our work and leisure lives?
- Do we foresee threats or opportunities?
- How well do you think you set goals?
- How good do you think you’ll need to be at goal setting in the future?