The Future of Work (Part 1)
There's been a huge amount of coverage in the business press about the imminent flood of artificial intelligence and advanced robotics into the workplace, and the displacement of human jobs that could follow.
Is it a bit premature for us to be thinking about a future where the machines have taken over? After all, we've been working alongside technology for many years, yet our working hours are basically unchanged, and there's little sign of us reaching Keynes' predicted 15-hour working week.
Or is it hubris of the highest order to dismiss these concerns? We're currently living through the economic and political consequences of the changes globalization has brought to work in the western world – wouldn't ignoring the disruption that wide-scale automation could bring be extremely shortsighted?
If the threat from automation plays out as many believe it will, there are a few key reasons why this next wave of workplace innovation could be very different from the ways people have incorporated technology into their jobs in the past – and why it could fundamentally change our existing ideas of work and reward.
Technology & Labor - A Potted History
Humans and technology have long gone hand-in-hand – and with the modern smartphone, I mean that quite literally! Whether it's our distant ancestors using a bow-and-arrow for more effective hunting, a steam engine transporting goods further and faster, or a supercomputer running simulations of the early universe – these are all variants of the same broad theme: tools and technology help us be more efficient.
As is their wont, economists have a model for the interaction between people, technology and economic output - the Solow-Swan Model. Although simplistic, this model highlights one of the key assumptions on which we’ve relied in our historical relationship with technology - that there is a requirement for human labor and the tools don't do anything on their own. The bow-and-arrow might help you kill dinner, but it doesn’t chase it for you.
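In its standard textbook form (sketched here purely for illustration), the model combines capital and labor in a production function where the labor term makes this assumption explicit – set labor to zero and output collapses to zero, no matter how much capital or technology you have:

$$ Y = K^{\alpha}\,(A\,L)^{1-\alpha}, \qquad 0 < \alpha < 1 $$

Here Y is output, K is capital (the tools and machines), L is human labor and A is the level of technology. Technology makes labor more productive, but in this framework it never replaces labor outright.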
The last time technology threatened to disrupt the workplace on a scale similar to today was probably the Industrial Revolution. With the haze of historical distance, we tend to think of this period as a boom of rapid innovation, but it actually played out relatively slowly.
Take the textile industry, one of the key pillars of the Industrial Revolution: production was transformed from a small-scale cottage industry to a mechanized one over the course of around 70 years – longer than an entire working lifetime.
This highlights another key assumption from our historical relationship with technology – that change happens slowly. When change is gradual, we are able to adapt and accommodate it: you can either learn the skills to master the new machinery or rely on transferable skills to find alternative employment.
So what happens when these key assumptions from the past - that human labor is required to create economic output and that change happens slowly – are challenged?
Removal of Human Labor
Up until very recently, the computer programs and robots found in the workplace have been designed with specialization in mind – they have a narrowly defined range of inputs, processes and outputs.
Take, for example, the automated robots used in car manufacturing. You could not take the robot that applies the final paint job and get it to weld an axle without some serious and costly modifications. This is where humans have retained a distinct advantage – we're better at tasks that involve variability or require some contextual decision-making.
In this respect, the ‘blue collar’ technological threat comes from advances in robotics that widen their application from specific to general tasks. You may have seen the videos produced by Boston Dynamics to showcase the capabilities of their robotic workforce. I don’t think it takes too much of a leap of logic to see these machines, or a subsequent iteration of them, replacing large numbers of manual workers in warehouses or factories.
The ‘white collar’ workplace is not immune to technological advances either. Computers have traditionally been used for simple or repetitive tasks – specialized software for specific purposes. But recent advances in AI and its use in the real workplace – including reviewing commercial mortgage terms and medical insurance claims – show that, through machine learning, computers are getting much better at acquiring knowledge and general problem-solving skills in a way similar to how humans do. In fact, recent changes to Google Translate suggest that machine learning can be more creative and efficient than human learning.
In the past we have relied on the assumption that human labor was required to hunt for food or create widgets. But with recent advances in robotics and AI, those widgets could reasonably be produced, stored and transported with no human labor involved at all, with the whole process managed and optimized by AI. And what do the displaced human workers do then?
I guess we just go and get new jobs, right?
Pace of Change
In the past, the pace of technological change has been relatively slow. By ‘slow’ I mean that humans have had sufficient time to learn the new skills required to operate the new technology or, where they have been entirely replaced, have been able to find new employment through a combination of transferable skills and some new specific ones – ideally more productive, higher-paid and more enjoyable employment.
There is an opportunity cost of time and money associated with humans acquiring new skills. In general, people are willing to incur this cost with the implicit understanding that it is ‘worth it’ in the long run – i.e. that in the future, these new skills will earn them more than they have forgone to acquire them.
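To make that trade-off concrete, here is a deliberately simple break-even sketch – the helper function and all of the figures are hypothetical, purely for illustration:

```python
# Rough, purely illustrative break-even calculation for retraining.
# All figures below are hypothetical assumptions, not data from this article.

def breakeven_years(training_cost, forgone_wages, old_salary, new_salary):
    """Years of work needed before the new skills pay back their cost."""
    upfront_cost = training_cost + forgone_wages   # money spent plus earnings given up
    annual_gain = new_salary - old_salary          # extra income per year afterwards
    if annual_gain <= 0:
        return float("inf")                        # the investment never pays off
    return upfront_cost / annual_gain

# e.g. a $30k course plus $40k of forgone wages, moving from a $50k to a $60k salary
print(breakeven_years(30_000, 40_000, 50_000, 60_000))  # -> 7.0 years
```

On these made-up numbers, it takes seven years of higher earnings just to get back to where you started – and that only works if the wage premium for the new skills survives that long.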
So what happens when the pace of change is so fast and the scope so wide, that you are not able to recoup the opportunity cost of acquiring new skills?
Say, for example, that 10 years ago you decided to study languages – incurring student debt for a degree, and maybe spending time living in a foreign country to really immerse yourself and refine your skills to a high standard.
Then Google launches a new version of Translate, complete with neural-network machine learning. People are now able to obtain professional-quality translation for free from the smartphone in their pocket, completely undermining your ability to reap the rewards of the investment you made in your own skills.
With technology threatening numerous parts of the production chain across both ‘blue collar’ and ‘white collar’ industries, the example above potentially extrapolates across a huge range of jobs and industries. Can we reasonably expect people to incur the opportunity cost of learning skills when there is a reasonable likelihood that they will never realize the rewards?
So what do we do?
It’s clear that new technological developments stress-test our current assumptions about how technology and people interact to make things and provide services. They will also provoke deep changes in attitudes to learning and work – and as a result, the incentives and rewards need to change as well.
In Part 2, I will outline some of the potential changes to reward structures that could arise as a result of widespread technological disruptions in the workplace.